With Gaetano Di Chiara's contribution, "Scienza in rete" relaunches the debate on the criteria used to assess research, an issue of real importance for the future of universities and research in our country. We invite readers to join the discussion by leaving their comments.
With the intention of "promoting and supporting the improvement of the quality of state universities' activities and the more effective use of resources" (Law of 9 January 2009, paragraph 1), 474 million euros have been distributed among universities on the basis of research assessment. Of this sum, 70% was allocated according to participation in national, European and international public funding programmes (PRIN and FIRB).
The truth is that measuring the quality and productivity of scientific research by the amount of public funding obtained would be like assessing the effectiveness of waste disposal by the funds earmarked for that purpose: Naples would no doubt win gold! The same argument applies to healthcare and so on. This is why the reward-based allocation of funds to universities is a perfect example of how sound merit policies depend on the application of suitable assessment criteria: if the criteria are misleading, the merit system can produce the opposite effect. In the case of research, assessment must start from the analysis of its products, namely scientific publications and even patents. Funding, as an assessment parameter, has only one real use with respect to products: in the end, it serves to calculate the cost/benefit ratio.
But how are products to be assessed? When selecting research projects for a funding programme focused on specific themes, as is the case with European research programmes or national strategic programmes, the current system is the "study session": a direct exchange among the members of an expert commission, which may also be international. This method clearly cannot be applied on a larger scale, for instance to assess the scientific output of a large number of works within the same disciplinary sector. This is precisely the situation faced by the CIVR, founded in 2004 to assess national research for the three-year period 2001-2003, and it is also the method used to select PRIN research projects. In this case, the assessment of each research product was entrusted to at least two anonymous reviewers, according to a plan and methods decided by an expert panel representing each of the 14 disciplinary areas, or by a guarantor committee. Once contacted, the experts presented their opinions to the panel and worked together anonymously via the Internet. The problems with this method stem from the extreme fragmentation of the assessments, which makes it difficult, if not impossible, to apply a uniform evaluation standard and, under the cover of anonymity, allows the most extreme subjective opinions to surface.
The problems with this procedure led, for research assessment on a national and international scale, to the introduction of methods based on bibliometric indicators, which measure the impact of works on the scientific community as quantified by citations. The oldest and still most widely used bibliometric index is the impact factor (IF), whose obsolescence as an indicator for evaluating research has purely practical causes: the availability of databases that freely provide citation data, and of software able to instantly calculate from those citations a series of parameters useful for assessing research.
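For readers unfamiliar with it, the impact factor of a journal for a given year is conventionally defined as the citations received in that year by items the journal published in the previous two years, divided by the number of citable items it published in those two years. The minimal sketch below, with purely illustrative function and parameter names, simply spells out that ratio.

```python
def impact_factor(citations_received, citable_items):
    """Two-year impact factor for year Y: citations received in year Y
    by items published in years Y-1 and Y-2, divided by the number of
    citable items published in Y-1 and Y-2."""
    return citations_received / citable_items

# Example: 600 citations in 2009 to articles published in 2007-2008,
# spread over 200 citable articles -> IF = 3.0
print(impact_factor(600, 200))  # 3.0
```

Note that this is a journal-level measure: it says nothing about how the citations are distributed among individual articles or authors, which is precisely why author-level indicators have gained ground.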
At present, three databases are available: two for a fee, ISI Web of Science (WoS) and Scopus, and one free of charge, Google Scholar, whose data can be analyzed with two different tools, Publish or Perish and a Mozilla Firefox add-on.
Among the parameters that can be derived, the most reliable is the author's Hirsch index (h): the largest number h of an author's works that have each received at least h citations. "h" is a genuinely brilliant idea, because it accurately captures the consistency and reliability of a researcher's impact on scientific production.
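As a minimal sketch of how the index can be computed from a list of per-paper citation counts (the kind of data that tools such as Publish or Perish retrieve from the databases above), with a function name that is purely illustrative:

```python
def h_index(citations):
    """Return the Hirsch index: the largest h such that the author
    has at least h works cited at least h times each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers cited 10, 8, 5, 4 and 3 times -> h = 4
# (four papers have at least 4 citations, but not five with at least 5)
print(h_index([10, 8, 5, 4, 3]))  # 4
```

The example also shows why the index rewards consistency: a single highly cited paper cannot raise h on its own, since every step upward requires another work with at least that many citations.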
There is no doubt that expressing the value of a lifetime of research, or the productivity of an entire institution, with a single number has its limits. Nonetheless, for assessing research the Hirsch index remains preferable to other parameters, such as participation in national or European projects, which the Ministry of Universities currently employs.
We are convinced that ranking institutions on the basis of an objective, transparent and accurate parameter such as h can contribute to establishing a genuine policy of merit in the financing of research and universities in Italy, and may eventually become the standard for doing so.