
Evaluating, even the USA is having a difficult time



Evaluating research isn't easy. Not even in the United States, a country considered a pioneer in this field. And not even for the US National Institutes of Health (NIH), whose evaluation system is considered among the best in the world. Only a small fraction of the projects funded by the American federal agency have proven to be excellent, and only a small fraction of the most influential American research passed through the NIH evaluation system.

This is the conclusion of a report that Joshua M. Nicholson, a researcher at the Department of Biological Sciences at Virginia Tech in Blacksburg, Virginia, and John P. A. Ioannidis, of the Stanford Prevention Research Center in Stanford, California, recently published in Nature. According to the two researchers, the US National Institutes of Health is the single most important funder of biomedical research in the world. Using its sophisticated peer-review evaluation system, between 2002 and 2011 the federal agency funded 460,000 research projects, spending roughly 200 billion dollars (about what Italy spent on all of its scientific research, both public and private).

With what results, in terms of quality?

The question does not allow for simple answers, because quality is not easily definable, let alone measurable. Nevertheless, the two researchers tried to give an answer, well aware of the method's limitations. They examined the 1,380 highly cited scientific articles, among the roughly 20 million published worldwide between 2001 and 2012 and indexed in the Scopus database; "highly cited" articles are those that received over 1,000 citations. It's difficult to establish whether they are the best articles, but they are certainly the most influential ones. Of the 1,380 articles, 700 were classified as biomedical and had at least one author affiliated with a U.S. research institution. Those 700 articles involved 1,172 authors who were the first, the last, or the sole signatory: far too many to analyze effectively. Nicholson and Ioannidis therefore randomly selected 158 of the 700 articles, written by a total of 262 "eligible authors", that is, authors based at American institutions who were the first, last, or only authors. Well, only 104 of them, 39.7% of the total, had their "highly cited" research funded by the NIH. The other 158 "highly cited" American authors, 60.3% of the total, carried out their highly influential research with funds that did not come from the NIH. Whatever the reason may be, the larger part of elite American biomedical research was not funded by NIH resources. Consequently, the NIH's influence on elite biomedical research must be somewhat reconsidered.
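The arithmetic behind these shares is easy to verify. Here is a minimal sketch in Python (the figures come from the article; the variable names are ours):

```python
# Sanity check of the authorship shares reported by Nicholson and Ioannidis.
# The counts come from the article; variable names are illustrative.
eligible_authors = 262   # first, last, or sole US-based authors in the 158 sampled papers
nih_funded = 104         # of these, authors whose highly cited work was NIH-funded

nih_share = nih_funded / eligible_authors
non_nih_share = (eligible_authors - nih_funded) / eligible_authors

print(f"NIH-funded:     {nih_share:.1%}")      # -> 39.7%
print(f"Not NIH-funded: {non_nih_share:.1%}")  # -> 60.3%
```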

But there's more. Of the 1,172 "highly cited" American authors in the biomedical field, only 72 (6%) sit on the study sections that award NIH funds. Those study sections, however, have 8,517 members in total. This means that only 0.8% of the researchers who award NIH funds are themselves "highly cited" and "influential". All of this, note Nicholson and Ioannidis, is somewhat at odds with the NIH's mission: to use the best researchers to fund the best projects. Neither the first nor the second of these assumptions is being respected.
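The same quick check recovers both percentages here (again, the figures are from the article and the variable names are illustrative):

```python
# Overlap between highly cited authors and NIH study-section membership.
highly_cited_authors = 1172   # US biomedical first/last/sole authors of highly cited papers
study_section_members = 8517  # total members of NIH study sections
overlap = 72                  # highly cited authors who also sit on a study section

print(f"Highly cited authors on study sections: {overlap / highly_cited_authors:.0%}")      # -> 6%
print(f"Study-section members who are highly cited: {overlap / study_section_members:.1%}")  # -> 0.8%
```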

Why? It's difficult to say, given that until now the NIH's method of awarding grants was considered among the best in the world. The truth, probably, is that every system has defects. And every system of vast proportions ends up promoting research projects that fall within what Thomas Kuhn (who, incidentally, wrote the seminal The Structure of Scientific Revolutions fifty years ago) called "normal science", and which we, more modestly, can describe as more conformist. A tendency that inevitably (?) prevails even in selecting those who select.

If Nicholson and Ioannidis are right, their analysis demonstrates, once again, that we need to reflect more deeply on quality in an era in which research is characterized by quantity (of resources, of researchers, of output).



