
Open access to particle physics


Beginning in January 2014, 90% of scientific publications in the field of high energy physics will be freely available to anyone with access to the Internet. After six years of negotiations with 12 scientific publishers and more than 200 partners worldwide, the SCOAP3 Consortium (Sponsoring Consortium for Open Access Publishing in Particle Physics) is ready to launch its open access project, which covers the approximately 7,000 scientific articles published in particle physics every year.

The method is both simple and innovative. At present, specialist journals earn their income from the sale of costly subscriptions, which allow people to read the articles; generally speaking, only libraries and institutions can afford to purchase them. To ensure free access, subscriptions must go, but journals still need a source of income to pay for the work they do. The SCOAP3 Consortium therefore proposed pooling the funds that institutions currently spend on subscriptions into a single budget and using it to call the publishing houses to a "tender": the publishers that guarantee permanent open access and offer the best quality/price ratio per published article are selected for financing.

Specialist journals carry out a crucial task on behalf of the scientific community: assessing and filtering research quality through peer review. When a piece of work on a specific topic is proposed for publication, the journal sends it to other researchers who are experts on the same subject for checking and assessment, and publishes the article only after their approval. This is not a matter confined to the scientific community: the number of peer-reviewed publications also underpins the allocation of public money - that is, citizens' money - to science.
The SCOAP3 agreement was feasible in the high energy physics sector thanks to the presence of CERN, which plays the role of mediator between the journals and their purchasers.

Salvatore Mele, from Naples, Head of Open Access at CERN, has been overseeing the negotiations from the very start. Trained as a physicist before moving into information science, he worked from the nineties onwards for INFN and CERN, on the L3 experiment at LEP and briefly on CMS at the LHC.

We asked him to discuss the main implications of this agreement.

Doctor Mele, what is new about the agreement reached?

There are two specifically innovative factors: the first concerns information science; the second, the structure of scientific publishing, which is currently an industry based on the sale of content.

Let me explain more clearly. As regards the first factor, there is no doubt that making research results publicly accessible provides social value: no politician can deny this, especially in a socio-political context in which people inform themselves on Wikipedia and "Pirate Parties" win seats in the European Parliament and in various European democracies. Scientific publishing contains an evident paradox: public money is spent on research, paying the salaries of public employees who, with further public funds, generate information. This information is then passed on to publishing houses which, after a selection process, make their profit by re-selling it. It may not be about profit as such, but a publishing house needs money to function: even if the company has no shareholders, it has to guarantee itself a minimum margin each year (figures of approximately 6% are not unheard of).

Watching the process from a distance, however, it is evident that funds destined for the public good are, at the end of the cycle, taken away from it. This is not just a problem for the general public, who may or may not be interested in high energy physics: even "educated" sectors of the public are shut out by the continually rising cost of subscriptions - rising to guarantee the 6% mentioned earlier, or simply because the volume of articles grows and the cost grows with it - so that more and more institutions and libraries can no longer afford them. A vicious circle is created: those who keep subscribing have to pay more to compensate for those who cancel. In some sectors, this loss of access to scientific information is particularly serious: epidemiology, or hospital best-practice journals, which are read precisely by those working in such areas, come to mind.

It is already possible to read scientific articles in high energy physics on arXiv.

There are sectors, like ours, where information is in some way accessible to everyone interested. arXiv publishes preprints - articles that have not yet undergone peer review - up to a year before the official publication is released. Similarly, in other fields there are scientific publications that can be considered open access because sponsors pay the publication costs, or because the authors pay them themselves. Universities, too, keep the articles they produce openly available in order to demonstrate their research output. None of this is systematic, however: it is a matter of knowing who publishes openly and patiently finding your way through the landscape of scientific journals. SCOAP3 takes a new angle, ensuring that all the final, peer-reviewed articles of an entire field will be made public. This is the first time that this has been achieved in a systematic fashion.
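As an illustration of how open this preprint layer already is, here is a minimal sketch of retrieving recent preprints through arXiv's public query API; the category and result count chosen are just examples:

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

# Query arXiv's public API for the five most recent experimental
# high energy physics preprints; the response is an Atom XML feed.
params = urllib.parse.urlencode({
    "search_query": "cat:hep-ex",   # experimental HEP category
    "sortBy": "submittedDate",
    "sortOrder": "descending",
    "max_results": "5",
})
url = "http://export.arxiv.org/api/query?" + params

with urllib.request.urlopen(url) as response:
    feed = ET.parse(response).getroot()

ATOM = "{http://www.w3.org/2005/Atom}"
for entry in feed.iter(ATOM + "entry"):
    print(entry.find(ATOM + "title").text.strip())
```

No subscription, login, or fee is involved: anyone with an Internet connection can run this.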

How is copyright protected under this arrangement?

The copyright remains with the authors. At present, scientific publishing is based on the transfer of copyright to the publisher, who can then earn money from the content sold. Under our agreement, the final article is published under a Creative Commons Attribution (CC BY) license, with attribution to the author. And we should not underestimate the fact that the majority of scientists and researchers are, in essence, in illegal territory…

Really? Scientists are in illegal territory? 

Yes, because sooner or later researchers, in good faith, will take a figure or a table from a particular publication and use it in a public presentation or in another article or book. Legally speaking, it no longer belongs to them, so they must request authorization from the publisher to whom the rights were granted (and perhaps pay) in order to re-use the material... in actual fact, very few do this!

What other novelties do these licenses provide?

If over a million and a half scientific articles (in English alone) are produced every year in the fields of science, technology and medicine, it is clear that no human being could read them all, whereas a machine could accomplish this easily. Machines can be designed that read articles and extract relationships between cause and effect - such a protein is related to such an illness, such a drug is effective or ineffective. These text mining machines already exist in experimental form, but they cannot always be used! At present, generally speaking, articles cannot be copied onto your own computer, which means they can only be read online: once the subscription expires, they are no longer in your possession, quite unlike hard copy journals. Text mining is not permitted without authorization. And even if it were carried out, it would be of limited use, because to be comprehensive it must cover all the articles in a field, including those belonging to different publishing houses. Just for your information, there are rumors that pharmaceutical publishers offer special subscriptions for text mining.
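As a toy illustration of the machine reading Mele describes, here is a minimal relation-extraction sketch; the sentences, entities and the single pattern it matches are entirely invented, and real text mining systems rely on full NLP pipelines and curated ontologies rather than a regex:

```python
import re

# Toy corpus: two invented sentences standing in for article text.
ARTICLES = [
    "We find that protein BRCA1 is associated with breast cancer.",
    "Our trial suggests that aspirin is effective against headache.",
]

# One naive pattern for cause-effect statements.
PATTERN = re.compile(r"(\w+) is (associated with|effective against) ([\w ]+?)[.,]")

def extract_relations(text):
    """Return (subject, relation, object) triples found in the text."""
    return [match.groups() for match in PATTERN.finditer(text)]

for article in ARTICLES:
    for subject, relation, obj in extract_relations(article):
        print(f"{subject} --[{relation}]--> {obj}")
```

Run over a whole field's literature, even a crude extractor like this hints at why machine-readable, copy-permitted access matters: the value comes from coverage across all publishers, which subscription walls prevent.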

You mentioned a second novelty…

Yes: there is a much more subtle novelty, one which has attracted a great deal of attention, for example from financial analysts. At present, scientific publishers apply a filter, by means of peer review, to the majority of publications produced each year. We can argue over whether this is a good filter or not, but at the moment it is the best we have: the entire system of academic and financial allocation is built on this filter, and it carries tremendous weight, so for now it cannot be eliminated. In a field like ours, where 93% of scientific writing has been on arXiv for years, the role of this filter is even clearer. The publishing industry is currently based on the production or purchase of content: the Bloomsbury publishing house buys the rights to Harry Potter from J.K. Rowling, then sells the content to recover its costs and make a profit. Publishing has worked this way for four hundred years. In our sector, we live with the paradox that, although the articles are freely available for up to a year before publication, in order to obtain the filtering service we hand them over to the publishing house, which makes a profit by re-selling content already freely available elsewhere.
The beauty and novelty of SCOAP3 lies in making explicit that you are not paying for the content but for the filtering service, overturning the idea that the publishing house is a content industry and finally bringing scientific publishing into the 21st century, where the industries of the knowledge economy are those providing extremely high added value. This is the most mind-boggling concept.
We are the first to be able to achieve this, since our content is already public: it is extremely clear that the only thing standing between the content and the public is the peer review filter, which represents the added value of publishing, and it is this that we pay for.

It is about accepting the paradigm shift.

How does the SCOAP3 method work exactly?

After securing the financing commitments of institutions and libraries worldwide, we set up a tender process in which each journal communicated its cost of publishing an article. We assessed the bids, combining the price with other factors: the standing of the journal - the so-called impact factor - the amount of copyright remaining with the author, and the accessibility of the information (a PDF is much harder for a machine to read than a properly formatted XML file). The higher the quality and the lower the price, the stronger the incentive to finance the journal. The Consortium decided to allocate 10 million euros in total: once all the offers had been ordered by decreasing quality and increasing price, we allocated this fund. Since it is a fixed, finite amount of money, not everyone could be selected; the surprising thing is that almost all the publishers in our field came on board: the advantages of competition!
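A minimal sketch of this kind of allocation follows; all journal names, prices, quality scores and volumes are invented, and the interview does not disclose the Consortium's actual scoring formula, so the quality-over-price ranking here is only an assumption:

```python
# Fixed fund mentioned in the interview: 10 million euros.
BUDGET = 10_000_000

# Hypothetical bids: (journal, price per article in EUR,
# quality score, articles published per year).
bids = [
    ("Journal A", 1_100, 0.9, 2_000),
    ("Journal B", 1_900, 0.8, 1_500),
    ("Journal C", 2_400, 0.5, 1_000),
    ("Journal D", 3_000, 0.4, 2_000),
]

# Rank offers: higher quality and lower price both improve the score.
ranked = sorted(bids, key=lambda b: b[2] / b[1], reverse=True)

remaining = BUDGET
for journal, price, quality, volume in ranked:
    cost = price * volume
    if cost <= remaining:   # fund the journal's annual output
        remaining -= cost
        print(f"fund {journal}: {cost:,} EUR")
    else:                   # fixed fund: not everyone gets in
        print(f"skip {journal}: would exceed remaining budget")
print(f"unallocated: {remaining:,} EUR")
```

The design point is the fixed budget: because the fund cannot grow, journals compete on quality and price rather than passing rising costs on to subscribers.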

There are three novelties: free and universal access (from copyright through to re-use rights); the transformation from a content market to a services market - this is the last industry to make this jump, as all the others have already done so; and proof that a competitive process setting average article prices is possible. Who would have thought such a model was possible? We were able to bring it to life in collaboration with the publishing houses!

What was the greatest challenge as part of this process? Convincing the publishing houses?

There were 12 publishing houses and more than 40 countries involved.

Since the project did not involve generating new money but redirecting money already being spent, we had to identify where the money used to pay for subscriptions came from. In some countries two or three institutions were involved; in others, the USA for example, subscriptions had been purchased by thousands upon thousands of libraries and universities. We had to speak to all of them and explain that the way things had been done in scientific publishing for the last 350 years was wrong. On the one hand, we had to convince the 12 publishing houses to change their business model for this particular field; on the other, we had to speak to representatives of institutions and libraries in all 29 countries currently subscribed to SCOAP3, not forgetting those where discussions are still underway. At a certain point, all the elements had to come together: what CERN is and what it represents, the economic element (a new model) and the strictly financial aspect (there is money tied up which must be freed so it can be used). We managed to reach agreement in the 29 countries so that we could propose the project to the publishing houses.

How much time was spent working on the project?

We developed the model in 2007 and received approximately 40% of the financial commitments within Europe - the first three countries to commit were Italy, Germany and France, all within the space of a week - after which, from 2008 to 2010, the number of countries increased. In the United States there is no single body that can make these financial commitments, so discussions were conducted with 180 different universities. This patient work of securing agreement, and the coordination between the three parties - institutions, CERN and publishing houses - was the most difficult element.

The signing of an agreement means that you have been successful...what is left to be done?

We managed to secure the commitment of 200 partners in 29 countries, enabling us to run the tender among the publishing houses. Now we must return to the countries that have not yet subscribed - Brazil, Poland, Russia, Mexico, Chile, Argentina, India and Taiwan - and finally convert all these commitments into real money, in order to sign the agreements and genuinely begin publishing.

The complexity of the entire process is similar to what it takes to build one of CERN's detectors: one group manufactures the muon chambers, somebody deals with the electronics, somebody else with the calorimeter, somebody else pays the gas bill... in the end, they all come together with their respective contributions and the machine is built. The strength lies in the collective!


