
Research at multinational facilities such as CERN, Europe’s particle-physics laboratory, can lead to studies with huge author lists. Credit: Adam Hart-Davis/SPL

The existence of the Higgs boson was first posited in a trio of papers in 1964. Two of those1,2 were authored solely by UK theoretical physicist Peter Higgs and the other3 was co-authored by his US and Belgian counterparts Robert Brout and François Englert.

Nearly half a century later, the experimental confirmation that the Higgs field existed was published in a paper4 with 2,932 authors. Three years after that, a paper5 detailing a more accurate measurement of the mass of the Higgs boson set a new record for the highest number of authors on a single paper: 5,154.

Then the COVID-19 pandemic broke that record, with 15,025 co-authors on a research paper6 examining the effect of SARS-CoV-2 vaccination on post-surgical COVID-19 infections and mortality.

The term ‘hyperauthorship’ is credited to information scientist Blaise Cronin7 at Indiana University in Bloomington, who used it in a 2001 publication to describe papers with 100 or more authors. But with the rise of large international and multi-institutional scientific collaborations — such as the ATLAS consortium behind the discovery of the Higgs boson — papers with hundreds, even thousands, of authors are becoming more common. There are many legitimate reasons for this shift, but it is raising questions — and concerns — about the nature of authorship and the impact that hyperauthorship has on the metrics of scientific achievement.

A 2019 report from the Institute for Scientific Information (ISI), part of citation-analytics company Clarivate, noted that the number of papers with more than 100 authors doubled between 1998 and 2011, from around 300 to around 600 (see go.nature.com/3I5j8zt). When the authors of the report looked at how the rates of different levels of hyperauthorship had changed between two five-year time periods — 2009 to 2013 and 2014 to 2018 — they saw that the greatest increase was in articles with more than 1,000 authors.

It’s a big change from the 1980s and is driven largely by an increase in international collaboration in science, says Jonathan Adams, chief scientist at the ISI and co-author of the report. In the 1980s, most scientific papers — more than 90% — listed authors from a single country. That changed around the mid-2000s, with a notable shift to bilateral collaborations between researchers from two nations, Adams says, and “as soon as you get bilateralism, of course the authorship count inevitably goes up”. Then came a rise in collaborations involving multiple nations, with a particular increase in papers involving up to 30 countries. The expansion of multi-authorship and hyperauthorship isn’t limited to particular countries or driven by national policies. “It’s simply the way in which science and research generally has changed,” he says.

An analysis of data in the Nature Index, which tracks research publications in 82 high-quality natural-science journals, also reveals a substantial increase in the number of papers with 50 or more authors in certain fields. For example, in medical and health sciences, the number of such papers rose from 58 in 2015 to 203 in 2021, an increase of 250%; this included a 90% increase from 2019 to 2020, the first year of the COVID-19 pandemic. Physical sciences, meanwhile, was the most common field of research for these multi-authored papers, with 335 articles in 2021, an 18% increase on 2015.

These changes reflect the growing need for large research groups, spread across different types of institution and geographies, to answer complex questions. They also reflect a desire for more-inclusive authorship that recognizes researchers from backgrounds that might have been overlooked in the past. The changes might also be the result of funding from bodies such as the European Research Council, which aids and encourages multinational collaborations.

But hyperauthorship creates challenges for researchers and for the journals that publish their work. Coordinating so many individual contributions across a multitude of institutions and nations is an enormous logistical feat. And hyperauthorship is raising philosophical questions about what it means to be an author of a research paper, and who has the right to — and need for — acknowledgement.

The need for statistical power

A key problem in studies that explore the role of genetics in mental-health disorders is statistical power, says Sarah Medland, a psychiatric geneticist at the QIMR Berghofer Medical Research Institute in Brisbane, Australia, and that’s one reason that her field lends itself to hyperauthorship. To examine the effects of genetic variants on the structure and function of the brain, she and her colleagues often use genome-wide association studies — in which large numbers of genomes are scanned for genes that correlate with particular diseases or traits — as well as magnetic resonance imaging (MRI) and psychiatric assessments. But the neurological effect of a single genetic variant can be extremely small, so studies need a lot of participants to determine whether effects are statistically significant and replicable. The bigger the studies, the more expensive they are, especially when MRI is involved.
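To get a feel for why such tiny effects force studies to be so large, consider a rough power calculation. The sketch below is illustrative only: the assumption that a variant explains 0.1% of variance in a brain measure is an invented example rather than a figure from ENIGMA's work, and the threshold is simply the conventional genome-wide significance level (5e-8).

```python
from statistics import NormalDist

def required_sample_size(variance_explained, alpha=5e-8, power=0.8):
    """Rough N needed to detect a variant explaining a given fraction of
    trait variance at genome-wide significance (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for the test
    z_beta = NormalDist().inv_cdf(power)           # quantile for the desired power
    return (z_alpha + z_beta) ** 2 * (1 - variance_explained) / variance_explained

# Illustrative value: a variant explaining 0.1% of variance in a brain measure
print(round(required_sample_size(0.001)))  # roughly 40,000 participants
```

Because the required sample grows roughly in proportion to one over the variance explained, halving the effect size approximately doubles the number of participants needed, which is why pooled consortia outstrip any single imaging centre.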

Frustrated by this lack of statistical power, Medland and her colleagues formed ENIGMA — Enhancing Neuro Imaging Genetics through Meta Analysis — a consortium that brings together researchers in imaging genomics, neurology and psychiatry from 50 working groups around the world.

ENIGMA generates papers that are meta-analyses of unpublished data from individual research teams within those working groups. “All of those groups could write their own paper, but if they did that they would be polluting the literature with all of these underpowered studies,” Medland says. Instead, they contribute those unpublished data, or a summary of them, to the meta-analyses. The end result is studies that have hundreds of contributors but are large enough to generate statistically significant and meaningful findings.

Hyperauthorship such as this comes about through good communication, both online and in person, says Adams. It often starts at international conferences, where researchers realize that their closest potential collaborators “are certainly not people down the corridor” of their institution, and “probably not people in the same country”, but instead are scientists in their field working abroad. “At the same time, because we’ve got all of these new approaches to communication, we have the ability to bring together very large data sets.”

A desire for more inclusivity in science is also contributing to longer author lists, Medland suggests. “What I think happens if you say you can only have four people” listed on a paper, she says, is that the people who remain as authors tend to be the most senior. “I would suspect that our author lists are younger, more female and more diverse,” now that more authors can be included, she adds.

Hyperauthorship is also the result of scientists in some fields seeking answers that require not just large-scale collaboration, but also huge resources and equipment. This is particularly evident in high-energy physics, where the cost of equipment such as particle accelerators can run into billions of dollars.

No single nation can afford that price tag, says Michael Thoennessen, a nuclear physicist and editor-in-chief at the American Physical Society in College Park, Maryland. “If you want to stay at the forefront and do the most exciting physics, and do things that nobody’s ever done before, it gets more and more expensive,” he says. That means multinational collaborations, and potentially hundreds or thousands of authors.


Heidi Baumgartner says quantifying authorship is the philosophical elephant in the room. Credit: Jeanina Casusi

The increasing length of author lists, however, does lead to the question of what level of contribution entitles researchers to be included. Heidi Baumgartner, executive director of the ManyBabies consortium — an international, multilaboratory collaboration in developmental psychology research — says that the project uses what’s known as CRediT, or the Contributor Roles Taxonomy, to determine what qualifies someone for authorship. The taxonomy, now published as a standard by the US-based National Information Standards Organization, describes 14 roles that are commonly found in a research context, ranging from conceptualization and funding acquisition to supervision and writing.

“We say you have to make a contribution to at least one of those categories, plus reviewing the manuscript, and that is what merits authorship,” says Baumgartner, who is also a social-science research scholar at Stanford University, California. This expectation is clearly laid out at the start of any project in a collaboration agreement, and it ensures that everyone who gives their time and effort is credited equally.
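As an illustration of how a collaboration agreement might encode that rule, the sketch below lists the 14 CRediT roles and checks the "at least one role plus manuscript review" criterion described above. The helper function and example records are hypothetical; they are not ManyBabies' actual tooling.

```python
# The 14 roles of the Contributor Roles Taxonomy (CRediT).
CREDIT_ROLES = {
    "Conceptualization", "Data curation", "Formal analysis", "Funding acquisition",
    "Investigation", "Methodology", "Project administration", "Resources",
    "Software", "Supervision", "Validation", "Visualization",
    "Writing - original draft", "Writing - review & editing",
}

REVIEW = "Writing - review & editing"

def qualifies_for_authorship(roles: set[str]) -> bool:
    """Illustrative rule: at least one CRediT contribution plus review of the manuscript."""
    contributed = bool(roles & (CREDIT_ROLES - {REVIEW}))
    return contributed and REVIEW in roles

print(qualifies_for_authorship({"Investigation", REVIEW}))  # True
print(qualifies_for_authorship({REVIEW}))                   # False: review alone is not enough
```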

It’s not always so straightforward, however. Nicholas Coles, an experimental psychologist at Stanford University and director of the Psychological Science Accelerator, a global network of psychological-science labs, says that although scientific contributions have been under-recognized in authorship, there is also a well-documented history of abuses of authorship practices that result in over-recognition. He refers to “helicopter authorship”, where researchers “recruit large teams to collect data, and then never once acknowledge their contribution”.

This cuts to a bigger question of what sorts of contribution to scientific research warrant authorship — which Coles calls the “coin” of the scientific realm — rather than just a mention in the acknowledgements section at the end of the paper.

Baumgartner says this is the “philosophical elephant in the room of academia”: “How do you actually quantify what authorship means in terms of what contribution someone made, and then how that is used as a currency in terms of what it means for hiring or promotions?” Should the technician who keeps the lab equipment and computers running, or the nurse who collects blood samples, be listed as an author?

Thoennessen says that, in his experience, authorship is limited to scientific staff, who are the ones who are evaluated and rewarded on the basis of their scientific output.

Complex coordination

Getting a research paper published typically requires a large amount of work, even for papers with just a few authors, let alone several thousand. To reduce the risk of such a large ship foundering, ManyBabies uses a form of early peer review called Registered Reports to evaluate the viability of a research project before the data has even been collected. The initiative is being promoted by the Center for Open Science, a non-profit technology organization based in Charlottesville, Virginia, to encourage best practice in scientific methods. Journals, such as Nature, that have adopted the initiative commit to publishing a paper if the question and methodology meet the required standard (see Nature 614, 594; 2023).

Baumgartner says that using Registered Reports “gives our contributors, who are all over the world, a little assurance that sticking with this project is going to result in something, as opposed to the typical process where you run a study and you don’t really know if it’s ever going to find a home somewhere”.

Further challenges in coordinating large authorship groups arise once a study’s results are collated and analysed, and the process of writing and editing the manuscript begins. “What happens is the core team that’s worked on the meta-analysis will usually prepare the draft and that usually involves both the senior and the first and the last authors,” Medland says. “Once there’s a draft they’re happy with, it gets opened for comments.”

Both ENIGMA and ManyBabies use Google Docs to share the draft paper with all co-authors and collate their comments and edits. “Google Docs is both good and bad in that way”, says Medland, because you can end up with multiple, lengthy comment threads. Baumgartner says that the Google Docs file can get a little chaotic, “but it’s kind of fun to be in a Google Doc where a bunch of other people are also editing it and you can see where someone’s working and follow their thought process and then jump in after somebody else”.

The next challenge is ensuring that all the authors’ affiliations and conflicts of interest are correct. Coles says that one useful tool in this respect is a web app called tenzing, developed by an international team of researchers, which essentially takes a spreadsheet with author names, affiliations and contributions and “writes your authorship page”.
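tenzing itself is a purpose-built web app, so the snippet below is not its interface. It is only a minimal sketch of the general idea, under assumed column names ('name', 'affiliation', 'roles') and an assumed input file: read a contributor spreadsheet, number the affiliations, and emit the byline and contribution statements.

```python
import csv

def authorship_page(csv_path: str) -> str:
    """Build a byline, numbered affiliations and contribution statements from
    a CSV with 'name', 'affiliation' and 'roles' columns (assumed format)."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))

    # Assign each distinct affiliation a number, in order of first appearance.
    affiliations: dict[str, int] = {}
    for row in rows:
        affiliations.setdefault(row["affiliation"], len(affiliations) + 1)

    byline = ", ".join(f'{r["name"]} ({affiliations[r["affiliation"]]})' for r in rows)
    affil_list = "\n".join(f"{n}. {a}" for a, n in affiliations.items())
    contributions = "\n".join(f'{r["name"]}: {r["roles"]}' for r in rows)
    return f"{byline}\n\nAffiliations:\n{affil_list}\n\nContributions:\n{contributions}"

# print(authorship_page("contributors.csv"))  # hypothetical input file
```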

Dealing with so many authors is also a headache for publishers. The American Physical Society’s journals put the onus on authors to verify that they are contributors to a paper. The society’s system “automatically sends an e-mail to those authors saying, ‘the paper has been submitted with your name on it — unless we hear from you, we take this as an agreement that yes, you contributed and you are co-author,’” Thoennessen says.

How authorship is eventually structured in the journal varies enormously between fields and even subfields. “We actually have what’s called a first authorship group and then we have an analyst group, and so on. And within the groups, usually it’s alphabetical order by surname,” Medland says. A multi-authorship paper covering a genome-wide association study might therefore have several lists of authors, each ordered alphabetically. By contrast, the hyperauthored Higgs boson paper has a single alphabetically ordered list.

Differences in authorship conventions can create problems when it comes to researchers getting appropriate recognition for their contributions over their career. “As an early-career researcher working in this type of field, where you have large authorship, you often run into statements like, ‘You didn’t do anything, because you’re a middle author,’” Medland says. “Authorship conventions in general are really misunderstood.” She contrasts medical research, in which the last author is considered the most senior, with psychology, where the person in that position has made the smallest contribution.

Thoennessen would like to see standards for authorship that apply across all fields, to ensure that individual researchers aren’t disadvantaged by their discipline’s conventions.

Overweight impact

When 15,000 or so researchers have contributed to a paper, it distorts the metrics commonly used to evaluate the impact and importance of a particular research project.

These papers often do represent “something that’s closer to the cutting edge of science than the single-author paper”, says Adams. But their “outrageous” citation count is also the result of the ‘two home crowds’ effect, whereby all the nations represented in one of these hyperauthored papers — and all the authors — are going to trumpet their achievements and thereby drive a higher rate of citations. This can also have the effect of distorting the metrics of scientific achievement for the nations and institutions involved, particularly if they are smaller fish in the global scientific pond.

Adams suggests that the number of authors on publications should be considered when looking at citation counts for institutions or nations, just as citation counts are normalized for year of publication and field. “We should be normalizing for the authorship, so you get a more representative citation indicator.” He says there’s even a case for leaving the largest of hyperauthored papers out of the citation process entirely. “If it’s CERN or one of the big telescopes, it’s like you’re either in the club or you’re not,” Adams says. “And then, what are we comparing you with?”
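Adams does not spell out a formula, but the simplest version of such a normalization is fractional counting, in which each paper's citations are divided by its number of authors before being credited to a person, institution or country. A minimal sketch with invented numbers:

```python
# Fractional (author-normalized) citation counting: a hyperauthored paper no
# longer contributes its full citation count to every participating group.
papers = [   # (citations, number_of_authors), invented example values
    (12, 3),
    (900, 2932),  # a hyperauthored collaboration paper
    (45, 8),
]

whole_count = sum(c for c, _ in papers)
fractional_count = sum(c / n for c, n in papers)

print(f"whole counting:      {whole_count}")           # 957
print(f"fractional counting: {fractional_count:.2f}")  # about 9.93
```

In this invented example, the 2,932-author paper dominates the whole-counted total but adds less than one citation to each author's fractional tally.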

Despite the increase in the number and profile of hyperauthored papers, there’s still a lot of uncertainty about how to deal with them. Coles says that some journals are cautious about hyperauthored papers, whereas others welcome them. But as ‘big team’ science becomes more common and more respected, it might force a reckoning with what these papers mean for the scientific enterprise.

“The more authors that you’re working with, the more complicated things get,” Coles says. “That requires some pretty new thinking, from both researchers and journals, and the people who evaluate science.”


