Modernizing Science Publishing

Monday, January 15, 2018

The way we communicate academic science today has changed remarkably little since the peer-reviewed publishing system was first developed over two centuries ago.

All new research must still go through a lengthy review process fraught with inefficiencies, and these established methods of publishing research are failing science in the modern era.

The UCSF library recently hosted “The Future of Scholarly Publishing,” inviting six panelists to provide their perspectives on how the academic publishing climate can be improved for the information age.

The speakers brought perspectives from across the academic publishing pipeline including UCSF researchers and policymakers, an editor of open access publications, and representatives from forward-thinking scientific foundations.

All six panelists agreed this process needs to change.

Profligate publishers

Fixing how journals operate would help tackle many of the greatest issues plaguing science today, such as the crisis of reproducibility and its snowballing cost.

Each journal charges exorbitant prices that make it impossible for individuals to personally gain access to the body of work they would need to stay up to date in their field. Researchers must wholly rely on institutions to supply this access.

At smaller institutions, especially those in developing nations, researchers often only have access to a limited number of publications and thus may not be able to keep up with their field at all.

It is not clear why these costs are necessary. A few of the largest publishers, which have come to dominate the industry, make billions of dollars, yet many of the people integral to the publication process, namely the peer reviewers, are unpaid for their work.

These cost issues could be forgivable if the industry worked to keep pace with technology and drove innovation in scientific communication. The archaic peer review system is ploddingly slow in the ultra-dynamic internet age, and with more papers being published just in the last decade than in the preceding two centuries, scientists need to be as nimble as ever to keep up.

Further, gating publications behind the judgement of established institutions risks ossifying existing dogma at the expense of novel ideas, which may be given less credence or overlooked entirely.

Equal treatment of information is one tenet of the scientific method. Another is consistent replicability.

Recently, attention has been drawn to the many high-profile studies that have failed to replicate when performed in other labs. In response, the scientific community has emphasized open access to raw data from studies to ensure consistency in future replications.

Many publishers, however, do not readily provide online access to the raw data behind their paywalls.

How have we reached this point? Usually, in a market-based system, competition would ensure that new, more robust methods emerge to phase out older models.

The academic publication industry, while remaining for-profit, has resisted this sort of evolution.

This is because journals are essential for the progression of science and researchers have few viable alternatives.

In fact, the trade is approaching an oligopoly: 80% of all published content is released by five publishing corporations.

The present culture in science similarly stands as a barrier to the advancement of alternative options. Many hiring practices are based upon the ‘impact factor’ of a researcher’s work: essentially, the esteem in which the journals one has published in are held.

Unsurprisingly, the most prestigious journals belong to established publishers. To allow more innovative options to take root, we need a drastic shift in academic culture.

How can we as individual scientists do our part?

Baby steps

There are no panaceas for these issues, but all the speakers agreed that scientists and their governing institutions need to embrace the trend towards open access journals.

These journals provide all content to anyone free of charge and include publishers such as the Public Library of Science (PLOS). Louise Page, an editor at PLOS, described her company’s pioneering efforts in open access publishing, including its use of a ‘sound science’ policy.

Instead of accepting a paper based on peer reviewers’ assessment of its potential significance, their philosophy is to accept research based solely on the reliability of the work, leaving the public to collectively decide its importance.

They hope this approach will counter the ‘impact factor’ problem that incentivizes researchers to preferentially publish in the most esteemed journals. “We need to make it easy to do the right thing,” Page explains. “Many researchers want to, but there is incredible momentum in the other direction to overcome.”

PLOS also vows to maintain a data availability requirement for its submissions, facilitating reproducibility.

On the other side of the coin, the consumers of published material need to change their approach.

Numerous facets of the scientific machine reinforce the ‘bad habits’ that have created the current situation – we all need to rethink how we view publishing. Institutions need to place less weight on the volume and impact factor of papers attached to an applicant’s resume.

“CVs should not list the journal title, as an explicit reminder that the journal the work was published in is not important; what is important is the quality of the work,” said Keith Yamamoto, Vice Chancellor for Research.

Yamamoto also believes we should do away with the ‘order of authorship’ phenomenon, which creates a divisive and contentious atmosphere.

Even discounting the cultural inefficiencies, there are still many other issues plaguing publishing today. Among those the panelists raised were philosophical concerns about how data is presented.

As the ocean of existing data inexorably swells, how can any one individual sift through the clutter and find exactly what is relevant to them? Eventually we may be inundated with so much information that distinguishing what is important becomes impossible.

The hierarchical ‘impact factor’ method currently in use addresses one facet of this issue, but it is, again, incredibly inefficient and woefully subjective.

Jeremy Freeman, of the Chan Zuckerberg Biohub, suggested the potential for artificial intelligence (AI) to serve as a more unbiased solution. AI could learn about subjects over time, annotating and categorizing them to serve researchers with the most relevant, recent, and reliable data.

None of these solutions can work in a vacuum. Only by addressing each will the scientific community achieve its desired degrees of accountability, accuracy, and efficiency.

As young scientists, it is intimidating to consider what foregoing the most renowned publications means for our work, but as students we are in a great position to promote the culture change science desperately needs. The difficult decisions today’s budding scientists make regarding the communication of their work will set the standards for how science will be done in the future.

How does one start?

According to Dr. Freeman, “just pick one thing that’s new to you and do it.” Whether it be publishing in open access journals or consciously focusing on the scientific quality of a paper itself rather than its supposed impact factor, each little step will help drive beneficial change.