Opinion: Pay-to-Play Publishing



Online scientific journals are sacrificing the quality of research articles to make a buck.


The Internet has enabled the dissemination of information at lightning speed. This information revolution has created tremendous business opportunities for online publishers, but not all of them maintain proper quality-control mechanisms to ensure that only good information is being shared. Instead, many publishers aim simply to make money by whatever means possible, with no regard for the ramifications for society at large.

When greedy publishers set up shop online, the primary goal is to publish as much as possible, often at the cost of quality. In this vein, many publishers start numerous online journals focused on overlapping disciplines, simply to increase their total number of published papers, and hire young business managers who have no experience in either science or publishing. In some cases, online publishers even forgo peer review, while still presenting themselves as scientific journals: a scam designed to take advantage of scientists who simply want to share their research. In the most egregious cases, counterfeit publications use the same name as legitimate journals that are not published online (for example, Archives des Sciences and Wulfenia).

Despite increasing awareness of such “predatory” publishers, these sham journals continue to multiply. According to a list curated by librarian Jeffrey Beall of the University of Colorado Denver’s Auraria Library (see “Predatory Publishing,” The Scientist, August 2012), the number of possible predatory journals has grown from 18 in 2011 to 860 in 2015.

To make matters worse, even legitimate scientific publications are beginning to let the quality of their articles slip. BioMed Central, a publishing house for more than 275 online journals, recently alerted its editors to another problem in scientific publishing: fake reviewers. The publisher identified about 50 articles that had not been reviewed properly; some had been reviewed by their own authors, who had set up phony email accounts and suggested those aliases as reviewers when submitting the manuscript. Last March, the publisher retracted 43 of those papers. And in August, science publisher Springer retracted 64 papers after finding that the peer-review process had been manipulated. Under pressure from their employers to generate more revenue by upping the percentage of papers that are accepted for publication, journal editors may let such things slip.

As the founder and the former editor in chief of AIDS Research and Therapy, I’ve had firsthand experience with publishers structuring their business to make more revenue, often to the detriment of their products. When publishers start journals with overlapping domains, for example, they do not coordinate communication between editors of sister journals to make sure that the titles will not compete with each other for articles. In combination with the pressure to publish more studies, this could promote the publication of marginal or even questionable articles.

Moreover, publishers with multiple overlapping journals, and journals with very narrow specialties, increase the demands on the time and effort of willing reviewers. A recent Nature News blog suggested that the number of science publications doubles every nine years, and this tally was just for print publications. To date, there is no good estimate of the growth of online scientific papers. Given the limited pool of qualified experts available to review any given article, and the fact that reviewers are generally not compensated for their time and effort, journal editors are often unable to find enough reviewers to keep up with the increased publication rate.

To improve the situation and increase trust in the scientific community, the pressure to publish must be reduced. The value that both funders and tenure committees place on publication record drives scientists to publish marginal advances, which predatory publishers are all too happy to post online. Funding and promotion decisions should be based not on the number of publications, but on the quality of those publications and a researcher's long-term productivity and mentorship.

And that’s just the start. We need additional mechanisms, such as Beall’s list of predatory publishers, to alert scientists to fake journals and fake articles. Along these lines, the Directory of Open Access Journals and the Open Access Scholarly Publishers Association should be more vigilant in listing journals in their databases. In addition, perverse incentives, such as financial perks and promotions for publications, have to disappear; the price for online publication must be controlled; and a mechanism must be put in place to honor and reward hard-working reviewers. (See “Opinion: Reviewing Reviewers,” The Scientist, July 19, 2013.)

Curiously, while many articles have been written about how the situation has gone from bad to worse, hardly anyone talks about the primary influence of money in bringing this chaos to science publishing. Without appropriate measures in place to separate the business from the science, the credibility of the research community will continue to suffer.

Kailash Gupta recently retired from the National Institutes of Health as a program officer after 12 years on the job. He was also the founder and editor in chief of AIDS Research and Therapy.
