I recently published an article in a scientific journal. Within days, the invitations started coming. Scientists I had never heard of wrote to tell me they had read my article, thought it was very good, and wanted me to publish in the journals for which they serve as editors. One of them, who claimed to be an “assistant editor,” stated he was “impressed deeply by the novelty, advance, and potential extensive use” of my research and was “deeply honored” to extend me an invitation to join their team as an editor or a reviewer. I did not recognize the names of any of these journals, and when I checked the links, I found they were all exclusively open access (online) journals. What had just happened to me is something that most scientists who publish nowadays face: I had been targeted by predatory journals.
Predatory journals are journals that masquerade as legitimate scientific journals but charge authors for publication without providing any editorial or publishing services. One of the cornerstones of scientific publishing is peer review. This means that articles containing research results are submitted to scientific journals, where other scientists with comparable knowledge of the field (peers) review them. These reviewers make recommendations to the authors and/or the journal’s editor regarding revision or publication of the research. A so-called “predatory journal” makes a mockery of the peer review system by indiscriminately publishing any submission it receives with minimal review.
In addition, unlike mainstream journals, predatory journals often hide their publishing fees. The editors of predatory journals target scientists by sending flattery-laden invitations to publish with them without spelling out that there will be a fee involved. The scientist goes through all the work of putting together an article and sending it for review. Once the article has been accepted, an e-mail informs the scientist that there is a publication fee, and the scientist is not allowed to withdraw the article until the fee is paid.
Predatory journals have been made possible by the advent of the internet. Most predatory journals are open access journals (online journals) that have no print version. There are legitimate open access journals, such as PLOS ONE, that have a rigorous editorial review process and editorial teams that include scientists of renown. Predatory journals, on the other hand, have editorial teams made up either of low-caliber scientists who often review articles outside their areas of expertise, of bona fide scientists the journal has duped into joining as editors, or even of scientists who have been listed as editors without their consent! Many predatory journals sport names and websites similar to those of mainstream journals, and they report fraudulent citation metrics to make the journal look good.
To illustrate the problem that predatory publishing can cause, consider the sting operation carried out by the mainstream scientific journal Science. Several members of its team put together a spoof article with glaring scientific errors that any competent reviewer would catch, and they sent it to a few hundred open access journals that had been identified as predatory. They found that 62% of the journals accepted the spoof article for publication. Of those journals that conducted any discernible review of the article (most often limited to details not involving the science), 70% accepted it. Consider not only that the acceptance rate for legitimate articles sent to bona fide scientific journals is between 20 and 30%, but also that this particular spoof article described research designed to be easily identifiable as bad science. This indicates that predatory journals are highly likely to contain shoddy science that can mislead scientists searching for clues to solve their research problems, leading to wasted time and resources.
By 2014, about 400,000 scientific articles had been published in 8,000 journals regarded by some metrics as predatory. Today the number of such journals has grown to more than 10,000. If predatory journals were readily identifiable, this would not be as much of a problem, but for the average researcher with limited time on their hands, the process of weeding the good journals from the bad can prove daunting. The scientist Jeffrey Beall compiled and maintained a public list of predatory journals for several years, but harassment from the journals’ publishers forced him to take it down.
Many people believe that scientists who publish in predatory journals are usually inexperienced young researchers who are deceived into doing so. After all, what possible value can be obtained from accumulating publications in unknown, low-quality journals? One would expect that when the researcher’s credentials are evaluated, this would count as a big negative, right? As it turns out, the problem is much worse than previously thought. I have published a post about several ways in which scientists game the system to advance their careers. Well, add publishing in predatory journals to the list! In what is turning out to be not quite predation but a twisted interdependence, many scientists from developing countries, and from institutions with few resources where the metric for academic promotion relies more on the total number of publications, are flocking to predatory journals to beef up their publication numbers.
So what is there to be done? The issues concerning predatory journals as they relate to the criteria for faculty promotion will have to be addressed at the institutional level. The practice by predatory journals of misrepresenting themselves to scientists can be addressed at the judicial level. At the individual level, however, there are several guidelines researchers can follow, both to avoid publishing in predatory journals and to avoid taking the science they contain at face value. I myself, for example, view with suspicion anything published in a journal not included in reputable bibliographic databases such as MEDLINE. And, of course, if you get an e-mail describing what a wonderful first-class researcher you are and inviting you to publish in a journal you’ve never heard of before and to join its editorial board, leave your ego aside and ignore it!
Image by Sarahmirk is used under an Attribution-Share Alike 4.0 International license.
A long time ago I had a conversation with a colleague regarding another scientist. This other scientist was a seemingly successful individual who had published more than a hundred articles in peer-reviewed journals. My colleague stated that he thought that this individual was not a real researcher because he had merely “gamed the system”.
Somewhat puzzled, I inquired why he thought this individual, whom everyone regarded as a successful scientist, was not a researcher. He responded that this individual’s research was devoid of any guiding set of questions. It was disjointed and chaotic. Many of this individual’s publications resembled a mass-production conveyor belt set up in collaboration with other labs with the aim of churning out articles that addressed low-risk questions. Additionally, my colleague argued that in many publications this individual had been included as an author solely for granting other researchers access to technology or tissue samples they would not otherwise have had. He concluded by stating that this individual had made a career by drilling where drilling is easy and mastering the art of serial scientific publishing. My colleague added again, “This person is not a real researcher, he has just figured out how to game the system.”
Now, many scientists are opinionated individuals with strong personalities, and more often than not they don’t have the nicest things to say about other scientists with whom they have butted heads. I don’t know if my colleague was right, but most scientists will tell you that they know someone who fits the unflattering description that my colleague made of this other scientist. There are indeed scientists that use several methods to game the system, and these methods range from those that don’t quite follow the “spirit” of what science should be to those which are flagrantly criminal.
Here I list some of these methods:
1) In science there is a huge pressure to publish. The axiom “publish or perish” embodies this conception of results-oriented science. The upside of this approach is that you can gauge the productivity of researchers by their number of publications. This approach allows accountability and rational planning in the allocation of resources based on performance. The downside of this approach is that it encourages researchers to think about publications rather than science. Thus there will invariably be individuals who will excel at publishing rather than at answering meaningful scientific questions. These individuals have mastered the art of breaking up scientific problems into many little parts each of which will generate sufficient data to produce at least one publication (what has been dubbed the LPU or least publishable unit), and they team up with other like-minded labs to produce a steady stream of publications where they are coauthors in each other’s articles.
2) Another metric employed to evaluate scientists is citations. The concept is very straightforward: if what you publish is of interest to other scientists, they will cite your articles in their publications. This metric allows evaluators to go beyond the mere volume of published articles. Thus scientists who publish a lot of inconsequential articles can be singled out using this metric and separated from those whose publications generate a lot of excitement within the scientific community. However, a way around this approach has been found in the form of the citation tit for tat (i.e., I will cite your articles if you cite mine). The extent to which this practice occurs is difficult to gauge, but it ranges from something that may happen among a few labs in an uncoordinated way to full-fledged “citation cartels” whose members blatantly boost each other’s citations.
3) In each field of science there are a number of top journals in which all scientists in the field wish to publish. Publication in a top journal means more exposure, more prestige, and more citations. In fact, the quality of the journals in which scientists publish is itself a metric taken into account when evaluating them. But what is a scientist to do if the work they are doing is not good enough to be published in a top journal? As it turns out, you can buy authorship! In certain areas of the world there are black markets where scientists can purchase a coauthor slot on a publication for a certain amount of money.
4) In today’s fragmented scientific landscape, where practically a lifetime of study and research is required to become an expert even in relatively small scientific fields, it is virtually impossible for a journal to have enough reviewers to cover the full breadth of topics represented by the articles submitted for publication. To remedy this, many journals allow authors to recommend reviewers for their articles. This practice has led to abuses ranging from authors recommending their friends to review their papers, to outright fake positive reviews written under fabricated reviewer identities.
And last, but definitely not least, we come to the most infamous practice of them all to game the system.
5) In the idealized notion of science, scientists formulate hypotheses, perform experiments, and learn from the outcomes of those experiments whether their hypotheses are supported. However, the cold hard truth is that if none of your hypotheses prove to be true and this goes on for too long, your career may be in serious trouble. Look at it from the point of view of the agencies that fund scientific research: why support someone who keeps barking up the wrong tree? It is then that some individuals in this bind are tempted to engage in fraud by faking their results. This fakery can range from selective publishing, where positive data are reported and negative data are ignored, to the massive and systematic forging of data across dozens of publications.
To be fair, the use of some of the above practices by scientists (with the exception of the most extreme forms of gaming the system) is not necessarily negative. Competent scientists may wish to tackle worthy scientific questions that may take years to solve with potentially little to show for it during the process. However, these individuals realize that if they try to answer these questions head-on they will not be favored by the current evaluation system. Thus they divide their research into low risk “bread and butter” projects designed to meet pesky publication requirements, and those projects where they address the meaningful but risky questions they really want to tackle. These scientists may also figure out that if they collaborate with and cite the right people or recommend friendly reviewers, this will provide them with the stability they need to devote themselves to the important issues.
Many scientists have been known to engage in the more benign forms of gaming the system, but whereas most use these procedures to fulfill evaluation requirements while they address important scientific questions, some use these practices merely to survive and further their careers. Of course, the ultimate evaluation of a researcher’s achievements will come not from citation metrics or number of publications, but rather from the actual real-world impact of their research.
This is a metric that can’t be gamed.
Figure by Selena N. B. H. used here under an Attribution 2.0 Generic (CC BY 2.0) license.