A long time ago I had a conversation with a colleague regarding another scientist. This other scientist was a seemingly successful individual who had published more than a hundred articles in peer-reviewed journals. My colleague said he thought this individual was not a real researcher because he had merely “gamed the system”.
Somewhat puzzled, I inquired as to why he thought this individual, whom everyone regarded as a successful scientist, was not a researcher. He responded that this individual’s research was devoid of any guiding set of questions. It was disjointed and chaotic. Many of this individual’s publications resembled the output of a mass-production conveyor belt, set up in collaboration with other labs with the aim of churning out articles that addressed low-risk questions. Additionally, my colleague argued that in many publications this individual had been included as an author solely because he had granted other researchers access to technology or tissue samples that they would not otherwise have had. He concluded by stating that this individual had made a career out of drilling where the drilling is easy and mastering the art of serial scientific publishing. My colleague added again, “This person is not a real researcher, he has just figured out how to game the system.”
Now, many scientists are opinionated individuals with strong personalities, and more often than not they don’t have the nicest things to say about other scientists with whom they have butted heads. I don’t know if my colleague was right, but most scientists will tell you that they know someone who fits the unflattering description my colleague gave of this other scientist. There are indeed scientists who use various methods to game the system, and these methods range from those that don’t quite follow the “spirit” of what science should be to those that are flagrantly criminal.
Here I list some of these methods:
1) In science there is huge pressure to publish. The axiom “publish or perish” embodies this results-oriented conception of science. The upside of this approach is that you can gauge the productivity of researchers by their number of publications, which allows accountability and rational planning in the allocation of resources based on performance. The downside is that it encourages researchers to think about publications rather than science. Thus there will invariably be individuals who excel at publishing rather than at answering meaningful scientific questions. These individuals have mastered the art of breaking up scientific problems into many little parts, each of which will generate just enough data to produce at least one publication (what has been dubbed the LPU, or least publishable unit), and they team up with other like-minded labs to produce a steady stream of publications on which they appear as coauthors of each other’s articles.
2) Another metric employed to evaluate scientists is citations. The concept is very straightforward. If what you publish is of interest to other scientists, they will cite your published articles in their own publications. This metric allows evaluators to go beyond the mere volume of published articles. Thus scientists who publish a lot of inconsequential articles can be singled out using this metric and separated from those whose publications generate a lot of excitement within the scientific community. However, a way around this approach has been found in the form of the citation tit-for-tat (i.e. I will cite your articles if you cite mine). The extent to which this practice occurs is difficult to gauge, but it ranges from something that may happen among a few labs in an uncoordinated way to full-fledged “citation cartels” whose members blatantly boost each other’s citations.
3) In each field of science there is a set of top journals in which all scientists in the field wish to publish. Publication in a top journal means more exposure, more prestige, and more citations. In fact, the quality of the journals in which scientists publish is itself a metric that is taken into account when evaluating them. But what is a scientist to do if the work they are doing is not good enough to be published in a top journal? As it turns out, you can buy authorship! In certain areas of the world there are black markets where scientists can purchase a coauthor slot on a publication for a certain amount of money.
4) In today’s fragmented scientific landscape, where practically a lifetime of study and research is required to become an expert even in relatively small scientific fields, it is virtually impossible for a journal to have enough reviewers to cover the full breadth of topics represented by the articles submitted for publication. To remedy this, many journals allow authors to recommend reviewers for their articles. This practice has led to abuses ranging from authors recommending their friends to review their papers, to outright fake peer review, in which authors supply reviewer contact details they themselves control and write glowing reviews of their own work.
And last, but definitely not least, we come to the most infamous of all the practices used to game the system.
5) In the idealized notion of science, scientists formulate hypotheses, perform experiments, and learn from the outcomes of those experiments whether their hypotheses are supported or not. However, the cold hard truth is that if none of your hypotheses prove correct and this goes on for too long, your career may be in serious trouble. Look at it from the point of view of the agencies that fund scientific research: why support someone who keeps barking up the wrong tree? It is then that some individuals in this bind are tempted to engage in fraud by faking their results. This fakery can range from selective publishing, where positive results are reported and negative results are ignored, to the massive and systematic forging of data across dozens of publications.
To be fair, the use of some of the above practices by scientists (with the exception of the most extreme forms of gaming the system) is not necessarily negative. Competent scientists may wish to tackle worthy scientific questions that can take years to solve, with potentially little to show for it along the way. However, these individuals realize that if they try to answer such questions head-on they will not be favored by the current evaluation system. Thus they divide their research into low-risk “bread and butter” projects designed to meet pesky publication requirements, and projects where they address the meaningful but risky questions they really want to tackle. These scientists may also figure out that collaborating with and citing the right people, or recommending friendly reviewers, will provide them with the stability they need to devote themselves to the important issues.
Many scientists have been known to engage in the more benign forms of gaming the system, but whereas most use these procedures to fulfil evaluation requirements while they address important scientific questions, some use them merely to survive and further their careers. Of course, the ultimate evaluation of a researcher’s achievements will come not from citation metrics or the number of publications, but rather from the actual real-world impact of their research.
This is a metric that can’t be gamed.
Figure by Selena N. B. H. used here under an Attribution 2.0 Generic (CC BY 2.0) license.