Do’s and Don’ts Regarding How To Assess Scientific Studies in the Age of the Internet and Social Media
A positive aspect of the advent of the internet is that scientific studies can be made public as soon as they are ready to be published. However, these studies are highly technical publications intended for scientists to study and analyze. Thus, one negative effect of greater accessibility to the scientific literature is that individuals without the education and technical knowledge necessary to evaluate the studies can now gain access to them. As a result, these individuals may spread erroneous claims about the studies in their blogs, podcasts, social media, and other outlets, either because they misunderstood the studies or because they have an agenda favoring certain interpretations, even when those interpretations are not supported by the data.
I have lost track of the number of times I have seen someone on Twitter making claims about some issue by citing the latest published scientific article. Invariably the purpose of the individuals making these claims is not to discover or debate the truth, but rather to support their political or social agendas. I have tried to explain that truth in science is not established by one or even a few studies, even if they are published in peer-reviewed journals. Scientists have to debate the merits and flaws of each other’s studies, and this process takes time. During this process scientists may make claims that they later retract when more evidence becomes available, or a study that was heralded as a good one may fall into disfavor when it becomes clear that important variables were not controlled. But when scientists do these normal things that are part of the scientific process, they are accused of flip-flopping or selling out to special interests.
The above process is amplified by various types of media which reach millions of people and help create confusion and suspicion when people see narratives change. I saw this happen with hydroxychloroquine. A study would come out indicating hydroxychloroquine (HCQ) had an effect against COVID-19, and all the HCQ proponents would brag about how the issue was settled and HCQ worked. Then another study would come out showing that HCQ did not have an effect, and all the HCQ critics would claim HCQ did not work. In the middle of the storm, certain responsible scientists or organizations would comment about the studies pointing out flaws or strengths, and they would be denounced by the pro or con side. Eventually enough studies accumulated, and they showed not only that HCQ does not work against COVID-19, but also why it does not work. However, by then HCQ had lost its appeal as a political issue.
I have seen this happening again with the drug ivermectin. A study came out of Brazil using a population of 88,012 subjects where ivermectin brought about a reduction of 92% in COVID-19 mortality rate. The pro-ivermectin crowd declared victory, bragged about how they had been right all along, and pointed out that the withholding of ivermectin had led to many preventable deaths from COVID-19. The truth, however, was very different. This was an observational study where the allocation of patients to treatments was not randomized, which can lead to serious biases in the data. And while a sample size of 88,012 subjects sounds impressive, the actual comparisons were performed on much smaller subsets. For example, the 92% result came from comparing 283 ivermectin users to 283 non-users. Additional problems involved the exclusion of a large number of subjects and the lack of control over actual ivermectin use. Finally, an effect of such a large magnitude (92%) could not have gone undetected in better-designed and better-controlled trials, yet those trials have found no such effect.
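To see why a headline percentage from a matched comparison of 283 versus 283 subjects deserves caution, it helps to look at the uncertainty around such an estimate. The sketch below uses hypothetical death counts (the study's actual counts are not given here; the numbers are chosen only so that the relative reduction comes out to 92%) and computes a standard Wald confidence interval for the risk ratio. Even before considering the biases of non-randomized allocation, the interval on a sample this small is wide, because the estimate hinges on a handful of events.

```python
import math

def risk_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio of group A vs group B, with a Wald 95% CI on the log scale."""
    risk_a = events_a / n_a
    risk_b = events_b / n_b
    rr = risk_a / risk_b
    # Standard error of log(RR) for two independent binomial samples
    se = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts, chosen only to reproduce a 92% reduction:
# 2 deaths among 283 users vs 25 deaths among 283 matched non-users.
rr, lo, hi = risk_ratio_ci(2, 283, 25, 283)
print(f"risk ratio = {rr:.2f} -> {100 * (1 - rr):.0f}% reduction")
print(f"95% CI on the reduction: {100 * (1 - hi):.0f}% to {100 * (1 - lo):.0f}%")
```

With these assumed counts the point estimate is a 92% reduction, but the 95% interval spans roughly a 67% to 98% reduction: the result rests on just a couple of dozen deaths, and no confidence interval can correct for the confounding that unrandomized treatment allocation introduces.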
As I have pointed out before, the politicization of science creates a caustic environment where the work of scientists is mischaracterized or attacked by unscrupulous individuals, and this makes the process of science much harder than it already is.
To avoid all the problems mentioned above, I have put together a list of do’s and don’ts regarding how to assess scientific studies in the age of the internet and social media.
1) Do listen to what scientists have to say about the studies. They are experts in their field, and an expert is called that for a reason. They have studied many years and trained to do what they are doing. Do not assume you know more than the experts. Do not merely quote a study in your blog or social media to defend a position. Rather, do report on the debate among scientists regarding the strengths and weaknesses of the study and identify unanswered questions.
2) Do give scientists the time to evaluate and debate the studies and to reassess the studies as more information becomes available. Do not attack scientists for changing their minds.
3) To make up your mind, do wait for several studies to accumulate and for the majority of scientists to reach a consensus regarding the studies. However, this consensus will not be arrived at based on the total number of studies, but rather on their quality. One study of good quality can be more meaningful than dozens of low-quality studies, and the community of scientists (not a single scientist) is the ultimate arbiter regarding the quality of the studies.
4) Do not judge scientists or the results from their studies by their affiliations to companies or other organizations. The studies have to be judged on their merits. Do not make offhand claims that conflicts of interest have corrupted the science if you don’t have any evidence for it. Hearsay, innuendo, and ignorance are not proof of anything.
5) Success in science is measured by the ability of scientists to convince their peers. Scientists who have been unable to convince their peers and who bypass the normal scientific process to take their case to “the people” are a huge red flag. Do not blindly trust the renegade scientists who claim they are ignored by their peers. These scientists are often ignored because their studies are deficient and their ideas are unconvincing.
6) Do not defend and promote a scientific claim just because a celebrity or politician whom you trust or like has endorsed it. Endorsement by a non-scientist of a scientific claim without any hard evidence is irrelevant to the scientific debate. Science is not politics.
If everyone follows these guidelines, we can hopefully restore a measure of rationality to the scientific discourse among the public.
Image by Tumisu from Pixabay is free for commercial use and was modified.
I recently published an article in a scientific journal. Within a few days of publication of the article, the invitations started coming. Scientists I had never heard of before wrote to tell me about how they read my article, thought it was very good, and wanted me to publish in the journals for which they serve as editors. One of these persons who claimed to be an “assitant editor” [sic] stated he was “impressed deeply by the novelty, advance, and potential extensive use” of my research and was “deeply honored” to also extend me an invitation to join their team as an editor or a reviewer. I did not recognize the names of any of these journals, and when I checked the links, I found they were all exclusively open access (online) journals. What had just happened to me is something that most scientists who publish nowadays face. I had been targeted by predatory journals.
Predatory journals are journals that masquerade as legitimate scientific journals but charge authors for publication without providing any editorial and publishing services. One of the cornerstones of scientific publishing is peer review. This means that articles containing research results are submitted to scientific journals where other scientists with comparable knowledge of the field (peers) review them. These reviewers make recommendations to the authors and/or the journal’s editor regarding revisions or publication of the research. A so-called “predatory journal” makes a mockery of the peer review system by indiscriminately publishing any submission it receives with minimal review.
In addition, unlike mainstream journals, predatory journals often hide their publishing fees. The editors of predatory journals target scientists by sending flattery-laden invitations to publish with them without spelling out that there will be a fee involved. The scientist goes through all the work of putting together an article and sending it for review. Once the article has been accepted, an e-mail is sent to the scientist informing them that there is a publication fee involved, and the scientist is not allowed to withdraw their article until the fee is paid.
Predatory journals have been made possible by the advent of the internet. Most predatory journals are open access journals (online journals) that have no print version. There are legitimate open access journals such as PLOS ONE that have a rigorous editorial review process, and their editorial team includes scientists of renown. Predatory journals, on the other hand, have editorial teams made up either of low-caliber scientists who often review articles outside their area of expertise, or of bona fide scientists whom the journal has duped into joining as editors, or even of scientists who have been listed as editors without their consent! Many predatory journals sport names and websites that are similar to those of mainstream journals, and they report fraudulent metrics regarding how often the articles published in the journal are cited to make that journal look good.
To illustrate the problem that predatory publishing can cause, consider the sting operation carried out by the mainstream scientific journal Science. Several members of its team put together a spoof article with glaring scientific errors that would be picked up by any competent reviewer, and they sent it to a few hundred open access journals that had been identified as predatory. They found that 62% of the journals accepted the spoof article for publication. Of those journals that conducted any discernible kind of review of the article (most often limited to details not involving the science), 70% accepted it. Consider not only that the acceptance rate for legitimate articles sent to bona fide scientific journals is between 20 and 30%, but also that this particular spoof article described research designed to be easily identifiable as bad science. What this indicates is that predatory journals are highly likely to contain shoddy science that can mislead scientists searching for clues to solve their research problems, leading to wasted time and resources.
By 2014, about 400,000 scientific articles had been published in 8,000 journals regarded by some metrics as predatory. Today the number of such journals has increased to more than 10,000. If predatory journals were readily identifiable, this would not be as much of a problem, but for the average researcher with limited time on their hands, the process of weeding out the good journals from the bad can prove daunting. The scientist Jeffrey Beall compiled and maintained a public list of predatory journals for a few years, but due to harassment from the publishers of the journals he was forced to take his list down.
Many people believe that scientists that publish in predatory journals are usually inexperienced young scientists who are deceived into publishing in these journals. After all, what possible value can be obtained from accumulating publications in unknown, low-quality journals? One would expect that at the time the researcher’s credentials are evaluated, this would be considered a big negative, right? As it turns out, the problem is much worse than previously thought. I have published a post about several ways by which scientists game the system to advance their career. Well, add publishing in predatory journals to the list! In what is turning out to be not quite predation but a twisted interdependence, many scientists from developing countries, and from institutions with few resources where the metric for academic promotion relies more on the total number of publications, are flocking to predatory journals to beef up their publication numbers.
So what is there to be done? The issues concerning predatory journals as they relate to the criteria for faculty promotions will have to be addressed at the institutional level. The practices of predatory journals of misrepresenting themselves to scientists can be addressed at the judicial level. However, at the individual level there are several guidelines that researchers can follow to avoid not only publishing in predatory journals, but also taking seriously the science contained in them. I myself, for example, view with suspicion anything published in a journal not included in reputable bibliographic databases such as MEDLINE. And, of course, if you get a message in your e-mail describing what a wonderful first class researcher you are and inviting you to publish in a journal you’ve never heard of before and to join its editorial board, leave your ego aside and ignore it!
Image by Sarahmirk is used under an Attribution-Share Alike 4.0 International license.
A long time ago I had a conversation with a colleague regarding another scientist. This other scientist was a seemingly successful individual who had published more than a hundred articles in peer-reviewed journals. My colleague stated that he thought that this individual was not a real researcher because he had merely “gamed the system”.
Somewhat puzzled I inquired as to why he thought this individual, whom everyone regarded as a successful scientist, was not a researcher. He responded that this individual’s research was devoid of any guiding set of questions. It was disjointed and chaotic. Many of this individual’s publications resembled a mass production conveyor belt set up in collaboration with other labs with the aim of churning out articles that addressed low risk questions. Additionally, my colleague argued that in many publications this individual had been included as an author solely because of access granted to other researchers to technology or tissue samples that they would not have otherwise had. He concluded by stating that this individual had made a career by drilling where drilling is easy and mastering the art of serial scientific publishing. My colleague again added, “This person is not a real researcher, he has just figured out how to game the system.”
Now, many scientists are opinionated individuals with strong personalities, and more often than not they don’t have the nicest things to say about other scientists with whom they have butted heads. I don’t know if my colleague was right, but most scientists will tell you that they know someone who fits the unflattering description that my colleague made of this other scientist. There are indeed scientists that use several methods to game the system, and these methods range from those that don’t quite follow the “spirit” of what science should be to those which are flagrantly criminal.
Here I list some of these methods:
1) In science there is a huge pressure to publish. The axiom “publish or perish” embodies this conception of results-oriented science. The upside of this approach is that you can gauge the productivity of researchers by their number of publications. This approach allows accountability and rational planning in the allocation of resources based on performance. The downside of this approach is that it encourages researchers to think about publications rather than science. Thus there will invariably be individuals who will excel at publishing rather than at answering meaningful scientific questions. These individuals have mastered the art of breaking up scientific problems into many little parts each of which will generate sufficient data to produce at least one publication (what has been dubbed the LPU or least publishable unit), and they team up with other like-minded labs to produce a steady stream of publications where they are coauthors in each other’s articles.
2) Another metric employed to evaluate scientists is citations. The concept is very straightforward. If what you publish is of interest to other scientists, they will cite your published articles in their publications. This metric allows evaluators to go beyond the mere volume of published articles in evaluating scientists. Thus scientists who publish a lot of inconsequential articles can be singled out using this metric and separated from those whose publications generate a lot of excitement within the scientific community. However, a way around this approach has been found in the form of the citation tit for tat (i.e. I will cite your articles if you cite mine). The extent to which this practice occurs is difficult to gauge, but it ranges from something that may happen among a few labs in an uncoordinated way to full-fledged “citation cartels” whose members blatantly engage in boosting each other’s citations.
3) In each field of science there are a series of top journals in which all scientists in the field wish to publish. Publication in a top journal means more exposure, more prestige, and more citations. In fact the quality of the journals in which scientists publish is also a metric that is taken into account when evaluating them. But what is a scientist to do if the work they are doing is not good enough to be published in a top journal? As it turns out, you can buy the authorship! In certain areas of the world there are black markets where scientists can purchase a slot in a publication as a coauthor for a certain amount of money.
4) In today’s fragmented scientific landscape where practically a lifetime of study and research is required to become an expert even in relatively small scientific fields, it is virtually impossible for a journal to have enough reviewers to cover the full breadth of topics represented by the articles that are submitted for publication. To remedy this, many journals allow authors to recommend reviewers for their articles. This practice has led to abuses ranging from authors recommending their friends to review their papers, to outright fake positive reviews.
And last, but definitely not least, we come to the most infamous practice of them all to game the system.
5) In the idealized notion of science, scientists formulate hypotheses, perform experiments, and learn from the outcome of these experiments whether it supports their hypotheses or not. However, the cold hard truth is that if none of your hypotheses prove to be true and this goes on for too long, your career may be in serious trouble. Look at it from the point of view of the agencies that fund scientific research. Why support someone who keeps barking up the wrong tree? It is then that some individuals in this bind are tempted to engage in fraud by faking their results. This fakery can range from selective publishing, where positive data is reported and negative data is ignored, to massive and systematic forging of data on dozens of publications.
To be fair, the use of some of the above practices by scientists (with the exception of the most extreme forms of gaming the system) is not necessarily negative. Competent scientists may wish to tackle worthy scientific questions that may take years to solve with potentially little to show for it during the process. However, these individuals realize that if they try to answer these questions head-on they will not be favored by the current evaluation system. Thus they divide their research into low risk “bread and butter” projects designed to meet pesky publication requirements, and those projects where they address the meaningful but risky questions they really want to tackle. These scientists may also figure out that if they collaborate with and cite the right people or recommend friendly reviewers, this will provide them with the stability they need to devote themselves to the important issues.
Many scientists have been known to engage in the more benign forms of gaming the system, but whereas most use these procedures to fulfil evaluation requirements while they address important scientific questions, some use these practices merely to survive and further their careers. Of course, the ultimate evaluation of the achievements of a researcher will come not from citation metrics or number of publications, but rather from the actual real world impact of their research.
This is a metric that can’t be gamed.
Figure by Selena N. B. H. used here under an Attribution 2.0 Generic (CC BY 2.0) license.