When it comes to scientists, one of the most recognized names in our world is that of Albert Einstein. Einstein, who won the Nobel Prize in Physics in 1921, is the creator of the theory of relativity, which led to the prediction of amazing things such as the deflection of light by gravity, gravitational lensing, black holes, gravitational waves, and the expanding universe, all of which have been confirmed by many observations and experiments. Einstein ushered in a revolution in physics. He clearly was a genius, but in some aspects the way his mind worked was no different from that of any average human being.
Deflection of Light
Both Newton’s theory of gravitation and Einstein’s theory of relativity predicted that light would be deflected by a strong gravitational field, but Einstein’s theory predicted a deflection roughly double that predicted by Newton’s theory. During a solar eclipse in 1919, it was observed that light from stars close to the sun was indeed deflected by an amount compatible with Einstein’s theory. This result, which made Einstein a worldwide sensation, has been verified with increasing accuracy many times since. But what is less well known is that Einstein had originally made a calculation error which led him to a deflection value no different from that predicted by Newton’s theory. Had the error stood, the observation of the deflection of starlight by the sun would have disagreed with both theories. Thankfully, by the time the observation was made, Einstein had corrected his mistake, and the measured magnitude of the deflection agreed with his theory.
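For the record, the two predictions differ by exactly a factor of two. For a light ray grazing a body of mass $M$ at impact parameter $b$, general relativity gives a deflection angle of

```latex
\theta_{\mathrm{GR}} = \frac{4GM}{c^{2}b},
\qquad
\theta_{\mathrm{Newtonian}} = \frac{2GM}{c^{2}b} = \tfrac{1}{2}\,\theta_{\mathrm{GR}}.
```

For a ray grazing the sun this works out to about 1.75 arcseconds versus 0.87 arcseconds, a difference large enough for the 1919 eclipse measurements to resolve.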
Einstein had figured out that, according to his theory of relativity, the strong gravitational field of a star could act as a lens and amplify the light of distant stars behind it. But he realized the effect would be very small and fleeting, so he did not deem it worthwhile to publish anything about it. However, in 1936 an amateur scientist named Rudi W. Mandl also figured out that this was one of the consequences of Einstein’s theory. He contacted Einstein, who agreed that the effect was indeed predicted by his theory and, after some pestering, consented to write an article about it. Einstein wrote the article acknowledging Mandl’s contribution, but stated in it that there was no chance of observing this phenomenon. At the time Einstein wrote this, he was thinking in terms of stars, because the realization that there were distinct galaxies beyond our own was relatively new, and astronomers had not yet grasped the real vastness of the universe. But astronomers eventually figured out that entire galaxies could act as gravitational lenses, and the first example of such a lens was discovered in 1979.
Another consequence of the theory of general relativity was the possible existence of black holes, but Einstein was dismissive of these entities too, and in 1939 he published an article using his own theory to argue that black holes did not exist. In the decades after Einstein’s death in 1955, the evidence for the existence of black holes accrued, until in 2019 a black hole was photographed for the first time.
Einstein’s theory of relativity included the possibility of the existence of gravitational waves, but he confided to some colleagues that he was skeptical about their existence or about the possibility that they would ever be detected. He nevertheless published an article specifically examining the math behind such waves, but in this article, as other scientists pointed out to him, he made a calculation error. Two years later, he published another article in which he corrected his previous error and finally laid down the correct mathematical framework for describing gravitational waves. Even so, Einstein remained skeptical about the reality of such waves.
Two decades later, in 1936, Einstein revisited the issue of gravitational waves in another article, in which he argued that the math really did not favor the existence of such waves after all. He sent this article for publication to a scientific journal, but the journal passed it to a reviewer who found a mistake in Einstein’s calculations. When Einstein redid the calculations, he found that the math did support the existence of gravitational waves after all! Still, the whole notion of gravitational waves was too outrageous for Einstein to accept. Until the day of his death, he remained skeptical that these waves were anything but a mathematical construct, and he thought that even if they were real, they would be so faint that it would be impossible to detect them. The first gravitational waves were detected in 2015.
Expansion of the Universe
In 1917, Einstein wrote an article in which he used his theory of general relativity to examine the universe. To his surprise, he found that the math indicated that the universe would expand forever. However, astronomical knowledge at the time held that the universe was unchanging, so Einstein came up with a mathematical fix. He included a “cosmological constant” in the calculations, which prevented the universe from expanding. Other scientists challenged Einstein on this notion to the point that he conceded that his original math without the constant was right, but he still doubted the reality of his conclusions. It was only after Edwin Hubble demonstrated in 1929 that the universe is indeed expanding that the prediction of Einstein’s theory was found to be right.
The above reveals how one of the greatest minds that humanity has ever seen worked. Einstein made mistakes. He was unsure about the implications of his theory. He changed his mind several times. He doubted or dismissed the existence of some of the very things that he is credited with predicting, and he sometimes even lacked the vision to imagine future realities.
This is how real science works and what real scientists are like. Science is messy. Scientists screw up. They vacillate, they change their minds, and sometimes they are unable to grasp the real consequences of the very things they propose. This IS normal, and it happens to everyone. But in the current poisoned climate, where scientists on the “wrong” side of the culture wars are attacked for making mistakes, flip-flopping, saying the wrong thing during an interview, and so forth, one wonders whether, for example, the discoveries that Einstein made would have been possible if he had been subjected to the scrutiny and slander that some scientists face nowadays.
The photograph of Albert Einstein by Orren Jack Turner obtained from the Library of Congress is in the public domain because it was published in the United States between 1923 and 1963 and the copyright was not renewed.
I recently watched a documentary about how, from 1994 to 2004, a person impersonating a police officer called dozens of fast-food restaurants in over 30 states and convinced their managers to strip-search one of their employees. The caller would vaguely describe an alleged female employee of the restaurant and claim she was suspected of stealing money from a customer. The manager would then bring in the employee who most closely resembled the description, and the caller would give her the option of submitting to a strip search there or of being taken to the police station. The caller spoke calmly, fluently, and with a very authoritative voice. He had a great command of psychology, and during the grueling sessions, which often went on for hours, he was able to manipulate otherwise decent, law-abiding citizens into performing and submitting to lewd, unlawful acts.
The most dramatic of these events involved Louise Ogborn, an 18-year-old employee of a Kentucky McDonald’s restaurant, who was strip-searched by an assistant manager and her fiancé; in her case the whole ordeal was recorded by a security camera. This video, which shocked the nation, was played in a lawsuit Ogborn brought against McDonald’s, in which she won a settlement.
Whoever the caller was, he destroyed lives. Many of the managers and associated people who conducted the strip searches were fired and shunned by their communities, and some were brought to trial and convicted. Many of the women who were strip searched suffered from post-traumatic stress disorder. A man suspected of being the caller was arrested and brought to trial, but he was acquitted by a jury.
Most people are puzzled by occurrences such as these. How can average people be manipulated by a mere phone call into carrying out or enduring these acts? Why not just refuse and hang up the phone? Why not just say no to being strip searched?
And this brings us to the famous Milgram experiment.
In a series of experiments begun in 1961, Yale University psychologist Stanley Milgram researched how people react to authority figures. The subjects (all men) were asked to participate in what was described as a “learning task” investigating the effect of punishment on learning. The task involved the subject and a confederate of the experimenter, who were seemingly sorted at random into the roles of “teacher” and “learner”. However, the subject was always assigned the role of “teacher”. The teacher and the learner were then seated in separate rooms, but they could hear each other over a microphone. The learner was supposedly connected to an electrode, and the role of the teacher was to read words out loud, which the learner was supposed to memorize. The teacher would then ask the learner to repeat the words, and if the learner failed to repeat them correctly, the teacher was to deliver electric shocks of an intensity that increased with each mistake.
The learner did not really receive any shocks but pretended to, and he would also make mistakes on purpose. At the 75-volt level, the learner started screaming. The screaming grew louder as the intensity of the shocks increased, and the learner complained that his “heart was bothering him” as the 300-volt level approached. After the 300-volt level was reached, the learner went silent. As the subject (the teacher) delivered shocks of increasing intensity that elicited louder screams, the experimenter would prod the subject to continue whenever he had qualms about delivering the shocks, reassuring him that the shocks did not inflict any permanent damage and that continuing was necessary for the study.
The results of the experiment horrified Milgram.
Despite the learner’s increasingly loud screams, 65% of the subjects kept delivering shocks up to the maximum 450-volt level, even after the learner went “silent” at the 300-volt level. Many of the subjects experienced serious distress as a result of what they were asked to do; nonetheless, a large number of them complied with the experimenter’s requests. Milgram surmised from his experiments that, when prodded by a person whom they believe to be an authority figure (in this case the experimenter), many individuals will comply with that person’s instructions even if the instructions go against some of their strongest moral imperatives against harming fellow human beings. Other researchers at the time repeated experiments similar to Milgram’s and obtained broadly similar results.
The methodology and conclusions of Milgram have been criticized, and national experimental guidelines enacted in the seventies have rendered these types of experiments unethical, so they cannot be reproduced today. But more benign forms of the experiment have been conducted with similar results. It is because of this that some people argue that the acts performed or endured by people in the strip-search phone hoax in response to what they thought were the requests of a policeman (an authority figure) can be explained in the context of Milgram’s experimental results.
The original context of the Milgram experiment was about people hurting other people when prodded by an authority figure, but I wonder if this prodding can be employed in more subtle ways. In present times, we are faced with the reality that large numbers of people have decided to forgo reason and accept misinformation, disinformation, and conspiracies such as those related to the antivaccine movement, the denial of the results of the 2020 election, or the bizarre QAnon world view. And these people have their trusted messengers whom they revere and whose utterances they accept as true. Could it be that these people view these trusted messengers as authority figures? Could it be that when these authority figures tell them to essentially disavow or ignore common sense, they somehow feel it’s OK to do it even though something inside them tells them that what they are accepting is inaccurate or wrong?
I don’t know if this is true, but in view of the results of the Milgram and other similar experiments, it is certainly a possibility to consider.
The image by Nick Youngson from Pix4free is used here under an Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0) license.
One of the accusations that I often hear nowadays is that a given person arguing for something is biased. Those who promote conspiracy theories use this very epithet against anyone who dares to criticize them, and likewise, those who dismiss conspiracy theory proponents argue that they are the ones who are biased. In the popular mind, a “bias” is a negative thing to have, and being biased is synonymous with being unable to know the truth. The “bias” label is particularly contemptuous when leveled against a scientist. After all, scientists are in the business of discovering the truth about the way matter and energy behave in the world around us. How can scientists discover the truth if they are biased? In popular culture, a biased scientist is as blind as a bat incapable of echolocation, and whatever science they do will not reflect reality.
Let’s first address this issue by stating the obvious. We are all biased, and most of the time the biases we have are not something we have consciously chosen to have, but rather they are a consequence of the way the wiring of our brain has interacted with our particular life experience and the stimuli to which we are currently exposed. But being biased is not something that is necessarily negative. In fact, as it turns out, bias has a useful function! It allows us to simplify the complexity of our world in order to gain a measure of control over it. A bias allows us to quickly take action rather than be paralyzed by a multiplicity of seemingly equivalent alternatives. From this point of view, bias actually has a survival value and may have played a role in the successful evolution of our species.
The downside of bias is, of course, that you will blind yourself to the truth. Thus, to avoid bias, some people suggest that we should keep an open mind. However, this suggestion, although well meaning, is misguided. If you keep your mind too open, people will dump a lot of trash into it. The proper way to deal with bias is not to keep an open mind. The proper way to address bias is to strike a balance; what I call “finding the Goldilocks zone”. This entails accepting that we are all biased, that we can’t help being biased, and that, in fact, a bit of bias can be a good thing, while at the same time taking steps to counter the excess bias in ourselves through thought and action.
Now let me tell you how scientists do this.
But before I do that, let’s note that science has a healthy inbuilt bias: a bias for established science. The majority of scientists believe that accepting something as true when it is false is a greater evil than rejecting something as false when it is true. This is because established science has at least grasped some aspects of reality. If other scientists want to replace established science with a more complete description of reality, then the burden of proof is on them. This bias for established science is needed to protect science from error.
Thus, you may ask: If scientists are biased for established science, how can they discover anything new?
The answer is by using anti-bias protocols. For example, scientists will analyze or score the results of some experiments in a blind fashion, meaning that the person doing the scoring or the analysis does not know the identity of the different experimental groups. In clinical trials, this involves the patients not knowing which treatment they received, or even both the patients and the doctors not knowing which treatment is which (a double-blind protocol). Scientists will also seek to reproduce each other’s observations or experimental results; if an observation or an experimental result cannot be reproduced, it will not gain traction. Finally, some funding agencies will devote a portion of their resources to funding scientists with unorthodox views, to promote debate within a scientific field.
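To make the blinding idea concrete, here is a minimal sketch in Python of how samples can be hidden behind codes before scoring and decoded afterward. The function names and group labels are made up for illustration; real labs use their own software and procedures, but the logic is the same.

```python
import random

def blind(labeled_samples):
    """Hide group identities behind opaque codes before scoring.

    labeled_samples: list of (group_label, measurement) pairs.
    Returns (coded_samples, key): coded_samples pairs an anonymous code
    with each measurement; key maps code -> group label and is held by
    someone other than the scorer until scoring is finished.
    """
    shuffled = labeled_samples[:]
    random.shuffle(shuffled)   # break any ordering that hints at the groups
    key, coded = {}, []
    for i, (group, value) in enumerate(shuffled):
        code = f"S{i:03d}"
        key[code] = group
        coded.append((code, value))
    return coded, key

def unblind(scores, key):
    """scores: dict mapping code -> score assigned by the blinded scorer.
    Regroups the scores under their original group labels."""
    by_group = {}
    for code, score in scores.items():
        by_group.setdefault(key[code], []).append(score)
    return by_group
```

Only after every sample has been scored is the key applied, so the scorer’s expectations about “treated” versus “control” cannot leak into the numbers.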
But how can non-scientists counter their biases?
First of all, you have to be exposed to all the facts. For example, if you listen only to conservative or liberal media, you will not learn about some issues or views. You need to listen to the other side. However, this does not mean that you have to force yourself to listen to the conspiracy-laden drivel coming out of far-right or far-left media. Instead, try to locate a moderate news source, or a news source that leans slightly towards the opposite side of the political spectrum from the one you favor. The idea is not to open your mind to whatever these news outlets have to say, but rather to become aware of the issues they are covering, why they find them important, and what their arguments are. Avoid insulating yourself and living in an “echo chamber”.
Second, try to identify a person from the other side who holds views different from yours and sit down to have a talk one day. But when I say “person”, I don’t mean a nutjob who spews far-out nonsense and will engage you in a shouting match. Choose a reasonable person. There are quite a number of these out there. A rule of thumb to choose a reasonable person is to look for someone who, despite disagreements, accepts that “the other side” has made valuable contributions and is necessary to the debate. Nothing beats discussing issues with an individual who disagrees with you but also respects you.
And finally, learn to identify the characteristics of bias. Sweeping generalizations, innuendo, exaggerations, hearsay, judging the many by the actions of the few, the creation of strawmen, ignoring weaknesses in arguments, not seeing the forest for the trees, and attacking the person instead of the argument are all things that signal an emotional and irrational approach to issues that is indicative of a person who is biased and therefore unable to correctly grasp reality. Ask yourself if you are exhibiting these traits when you engage in arguments. Ask others who will tell you the truth if you are exhibiting these traits.
So to wrap it up, a little bias is acceptable and even healthy, but too much bias can mess up your perception of reality. When it comes to bias, try to find the Goldilocks zone.
The image “Goldilocks tastes the porridge” from the New York Public Library is used here under a Creative Commons CC0 1.0 Universal Public Domain Dedication (“CC0 1.0 Dedication”) license.
2020 Election Redux: My Opinion is as Valid as Yours! When Do We Declare Someone to Be Unreasonable?
I have debated many conspiracy theorists on Twitter. In the majority of the cases the arguments they put forth are a mishmash of innuendo, hearsay, selective quoting of the evidence, exaggeration, misinformation, and ignorance. After some back and forth where I rebut their claims with evidence and facts, we reach a point where these individuals argue that in the end, it’s my opinion against theirs, and that I have my trusted sources and they have theirs. The implication is, of course, that both are equivalent. But when it comes to certain issues, nothing could be further from the truth. Take for example the notion that the 2020 election was fraudulent, and that Mr. Trump really won by a landslide.
Although this may seem like a political issue that I should not be discussing in a science blog, I have already explained that the questions “Who won the election?” and “Was the election fraudulent?” are both scientific questions because they can be answered with evidence. Thus, in my exchanges with 2020 election conspiracists I present the facts:
Out of 64 cases that Trump and his allies brought to federal courts, he lost 63. Conspiracists claim that most of these cases were dismissed on technical or procedural grounds without considering their merits, but this is not true. Only 20 of these cases were dismissed before a hearing on the merits, whereas 30 cases were dismissed after the merits were considered, and 14 were withdrawn by Trump and his backers before the merits were heard. In several of the cases, the courts, which included Trump-appointed judges, issued stinging rebukes of the unsupported claims of election fraud. A group of prominent conservatives has systematically reviewed the claims brought by the Trump campaign and its allies in each of these lawsuits and found them to be unsupported by the evidence.
The Department of Justice, led by Trump’s Attorney General William Barr, found no evidence of election fraud. Neither did the Cybersecurity and Infrastructure Security Agency (CISA) of the Department of Homeland Security and other government agencies. Multiple audits and recounts of the results in swing states affirmed that Mr. Trump lost. A Michigan Republican state senator, Ed McBroom, led an 8-month investigation into the legitimacy of the Michigan election and found no evidence of fraud. A GOP-backed review of the Arizona election found that Biden had indeed won. Official examination of voter fraud claims in Georgia did not reveal any fraud of a magnitude that could overturn the election. The Trump campaign employed a research firm to review voting data from six swing states, but the firm did not find anything that would have overturned the result of the 2020 election. Trump was told he lost by some of his inner circle of advisers, but he ignored them.
There were no major problems with drop boxes for mailed ballots. The expansion of postal voting did not lead to widespread fraud. Mail-in ballots are secure and were widely used in the United States even before the 2020 election. There is no evidence that Biden received more than 8 million excess votes in the 2020 election. A scientific study analyzed statistical claims of alleged systematic voter fraud in the 2020 election and found them to be unconvincing. The movie “2000 Mules”, which posits that people aligned with Democrats were paid to illegally collect ballots and deposit them in drop boxes in several swing states, has been conclusively debunked. The affidavits claiming voter fraud that Trump and his allies presented to the courts consisted mostly of hearsay, guesses, speculation, or ignorance of election procedures, and could not be taken as proof of voter fraud.
Trump’s lawyer Rudy Giuliani has been suspended from practicing law in New York for making false claims about the 2020 election. Another Trump lawyer, Jenna Ellis, was censured in Colorado for making false claims about the 2020 election. Trump’s lawyer Sidney Powell, who is being sued by the voting machine company Dominion for claiming that the company stole the election from Trump, is arguing that “no reasonable person” would believe her Dominion conspiracy theories were “statements of fact”.
The Dominion lawsuit has also uncovered that the talking heads and executives of the Fox News channel did not believe the election fraud claims of Trump and his allies, but nevertheless they kept giving them airtime to avoid losing viewers. Thus, all the people who relied on Fox News as a trusted information outlet for commentary on the election fraud issue were willfully deceived by individuals who did not believe that what they were communicating to them was true. But there is still a majority of Republicans who think that the election was stolen and that there is solid evidence for it.
So far the evidence indicating that there was no fraud in the 2020 election of a scale that would alter its outcome is truly formidable. Nevertheless, election conspiracy advocates dismiss the investigations carried out by election officials, elected representatives, watchdog groups, the media, and government agencies as biased or indecisive, and they dismiss the court case results as not being based on merits. They also label any Republicans involved (many of whom voted for Trump) “RINOS” (Republicans In Name Only), while claiming that others are not to be trusted because they are part of the “Deep State”, part of the “fake news” media, etc.
There is a criterion to decide whether someone is acting reasonably or not. This involves asking them, “What evidence would change your mind?” If the person cannot answer this question and commit to changing their mind if the evidence is produced, then we can assume that this person is being unreasonable. The opinion of an unreasonable person is not equivalent to that of a reasonable one, and this is not a trivial point. When unreasonable persons act and/or sway others to act based on falsehoods, this can lead to dire consequences such as the storming of the Capitol on January 6th 2021 by a mob enraged over an election that was never stolen.
Being reasonable matters.
Image by El Sun from Pixabay is free to use for commercial and non-commercial purposes.
Although my website is called ratio scientiae, which is knowledge of the physical world, in today’s post we are going to indulge in some crossover with ratio sapientiae, which is knowledge of the divinity. And what better place to start something that deals with the divinity than with the universe.
As I have written in a previous essay, the universe is insanely big. The furthest object that humanity has launched into space is the Voyager 1 probe, which has taken 45 years travelling at 35,000 miles per hour to get 22 light-hours away from Earth. By comparison, the nearest star, Alpha Centauri, is 4.24 light-years away from Earth. Our galaxy, the Milky Way, is 105,000 light-years wide. The nearest galaxy to ours, the Andromeda Galaxy, is 2.5 million light-years away. Andromeda, our galaxy, and others are part of a group of galaxies called the Virgo Supercluster, which is 110 million light-years wide. The Virgo Supercluster is part of an even larger structure of superclusters of galaxies called the Pisces–Cetus Supercluster Complex, which stretches 1 billion light-years across space and is one of tens of thousands of such structures in the universe. The visible universe extends more than 13 billion light-years away from Earth in all directions, and it contains more than 7 trillion galaxies, 30 billion trillion stars, and as many planets.
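To get a feel for these numbers, here is a quick back-of-the-envelope calculation using the figures quoted above (the miles-per-light-year conversion is the standard value):

```python
# How long would travel at Voyager 1's speed take across these distances?
MILES_PER_LIGHT_YEAR = 5.879e12   # standard conversion, ~9.46 trillion km
SPEED_MPH = 35_000                # Voyager 1's speed, per the figures above
HOURS_PER_YEAR = 24 * 365.25

def travel_years(light_years, speed_mph=SPEED_MPH):
    """Years needed to cover a distance in light-years at a given speed."""
    miles = light_years * MILES_PER_LIGHT_YEAR
    return miles / speed_mph / HOURS_PER_YEAR

print(f"To Alpha Centauri (4.24 ly): {travel_years(4.24):,.0f} years")
print(f"Across the Milky Way (105,000 ly): {travel_years(105_000):.2e} years")
```

At Voyager 1’s speed, the trip to the nearest star alone would take on the order of 80,000 years, and crossing the galaxy would take billions, which is why "insanely big" is, if anything, an understatement.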
So my question is: Biblically speaking, what is the point of this immense humongously ginormous vastness?
The Bible doesn’t say much about the universe beyond the understanding of people living one or two thousand years ago. Stars in the Bible are nothing more than points of light (nowhere is it mentioned that they are suns), and their purpose, along with that of the sun and the moon, is to give light upon the Earth, to separate day/light from night/darkness, and to serve for signs and for seasons, and for days and years (Genesis 1:14-18). But this Earth-centric view of the cosmos can only apply to the universe visible with the unaided eye, which comprises about 10,000 distinct stars, which raises the question of the purpose of the rest of the universe, invisible to the ancients. Additionally, considering that in our galaxy alone there are at least millions of Earth-like planets orbiting sun-like stars, another issue about which the Bible doesn’t say much is whether there are intelligent beings other than us in the universe.
Of course, because we are speculating about the motivations of the divinity, there can be many answers to this question. For example, you can argue that it is indeed only us in the universe and that God created all the immensity we have uncovered with telescopes to make us feel humble, and there is no way to refute this argument short of being contacted by extraterrestrials. However, as far as I’m concerned, all those billions upon billions of galaxies, and even more stars and planets, seem like a waste of creative effort if the sole purpose of creation is us. It also seems to me statistically unlikely that we are alone in the universe.
You would expect that now that humanity has attained advanced knowledge about the universe, God would provide us with an update more attuned to our level of understanding of the cosmos. However, according to Roman Catholic and most protestant faiths, all of God’s revelation is contained in the Bible and no new revelation will be forthcoming.
This is not to say that all Christians accept this premise. A notable example is the Church of Jesus Christ of Latter-day Saints, or LDS Church (the Mormons), which holds that revelation is still occurring. Whereas the Bible says nothing about the New World (the Americas), the Mormons believe that people from Jerusalem came to the Americas and that Jesus visited them after he rose from the grave and taught them the Christian Gospel. This information, they hold, was revealed to their prophet, Joseph Smith, who recorded it in what became the Book of Mormon, published in 1830. A remarkable revelation within the Mormon canon is the doctrine of eternal progression, whereby humans have the potential to become gods and engage in acts of creation, giving rise to new populated worlds. And these acts of creation are ongoing: some of these created worlds have passed away, others such as ours are still extant, while many others are in the process of being birthed.
Thus the LDS Church seems to offer an explanation for the size of the universe and whether there are other intelligent beings out there, although other Christians disagree. Additionally, the Mormon cosmology is not completely compatible with current scientific ideas about the universe. Nevertheless, the Mormon claims seem to me at least a step in the right direction towards providing that much needed update, because the Bible, in view of our present knowledge of the universe, comes across as a highly parochial account of the cosmos, which has not even begun to play catch up with other accounts such as that of the Mormon faith. But I think that this will change.
Now I will proceed to make my prediction/prophecy for the ages.
Unless humanity is destroyed, say by a collision of Earth with a large asteroid or some other calamity (in which case doctrinaire Christians will be vindicated), humanity will sooner or later begin its trek towards the stars. At first it will be baby steps, such as bases and then settlements on the Moon and Mars, and perhaps even on some of the moons of Jupiter or Saturn. Then, if we can develop the technology to cover the vast distances of space, even if it is with generational ships or some form of suspended animation for the crews of these ships, we may actually begin travelling to the nearest stars.
So, lo and behold! As the scenario outlined above unfolds, I predict that someone within the mainstream Christian religions will claim to have received new revelation from God, a gospel for the space age if you will, which may even include an apparition by Jesus himself. In this new revelation the questions I have raised in this essay will be answered, and a new plan will be revealed for humanity to be fruitful, multiply, and expand into the cosmos, where they will meet other intelligent beings. This claim will be attacked by mainstream Christian churches, but it will spread like wildfire and become the leading Christian religion of the space age.
When will this happen? I don’t know. It may be in 10, 100, 1000 or more years, but if it happens, let future generations know that people in this century read about it here first!
Image by R. Halfpaap from flickr is used here under an Attribution-NoDerivatives 4.0 International (CC BY-ND 4.0) license and has not been modified from the original.
DNA in Basic and Applied Science: From the Building Blocks of Life to Heredity and Catching Bad Guys
I have been watching true crime documentaries involving cold cases. Cold cases are crimes that went “cold” due to lack of progress in the investigation. What is remarkable about some of these cold cases is that many of them were solved decades after the crime thanks to DNA evidence. DNA technology along with national databases of DNA profiles of convicted offenders and arrestees, as well as coordination among law enforcement agencies, has brought about a revolution in crimefighting and greatly increased the odds of catching the criminals.
But how did this come about? The answer is curiosity. Scientists reasoned that biological entities must be made of basic parts or building blocks that are put together to create the whole. Thus the original research into living things was a quest to find, describe, and understand the nature of these building blocks, how they fit together, and how they work.
In 1869 the Swiss physician Friedrich Miescher was working in the city of Tübingen in Germany with the cells present in the pus that he painstakingly isolated from surgical bandages. While preparing a solution from these cells he noticed a material that precipitated out of the solution when he acidified it. Miescher would go on to demonstrate that this material, which was not made of proteins or lipids, was present in the nucleus of the cells, and he called it nuclein (which we now know to be DNA). Miescher later worked on obtaining purer extracts of nuclein and analyzing its composition, and he also discovered that sperm were a particularly good source of nuclein. His work with sperm led him to become interested in heredity, but he never considered that nuclein could be solely responsible for it.
Scientists kept on gaining a more profound understanding of the makeup of nuclein and its properties. For example, in 1881 the German biochemist Albrecht Kossel found that nuclein contained four nitrogen-containing bases: adenine (A), cytosine (C), guanine (G), and thymine (T). In that same time period, it was becoming evident that the nucleus of cells played an important role in cell division and multiplication and perhaps heredity. Based on multiple lines of evidence, in 1882 the German biologist Walther Flemming suggested that the chromosomes, which are structures found inside the nucleus, contained nuclein, and in 1902 Theodor Boveri and Walter Sutton independently postulated that chromosomes were involved in heredity.
The early 20th century saw the rise of the field of genetics and triggered the search for the physical nature of the unit of inheritance: the gene. Thomas Morgan proposed in 1911 that genes were present in chromosomes. Although some scientists began to entertain the notion that nuclein, now renamed "nucleic acid," could be responsible for heredity, many still favored proteins, as they did not understand how the chemical makeup of nucleic acid could account for it. This issue was finally settled when Oswald Avery, Colin MacLeod, and Maclyn McCarty, working with bacteria in 1944, and Alfred Hershey and Martha Chase, working with viruses in 1952, demonstrated that the molecule responsible for heredity was the nucleic acid now formally called DNA (deoxyribonucleic acid).
A series of discoveries then followed regarding not only the nature of DNA but also how to work with it. James Watson and Francis Crick in 1953 proposed their famous "double helix" model for the structure of DNA and suggested a mechanism by which it could carry the genetic information. In 1956 Arthur Kornberg discovered the enzyme that replicates DNA (DNA polymerase). From 1961 to 1966 the genetic code was cracked by several scientists including Robert Holley, Gobind Khorana, Heinrich Matthaei, and Marshall Nirenberg. In 1977 Frederick Sanger, Allan Maxam, and Walter Gilbert developed methods to sequence DNA, and in 1985 Kary Mullis developed the polymerase chain reaction (PCR), which allowed the production of large numbers of copies of DNA from small samples.
As the mechanisms of heredity at the molecular level became clearer, scientists began asking other questions, such as the role of genes in disease and in other important biological processes, which ushered in a revolution in medicine and the biological sciences that is still ongoing. But at the same time, other groups of scientists began asking different types of questions. They wondered if DNA could be used to develop practical applications.
In 1984 the British scientist Alec Jeffreys succeeded in creating the first genetic fingerprint of a human being, and he began applying his genetic fingerprinting techniques to settle paternity disputes and immigration cases. He also began adapting his techniques for use in criminal cases, something which he called genetic profiling for forensic use. In 1986 the police contacted him to solve the case of two women who had been raped and murdered. Using DNA evidence, Jeffreys not only proved that a man arrested for the crime was innocent, but he also succeeded in finding the criminal from samples of individuals that the police had obtained from the area.
From there on, the use of DNA profiling greatly increased in the forensics field and was improved to the point that today even a small droplet or stain from a biological fluid or a single hair with a follicle left at the scene of a crime can be used to produce enough DNA to aid in establishing the innocence or culpability of a subject. And the future may hold even more amazing forensic applications of the process. Today’s DNA profiling methods are effective if a match can be found in a database or in a sample from a suspect. However, even in the absence of a match, DNA profiling has the potential to reveal key aspects about the originator of the sample being profiled such as eye, hair, and skin color, and probable physical appearance.
And all this started because some scientists were curious about what living things are made of.
DNA image by Виталий Смолыгин is in the public domain.
A while ago on Twitter, I saw people were tweeting about individuals who hunted giraffes and posted pictures of themselves posing next to their kills. Because I thought the whole discussion was one-sided, I responded by posting a link to the explanation that a woman hunter, Tess Halley, provided as to why and how she hunted a giraffe, so everyone would be aware of the other side of the argument. I followed that by posting a link to an interview with her.
The effect this had was like spraying gasoline on a fire.
My followers on Twitter called Halley: vile, sad, disgusting, despicable, heartless, a coward, a monster, scum, a sociopath, immoral, and a POS. Her killing of the giraffe was labelled egregious, sickening, outrageous, and appalling. She was branded a person without a moral compass who destroys the balance of the Earth and nature, and who deserves to burn in hell. A few people criticized hunters in general while the majority just criticized trophy hunting in particular. Others only chastised Halley for posting the picture or at least considered it an aggravating factor.
I know several hunters personally, and they are all decent individuals, so I took exception to the comments my Twitter followers made. In this post I’m going to recap some of my arguments while analyzing the apparent reasons everyone was so outraged over Halley killing a giraffe and my thoughts about it.
Was it because she killed a sentient animal?
A sentient animal is one which has the ability to perceive or feel things. So this clearly goes beyond giraffes, and covers, for example, farm animals such as cows, pigs, and chickens. In the United States more than 8 billion chickens, 100 million pigs, and 30 million cattle are slaughtered each year, and the slaughter of these animals is a traumatic process which stresses the animals before they die. I suspect that the majority of the people who displayed indignation at Halley killing the giraffe also eat meat, so I have to point out that by buying meat, you are financing those who kill sentient animals (cows, pigs, and chickens) to feed you. By this reasoning, in terms of killing a sentient animal, she is no worse than people who eat meat. Indeed, a vegetarian wrote that whereas trophy hunters kill 70,000 wild animals each year worldwide, meat eaters finance the killing of 70 billion farm animals. He also argued that whereas wild animals enjoy several years of freedom before they are killed, farm animals lead short, restricted, miserable lives before they are slaughtered, and those who eat meat support all this.
Was it because she did not have to kill the animal for food? Was it because she did it for sport?
People could argue that killing or paying others to kill animals specifically raised for food is justified (although vegetarians would disagree), but Halley killed a wild animal for sport, and that’s not acceptable.
This is a value judgement. However, I have to point out that Halley was not a poacher. She obtained the permission of the authorities of the preserve where this giraffe lived. The giraffe belonged to a managed herd, and in these herds animals have to be killed (culled) occasionally for the overall good of the herd. Giraffe populations, while still low, are increasing. The giraffe was also not left out in the field to rot, all of its body was used. Although Halley claims that she is foremost a hunter, she views her kill as fitting within the framework of a conservation effort. There are groups of hunters that have spearheaded efforts to protect wildlife and their habitat through organizations such as Ducks Unlimited.
It must also be mentioned that in the United States today the majority of people have no need to kill wildlife for food in order to survive. Therefore, most hunting is hunting for sport. From this vantage point, the killing of that giraffe by Halley was no different from the killing of deer, elk, moose, boars, etc. There are 15 million hunting licenses issued in the United States each year, and it is estimated that close to 5% of the population of the United States engages in hunting. If you include people that fish at least once a year (yes, fishing is a form of hunting that kills a sentient animal), that covers 55 million Americans. Should all these people in the United States receive the moral condemnation that Halley received?
There are multiple reasons for hunting, but the hunters I know hunt for the experience, the challenge, the bonding (if they are hunting with others), and the proximity to nature. Many hunters will tell you that killing your own food beats buying it at the supermarket. And the vast majority of hunters are mindful of the need for conservation. They buy their hunting or fishing permits and follow the laws.
Was it because she posted a picture of herself smiling next to the giraffe?
This struck a nerve with many people who argued that if you are going to kill the giraffe, so be it, but at least don’t post a picture of yourself smiling next to it on social media.
This is another value judgement, but it must be pointed out that the activity of hunting is as old as humanity, and so is the pride hunters take in their kill and their desire to document it. In humanity’s past this took place in the form of stories, paintings, and trophies (tusks, horns, etc.), and with technological advances this has also included photographs and videos. The most visible example of this practice is photos of fishermen posing with the fish that they have caught. Thus social media is the next logical extension of this activity.
So why was it?
I suspect the real reason why people were so outraged is the same reason why they would be outraged if someone killed a cute puppy, but wouldn’t bat an eye if someone killed a rat, even though both are sentient animals. Some animals have just gained a cultural foothold in the empathic human consciousness. Large majestic animals such as giraffes, elephants, or lions have an iconic appeal to the contemporary human psyche that other animals just don’t have, and their killing triggers strong emotional reactions even if it is carried out within a legal conservation-oriented framework.
I am not a vegetarian, and I am not a hunter, although I have caught and eaten fish, and I use small animals for research. I rationalize our use of animals in terms of humans being the dominant predator of the planet. Although I like the outdoors and often go on short hikes, my regular life is far removed from nature. From this vantage point I believe that conservation-minded hunters are closer to nature than me or anyone with my lifestyle. Finally, I also think that viewing nature through the prism of human morals is going against the very essence of what nature is, and I have written posts about this in my blog. But in the end, societies decide what is acceptable or not. People can always lobby their elected representatives to ban the importation of hunting trophies of the animals they care about, or people can pressure social media companies to ban the posting of photos of hunted animals as part of their terms of service. I think that these two initiatives would be more effective than short-lived outbursts of social media outrage.
So those are my thoughts on this issue. What do you think?
I do not own the rights to the photograph of Tess Halley posing with the giraffe she killed in South Africa. This photo has been widely circulated in social media and is used here under the doctrine of Fair Use.
Antivaxxers are people who deny the need for or the efficacy of vaccines and their role in controlling some of the most dreadful diseases in the history of humanity. Not only this, but antivaxxers also claim that vaccines have huge side effects that actually harm more people than they benefit, and they have been particularly vocal about the COVID-19 vaccines. All this is, of course, not true. The COVID-19 vaccines have saved millions of lives by decreasing the proportion of hospitalizations and deaths among the vaccinated to a much greater extent compared to the unvaccinated. Antivaxxers have also spread misinformation and lies about the COVID-19 vaccines that have been debunked over and over and over. Nevertheless, they ignore this while expressing outrage at pro-vaccine people, at best calling them "sheep" (sheeple), or at worst claiming that they are being manipulated by or are part of an immoral and unethical alliance of the government, pharmaceutical companies, and other organizations bent on profit and societal control.
So what should be my approach to dealing with antivaxxers? I see two alternatives: the inflammatory approach and the conciliatory approach.
Considering the high effectiveness of the COVID vaccines at decreasing the hospitalizations and deaths from COVID-19, considering that antivaxxers have been waging an aggressive campaign of spreading misinformation about the COVID-19 vaccines on social media, and considering that online misinformation is linked to COVID vaccination hesitancy and refusal, it is not surprising that many people were harmed or killed by the misinformation spread by antivaxxers. During the peak of the Delta variant, the daily consequences of spreading misinformation were estimated at 300 deaths, 1,200 hospitalizations, and 20,000 COVID-19 cases, with a cost of 50 to 300 million dollars. I am appalled and outraged at how many lives antivaxxers have damaged.
So my question is: should antivaxxers pay for their crimes?
This is not a far-fetched concept. Alex Jones, the talking head from Infowars, spread misinformation and disinformation about the shooting at Sandy Hook Elementary School where 20 children and 6 adults were killed. He said that the shooting was a false flag operation carried out by anti-gun groups, that no one died, and that the children were actors. As a result of this, the families of the murdered children experienced years of harassment by the followers of Alex Jones. Thankfully, he was brought to court and tried and found guilty, and now he has to pay the Sandy Hook families millions of dollars. Alex Jones tried several defenses including his right to free speech, but the judges didn’t buy it. He spread falsehoods and this hurt people. That was the bottom line. So, if anything, the case against antivaxxers should be even more clear cut, because many people who followed their ideas were harmed or died.
Although in the case of Alex Jones the Sandy Hook families sued him for slander, a person or the family of a person harmed by antivaxxers could sue them for fraud. They would have to prove that the antivaxxer spread the misinformation while knowing that it was false. They would have to prove that the person who was harmed relied on the antivaxxer in their decision to forgo vaccination. And they would have to prove that there was economic loss (hospital bills, lost wages, funeral expenses, etc.). There are, of course, additional subtleties that have to be taken into account depending on the specific antivaxxer entity or person being sued, but this is a possible approach.
Following this rationale, I think that at the very least, any antivaxxer that fulfils the conditions outlined above should be sued for the medical and funeral expenses incurred by the people (or their relatives) who followed their advice in good faith and were harmed or died.
The above is the inflammatory approach. It’s the sort of thing you say/write to scandalize and infuriate people and increase their engagement, drive traffic to your blog, website, or podcast, and grow your brand. This approach makes tempers flare and generates a lot of heat and ill will as invectives fly back and forth and hatred is spewed everywhere.
But there is another way to do this. It’s probably not as successful for getting engagement, but it may be more useful to society, civil discourse, and the psychological well-being of the public.
Every time two groups of people have strong disagreements on some things, the recommended course of action is to find areas of agreement. Antivaxxers are concerned about the side effects of COVID-19 vaccines. The evidence we have indicates that the frequency of serious side effects as a result of these vaccines is very low, which makes the vaccines much safer than having the disease. However, even if side effects are rare, when hundreds of millions of people are vaccinated, the absolute number of cases starts to add up. And some of these cases are severe enough that exceptionally susceptible people may end up impaired and saddled with huge debts due to their medical bills. Shouldn't these people be compensated?
I would venture that most people, whether pro or anti-vaccine, would agree with this. Unfortunately, this is not what is happening. There is a federal program known as the Vaccine Injury Compensation Program (VICP) that is available to people who have been injured by the routine vaccines that are administered in the United States. This program in its lifetime has awarded $4.7 billion in compensation for vaccine injuries to cover 36% of the claims it has received. But this program does not cover the COVID-19 vaccines. Compensation for harm from the COVID-19 vaccines is handled by a program called the Countermeasures Injury Compensation Program (CICP). The CICP program was designed to handle compensation for people injured by treatment for rare events such as an anthrax attack, but this program is now handling compensation claims for a treatment dispensed to hundreds of millions of Americans. The CICP program is underfunded, understaffed, and overwhelmed with claims, which it is resolving at a glacial pace, and so far Congress has not done anything about this.
So here is the chance for antivaxxers to make a difference and actually achieve something positive. If they stop their attacks on vaccines and pro-vaccine people and focus on lobbying Congress to, for example, expand and fund the CICP program or move the COVID-19 claimants to the VICP program, that would be a major achievement that would help people affected by the side effects of vaccines. At the same time many pro-vaccine individuals and organizations that advocate for the rights of patients could join ranks with them to work together towards a common goal and actually benefit people.
The alternative, of course, is to keep engaging in the usual cycle of claims, counterclaims, insults, counterinsults, and endless vitriol, which may help increase engagement but which does not accomplish anything meaningful to benefit society.
So my question to antivaxxers is, what is it going to be: inflammatory or conciliatory?
Image from pixabay by Gerd Altmann is free for commercial use and was modified from the original.
The end of the world. How many times have we read books or seen movies about it? From alien invasions, killer asteroids, and problems with the Earth's magnetic field, to the good old-fashioned biblical end of times, the end of the world has been a recurring theme throughout the existence of humanity. And from what some people have written about it, it will certainly not be a pretty sight. If the end of the world happened in an instant with no warning, that would be one thing, but many visualizations of the end of the world give humanity several weeks or months of awareness of their impending doom before it actually happens. And this is where things get ugly.
You would think that a sentient, thinking, civilized species such as our own would spend its last days engaged in spiritual, philosophical, or family-oriented activities. For example, people could await Armageddon praying in their churches and seeking repentance for every bad deed they have done, or meeting with their friends and loved ones to remember good times and eat, drink, sing, dance, and tell stories before oblivion. Alas, this is not what many of those writing about the end of times think will happen. Several authors envision scenes of panic and chaos with rampaging mobs bent on looting and pillaging. Destruction, fires, lynchings, and inebriated individuals seeking payback for actual or imagined transgressions by persons or by society against them. The poor at war with the rich, the minorities at war with the majorities, one race at war with another, etc. Every single point of friction that exists in our society explodes unleashing pent up anger and hatred.
Hopefully these writers are wrong and most of humanity will face eternity with grace and composure, but even if their apocalyptic scenario is right, that’s not what really bothers me the most about the end of the world.
Let me explain.
I don’t know if you remember, but many predicted that the world was going to end back in 2012. Why? Many claimed that the Mayan calendar was ending in that year and this signaled the end of the world. As it turns out, this was not true. The Mayan calendar was ending a cycle, but after that another cycle was scheduled to begin. But the doomsday crowd ignored this and swiftly moved to discuss not IF but HOW the world would end. Many claimed that a rogue planet called Nibiru, claimed to have been originally discovered by the Sumerians, or a Planet X, or a large asteroid would collide with Earth, even though no such planets or asteroid were visible anywhere near Earth. Others claimed that an alignment of the planets would destroy the Earth, but not only was no such planetary alignment taking place in 2012, but also these alignments have happened before and they have no effect on our planet. Still others claimed that the Earth would reverse its rotation, leading to worldwide mayhem, and they invoked the fact that the magnetic polarity of the planet has changed throughout its history. This change is a reversal of the north and south magnetic poles, but not only does it not cause any harm to life on Earth, it would certainly not change the direction of the Earth's rotation.
All this nonsense proliferated on many websites, was swiftly spread by social media, and led to the publication of many books, and even one major movie was made based on the premise aptly entitled “2012”. Of course, the date of the apocalypse came and went, and nothing happened. But this was irrelevant to end-of-the-world proponents who started their search for the next big revelation. Prophesying the end of the world is great business.
However, what bothers me is this. What if the doomsdayers had been right? I figure that in the hours or days before the end, they would have huge crowds listening to their every utterance, and they would have the power to command many people to do whatever they wanted. These doomsdayers would be rockstars! But the problem is that such veneration would be totally unwarranted.
Human beings have been predicting the end of the world since time immemorial. Every year there are dozens of individuals all over the world who predict when the world is going to end. If eventually the world does end, these people would be right, but just because of chance. If you throw ten coins, what is the chance of getting ten heads? It's unlikely, but if you throw ten coins enough times, you will eventually get this result. Much in the same way, if people predict the end of the world continuously, they will eventually be right when it does happen. Before accepting they were right, the logical thing to do is to look at their predictive record. How many predictions have they made before? How many of these predictions were right? How detailed was their end-of-the-world prediction? Did they get these details right? I would suggest that to accept that these individuals really got it right because there is something special about them, they would have to clear a pretty high bar.
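The coin analogy can actually be run as numbers. Here is a quick Python sketch (the simulation size and random seed are my own arbitrary choices) showing that ten heads in a single throw of ten coins is roughly a 1-in-1,000 event, yet it reliably turns up if you just keep throwing:

```python
import random

# Probability of getting ten heads in one throw of ten fair coins
p_ten_heads = 0.5 ** 10  # 1/1024, about 0.001
print(p_ten_heads)       # 0.0009765625

random.seed(42)  # arbitrary seed so the simulation is repeatable

def throws_until_ten_heads():
    """Throw ten coins repeatedly; return how many throws it took to see ten heads."""
    attempts = 0
    while True:
        attempts += 1
        if all(random.random() < 0.5 for _ in range(10)):
            return attempts

# Average over many runs: the expected wait is 1 / p_ten_heads, i.e. about 1024 throws
trials = [throws_until_ten_heads() for _ in range(200)]
print(sum(trials) / len(trials))  # hovers around 1024
```

The same logic applies to doomsday prophets: make enough predictions, and one of them eventually lands, with no special insight required.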
Call me naïve, but I would expect that a simple truth such as "we're all going to die" should not get in the way of thinking straight. But good luck telling that to an irrational mob of terrorized people. For all I know, scientists like me would be the first to be hanged. But be that as it may, dear reader, if that fateful occasion does come along within my lifetime, I hope you check my blog and social media channels, because I will be here presenting the evidence and facts, and defending reason against unfounded claims right until the last minute!
In the meantime, however, I will settle for debunking the next irrational claim of an impending apocalypse when it comes along.
The Grumpy Cat meme was adapted from the internet. The Grumpy Cat image belongs to the company “Grumpy Cat Limited”. The image is used here in good faith in a non-commercial way under the doctrine of fair use in the same way it was used by the millions of people who made Grumpy Cat an internet sensation.
I had just graduated with a PhD and had returned to do science in the developing country that I grew up in. I joined a laboratory that performed basic science research but that also had a service for screening of medical conditions called “inborn errors of metabolism”. These conditions occur when a child is born with a genetic defect in one or more enzymes, which are the proteins responsible for carrying out metabolic conversions of one chemical compound into another. When these enzymes malfunction as a result of a genetic defect, the chemical compounds that they act upon cannot be degraded and accumulate in the body at very high concentrations causing toxicity to many organ systems including the brain. Because my degree was in nutrition with a major in biochemistry, I was considered the “expert” in metabolism, and I was supposed to do consulting work for the service. Although I had never worked specifically in the area of inborn errors of metabolism, I was newly graduated and cocky enough to think that my general training in metabolism would be enough to allow me to make a contribution.
In developed countries, newborn children are systematically screened for these genetic defects, because early treatment can ameliorate the pathology. However, in developing countries which have scant resources, many people are reluctant to perform these screenings. Inborn errors of metabolism are rare conditions, and you have to screen thousands of children to find one that has a problem. The director of the laboratory where I worked, and founder of the service, faced an uphill battle to try to convince hospital administrators in the country to join the service and send blood samples from newborn children for us to analyze.
One day, one of the hospitals, which our director was trying to convince to join the service, contacted us with the case of a girl who kept having seizures despite being treated with virtually all anti-seizure medications that they had at their disposal. They were at their wit's end and suspected that the girl could have an inborn error of metabolism. A blood sample from the girl was sent to our service and after analysis yielded the result that some chemical compounds were elevated in her blood. The director of the service contacted me and asked for my "expert" opinion. I armed myself with naiveté, picked up the textbook that I had used in my biochemistry classes, and looked up the compounds which were elevated in the girl's blood. When I checked the enzymes required for the metabolism of these compounds, I noticed that a few of them required vitamin C as a cofactor. In other words, the enzymes required vitamin C to function.
When an enzyme which requires a cofactor has its activity reduced due to a genetic defect, a very common strategy is to administer large doses of the cofactor to boost any residual activity of the enzyme. The diagnosis and treatment now seemed obvious to me. I thus stated that the girl probably had a genetic defect of one of these enzymes, and that therefore we should give her large doses of vitamin C to maximize any leftover enzyme activity. My suggestion was relayed to her doctors at the hospital, who proceeded to pump vitamin C into the girl's body.
A few days later the director of the service contacted me regarding my diagnosis and treatment suggestion. The girl had stopped having seizures and recovered! And not only that, the hospital decided to join the service and send blood samples from newborn children for us to analyze! My boss was impressed. My coworkers were impressed. I, the “expert”, had made the right call! Not only did my suggestion heal a girl, but it was instrumental in convincing hospital administrators to devote resources to working with us! Because of what I did, more children would be screened, and more children with genetic defects would be identified for early treatment, which would help them. And all it had required was me checking my book! It had been sooo easy. Veni, vidi, vici (I came; I saw; I conquered). For about a week I was on cloud 9, full of myself, walking on sunshine: and don’t it feel good!
And then it all crashed and burned.
When the real experts were contacted (people who had actually dealt with impairments of the enzymes that I thought were affected), they told us that the elevations in the concentration of the compounds we detected were not large enough to indicate a genetic impairment in the enzymes. Rather, these experts stated that vitamin C is part of the body's defense mechanisms against toxins (oops, I had not considered that). What probably happened was that the girl was malnourished (and therefore vitamin C deficient) and had been exposed to a toxin that her body was not able to clear and which caused the seizures. When we gave her the vitamin C, her body was able to degrade the toxin, and she got better.
I was incredibly lucky. I had arrived at the right treatment for the wrong reasons. So I had to eat a very large slice of humble pie. Thankfully, when notified about the matter, the hospital decided not to pull out, and they continued working with the service. However, I learned a harsh lesson. Even though I had a PhD, I had only a “textbook knowledge” of the field with no practical experience. I was not a real expert, and I had failed to understand that fact.
If you are familiar with my blog, by now you have probably figured out what I’m getting at. Today there are individuals with no formal general training in science or practical expertise in any specific field, who are reading the scientific literature and interpreting it to support opinions and ideas which they disseminate on social media, blogs, and podcasts to thousands of people. I had been trained in science. I had a PhD. But because I was not an expert in a specific field, I screwed up. Why do these individuals feel they have the qualifications to do what they are doing? And why do others follow their every utterance as if it were gospel while ignoring what the real experts have to say?
The experts are called experts for a reason, and it is folly for people without training to try to replace them. Luck may not always be on your side.
Four leaf clover image from OpenClipArt by Firkin is in the public domain and has been modified.