8/18/2022 Can Something Be False but Not Fake? Taking a Look at the Images from the James Webb Space Telescope, Geiger Counters, Your Brain, and the Amazing Realm of Perception
Many of us were awed by the release of the first pictures taken with the James Webb Space Telescope (JWST). The telescope's crystal-clear images identified previously unseen galaxies, which formed just a few hundred million years after the Big Bang, giving us a closer glimpse of the early universe. It also revealed many new instances of gravitational lensing, a phenomenon predicted by Einstein, where a strong gravitational field bends light. And it identified many stars in the process of formation, enveloped in clouds of dust and gas exposed to titanic forces unleashed by galaxy collisions or the explosion of older stars. However, not everyone was thrilled. A group of skeptics started arguing that the photos were fake, and the fact that the first photo from the JWST was unveiled by President Biden in a ceremony at the White House provided the politicization element. Someone also pointed out that the name of the galaxy cluster featured in the first image, SMACS 0723 (which stands for Southern MAssive Cluster Survey), reads "SCAMS" when spelled backwards. Conspiracy theories arose claiming that the fake images are a cover-up and the telescope is really a spy satellite or a weapon of some sort. It also didn't help that a scientist, as a joke, posted an image of a slice of sausage and claimed that it was an image of a nearby star taken by the JWST. Additional confusion was caused by the information that the colors in the images were not the original colors (they were false colors!), and that the images underwent a lot of computer processing (manipulation, eh? nudge, nudge; wink, wink) before being released to the public. So there you have it. A presidential photo op, hidden word messages, false colors, computer-generated images, fake science, and conspiracy theories. It's déjà vu all over again! Shades of QAnon, the 2020 election lie, the 9/11 conspiracy, and the moon landing hoax. All this nonsense is, of course, fiction. However, as has been stated many times by many people, truth is stranger than fiction. There is a process called "transduction" where a signal of one type gets converted to a signal of another type. A classic example of this is a Geiger counter, where the signals produced by radioactivity (ionizing radiation) are converted (transduced) into sound by the sensors and electronics of the device. Radioactivity obviously does not make a sound. The sound is a false representation of the radioactivity, but this does not make the Geiger counter readings fake. This is because the sounds produced by the Geiger counter are correlated with the intensity and timing of the radioactive emissions. Thus, with the Geiger counter we can detect a phenomenon (radioactivity) that we otherwise cannot perceive with our senses. The same thing happens with the images from the JWST. The images we have seen were taken with the telescope's infrared cameras. But the problem is that, much in the same way that we can't perceive radioactivity, we also can't see light in the infrared range. If we were to look at an unprocessed photo generated from the data from the telescope, we would just see faint patches of dark and gray. The infrared photos have been converted (transduced) to the visible range much in the same way that radioactivity is converted into sound by a Geiger counter.
Colors have been assigned to these images in order for us to see them. So yes, the images we see are in false colors and have been processed by computers, but they are correlated with the realities that the JWST is imaging. Thus, they are not fake.
And in case anyone remains skeptical about this, just consider that YOU do this all the time. Say what? Yes, you, or I should probably clarify, your brain, transduces signals all the time. In other words, your brain constantly changes one type of signal into another. Let me explain. The light we see, the sound we hear, the odors we smell, the flavors we taste, and the things we touch are not sensed directly by our brains. They are sensed by receptors at the level of our eyes, ears, nose, tongue, and skin. These receptors then proceed to convert (transduce) these light, sound, odor, flavor, and touch signals into electrical signals. These electrical signals then travel to the brain through specialized structures in neurons called axons, and millions of these axons make up the cables that we call nerves. So when we are exposed to light, sound, odors, flavors, and things we touch, what the brain perceives is shown in the figure below. Those spikes in the image represent the electrical signals travelling down the axon of a neuron in time (the horizontal axis). This is the reality that the brain perceives. Not light, sound, odors, flavors, or the things we touch, but rather millions of these electrical signals arriving at it every second. Now, do these signals make any sense to you? Of course not! The signals have to be transduced. The brain does something similar to what the Geiger counter does or what scientists working with the JWST do. The brain processes the electrical signals coming from our eyes, ears, nose, tongue, and skin and generates the sensations of sight, sound, smell, taste, and touch. These sensations are as false as the sound made by the Geiger counter or the color representations in the images of the JWST, but they are not fake in the sense that they are correlated with reality. So, for example, we cannot see the wavelength of the light that impacts our eyes, but our brain associates the wavelength of the light with colors in such a way that we perceive light of short wavelength as violet and light of long wavelength as red. This association of false brain-generated sensations with the realities around us also takes place for the senses of sound, smell, taste, and touch.
So to wrap it up, what you see, hear, smell, taste, and touch is false, just like the sounds a Geiger counter makes or the colors of the images of the JWST, but not fake, because these things are all correlated with reality. Welcome to the amazing realm of perception!
The image of the trains of electrical impulses belongs to the author and can only be used with permission. The image of the Cosmic Cliffs, a star-forming region of the Carina Nebula (NGC 3324), is by NASA and the Space Telescope Science Institute (STScI), and is in the public domain.
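To make the false-color idea concrete, here is a minimal sketch, in Python with NumPy and Matplotlib, of how data our eyes cannot see can be rendered in colors they can. The array of "infrared" intensities below is simulated, and the real JWST pipeline (calibration, stacking, assigning a different color to each infrared filter) is far more elaborate, but the essential move is the same.

```python
# A minimal sketch of false-color rendering. The "infrared" data are simulated,
# not real telescope measurements; the point is only to illustrate the principle.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
infrared = rng.gamma(shape=2.0, scale=1.0, size=(256, 256))  # stand-in for raw infrared intensities

# Normalize the measurements to the 0-1 range, then let a colormap assign visible colors.
normalized = (infrared - infrared.min()) / (infrared.max() - infrared.min())
false_color = plt.cm.inferno(normalized)  # an RGBA image: the colors are "false", the data are not

plt.imshow(false_color)
plt.title("False-color rendering of simulated infrared data")
plt.axis("off")
plt.show()
```

The choice of colormap is arbitrary, which is what makes the colors "false"; but because the mapping from measured intensity to color is fixed and monotonic, the resulting picture remains faithfully correlated with the underlying data, which is what keeps it from being fake.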
Scientists know that one important way to gain insight into how the human mind works is by observing what happens when it experiences a dysfunction. In this regard, scientists have documented some rare but remarkable neurological conditions and diseases. The normal function of the brain may be altered as a result of a surgical procedure.
Split Brain
In the second half of the twentieth century, doctors carried out a radical surgery that severed the connections between the two hemispheres of the brain to treat incurable seizures. While this procedure was effective at stopping the seizures, it left the two brain hemispheres unable to share information and coordinate with each other. These split-brain patients have been the subjects of much research that has allowed scientists to gain insight into how the brain normally works. In humans, the right hemisphere receives nerve inputs from and controls the left side of the body, whereas the left hemisphere does likewise with the right half. With regard to vision, the left hemisphere receives information about the right side of the visual field (it cannot see the left side), with the opposite happening with the right hemisphere. Using this information, scientists were able to relay questions to the subjects and make them perform tasks in such a way that they could discriminate between the responses and performance of the right and the left hemispheres, and what they found was amazing. They discovered that the two hemispheres often displayed separate personalities and held different beliefs, with one split-brain patient famously answering the question "Do you believe in God?" with a "Yes" from one hemisphere and a "No" from the other. They also found that communication between both hemispheres was required for exercising normal moral judgement in fast answers to specific questions. The normal function of the brain may also be altered as a result of a disease or a traumatic event.
Cotard's Syndrome
This one is also called "The Walking Corpse Syndrome" because the individuals afflicted by it develop what are called nihilistic delusions, in which they perceive themselves, or some of their body parts, to be dead, dying, or to not exist at all. People with this affliction will reduce food intake or stop eating because they have no use for this activity, as they believe themselves to be dead after all. Some patients will spend an inordinate amount of time in cemeteries. Cotard's Syndrome is often associated with other conditions ranging from severe depression to neurological conditions and diseases.
Alien Hand Syndrome
In this curious disease, one of the hands of a person will start moving on its own without the person being able to control it. The hand will perform purposeful tasks, sometimes repeatedly, and may even antagonize actions the other hand has just performed, such as buttoning a button. Alien Hand Syndrome is associated with conditions that cause trauma to the brain and with neurodegenerative diseases.
Dissociative Identity Disorder
This disorder, which in the past was called Multiple Personality Disorder, is a situation where the identity of a person is split among at least two separate identities that take control over the individual. Each identity may have its own name, sex, race, and psychological and physical characteristics. Dissociative Identity Disorder is associated with psychological trauma, especially during childhood.
Apotemnophilia
People affected by apotemnophilia have an overwhelming desire to amputate their limbs, with some expressing a wish to be paralyzed. Some of the afflicted individuals perform the amputations themselves or ask their friends, relatives, and health practitioners to help them with the process. Although scientists still don't know what causes the condition, a possible explanation is a faulty representation of the limbs in the areas of the brain that deal with self-recognition of body structures.
Boanthropy, Clinical Lycanthropy, and Clinical Zoanthropy
Boanthropy is a condition where the affected persons believe themselves to be a cow or ox. They will actually go over to pastures, get down on all fours, and eat grass. Clinical Lycanthropy involves people who think they are turning into werewolves. These people see their bodies covered with hair and their fingernails and teeth elongated. These two conditions are cases of the broader disease called Zoanthropy, where people believe themselves to have turned into various animals. Some of these conditions are associated with diagnosed mental diseases such as schizophrenia, bipolar disorder, or severe depression.
So, what does the information I presented above reveal about ourselves and our brains? Most people accept that the brain plays a key role in movement. Individuals who break their spinal columns may be unable to move their limbs. I have posted a video of the time I had Bell's Palsy, where a malfunction in a nerve paralyzed half of my face. Similarly, most people accept the role that the brain plays in processes such as learning, perception, and memory, and there are many examples of accidents or diseases that have led to impairment of these processes. Nevertheless, despite the acceptance of the role of the brain in determining the processes described above, many people believe the human mind is something special. These people believe that there is something else, whether you call it soul, spirit, essence, or any other such term, that is responsible for the most fundamental aspects of the human mind, which they believe cannot merely be produced by a bunch of nerve cells. However, the pathologies and conditions I have listed in this post and others indicate to us the importance of the brain not just in determining obvious things like movement, learning, perception, and memory, but also in determining things that lie at the very core of our humanity, such as who we are, how we see ourselves, and what we believe in. If a bunch of nerve cells can create movement, learning, perception, and memory, why can't they also determine our very nature? Much in the same way that the intestines can create digestion, why can't the brain create the mind? Of course, science cannot say anything about the existence of a soul, essence, or spirit, but what we can say at this moment is that all the evidence we have so far indicates that everything that you are seems to be nothing more than the result of millions of nerve cells in your head communicating with each other in different patterns and at different times. Boggles the mind, eh? The image is from Pixabay and is free for commercial use.
A long time ago, back when I was a young teenager, my mother bought the book The Exorcist by William Peter Blatty. This book is about the possession of a girl by a demon and her subsequent exorcism. My mother placed it in our book room. However, she thought that it contained things that were not appropriate for my age.
So she called me over and explained that she had bought this book, but she did not want me to read it — Mom, really? Needless to say, as soon as I had a chance, I made a beeline for the book room and read the book: bad idea. The descriptions and the language in the book terrified me. I could not get the images and words out of my mind, and for about a week I did not sleep well. As soon as I turned off the light, every sound and shadow in the room acquired a sinister nature, and I was consumed by fear. At times I thought I saw things moving about my room. At times I thought I heard voices. It was really creepy, and the worst thing is that I could not tell my mother because she would figure out I had read the book! However, by the time I got to see the movie based on the book, I had gotten my act together enough to see the film without losing my composure. Fast forward 20 years or so. I studied biology in college and later obtained a Ph.D. in Nutrition with a major in Biochemistry. I learned the ways of science and how matter and energy in this world operate based on specific physical, chemical, and biological principles. I published a weekly newspaper column entitled "The Scientific Truth" that dealt critically with pseudoscience and the paranormal. I still remembered my Exorcist-induced week of fright, but I interpreted what had happened to me in a whole new light. What happened to me was due to the fact that human perception is not a passive event. We do not merely take input from the environment to directly construct our perception of the world around us; rather, we are constantly interpreting this input based on a set of parameters that the brain applies to make sense of reality, and these parameters can be changed by experience. That day so many years ago, I was exposed to very strong stimuli that reshaped the perception of reality by my brain. The noises and shadows in my bedroom at nighttime had not changed from the way they had always been, but my brain reinterpreted them in light of the new information obtained from reading the book and made me fear them. Fear is often a useful emotion that can keep us from harm, but when fear is too intense or not based on realistic premises, it can have paralyzing and unhealthy effects. I reasoned that my fear that night was a result of ignorance. Despite the claim that Blatty's book was based on a real exorcism, not a single case of demonic possession has ever been conclusively demonstrated to be anything but mental illness. In the exorcisms that have taken place, objects don't fly, lights don't flicker, bodies don't levitate, etc. The occurrences taking place in these events are within the realm of what's possible when people experience mind-altering diseases. My fear that night was unwarranted. I felt a bit silly for having experienced it at all, and rolled my eyes at the gullibility of my former younger self. So it happened that I found myself carrying out research that involved periodic trips to a faraway town by the sea, where I worked at a small research station. On one of these research trips, I was the only scientist working at the station. After I had been working for most of the day, there was a failure in the electric grid and the lights went out towards the late afternoon. Since my workroom didn't have any windows, and I just had a rudimentary flashlight, I decided to call it a day. I had a quick dinner and headed into town right before dusk.
There I came upon some of the local fishermen who had gathered around an improvised log fire. A couple of them worked with the research station, and I sat with them. The fishermen shared some of the local stories of the town’s past, and then as it got darker, they started telling ghost stories! For the next two hours next to the flickering light of the fire and under a sky faintly lit by a crescent moon, I heard these adults talk about things they had seen or heard during their lives. The lore included screams and moans of unknown origin coming from the mountains adjacent to the town, strange vaporous figures floating around at night, things hovering over the sea waters or lurking just beneath them, open graves with missing corpses at the local cemetery, the doom that had befallen some people cursed by an alleged local witch, etc. I alternated between being amazed and amused. I didn’t know to what extent these people were exaggerating their stories, but most of them seemed very convinced that they were true. I knew that groups of skeptics had been systematically investigating one claim after another of ghosts, witches, paranormal occurrences and whatnot for decades finding nothing that could not be explained by science. However, I did not want to be disrespectful. These fishermen were bonding and apparently having a good time, so I kept my mouth shut. After the group dissolved, I went back to the research station. It was quite dark and the silver glow of the moon gave the surrounding landscape a surreal pale phosphorescent tinge. Inside the research station it was pitch black and the faint light of my flashlight barely helped me make my way along the corridor that led to my bedroom. The shadows created by my flashlight seemed to move in strange ways, and I became aware of noises that I didn’t remember hearing before. Was there something lurking in the darkness beyond the glow of my flashlight? Was it moving towards me? The same sensations I had experienced 20 years ago came back in full force. This time, I was older. I knew better. I was not ignorant. I was not gullible, and yet, I was caught again in the grip of fear. Inside my brain an ancient program had been activated. A program derived from our animal ancestors, created by the forces of evolution, and amplified by superstition and ignorance. A program that for thousands of years made us fear what lay beyond the cave entrance or the perimeter of the campfire, even if there was nothing there. And I could not shut it down!
Thankfully, an emotion stronger than fear came to my rescue: anger. I became extremely angry because, although I understood exactly what was happening to me, I was not able to control it. As I made my way along the dark corridor to my bedroom, I clenched my fist, waved it at the darkness, and screamed, "I'm a scientist!" This sounds stupid today, but that day it worked. I was able to counteract my fear with sheer outrage at how silly I felt at being manipulated by my own brain. After a couple of hours of more fist clenching, I was able to force myself to sleep. The next day the electricity returned, and that night I fell asleep uneventfully. What I understood after this experience is that mere knowledge and/or conviction that something does not exist and can't harm us does not immunize us against fearing it. We have all grown up within a culture that, through oral stories, movies, books, and other means, has conditioned our brains to accept at a very primal level that things like demonic possession, ghosts, and other fictitious entities or occurrences exist, can harm us, and should be feared. This conditioning can at times manipulate us like puppets and make us feel things that we are not justified in feeling from a rational point of view. But at least now I understand this: I am a scientist. The cover of the book The Exorcist and the poster of the movie are copyrighted and used here under the legal doctrine of Fair Use. The ghost picture by Alexas_Fotos is from Pixabay and is licensed for public use.
Albert Einstein was one of the towering figures of the twentieth century. Images of his face surrounded by a fluffy mat of white hair have become synonymous with genius in both popular and scientific cultural circles. Einstein's contributions in many areas of physics range from the explanation of the photoelectric effect, the equivalence of energy and mass as defined in his famous E = mc² equation, his reinterpretation of Newtonian mechanics (special theory of relativity), and his description of gravity as a property of space and time (general theory of relativity), to his work towards elucidating the nature of light, the existence of the atom, and the establishment of quantum mechanics. He predicted the existence of Bose-Einstein condensates, gravitational lensing, and gravitational waves, all of which have been confirmed. Einstein's ideas resulted in practical applications in areas such as nuclear power, space travel, fiber optics, GPS, and computers, but his thinking has also influenced disciplines as varied as philosophy, art, and literature. It is not an overstatement to assert that this individual changed the world. As we stand in awe admiring Einstein's accomplishments, we wonder where all that came from. Ideas, inspiration, and creativity are attributes that we associate with one organ in the body: the brain. Therefore, if Einstein was able to think all of these things up, but most other human beings are incapable of such feats, it follows that there must be something special about Einstein's brain: something that differentiates it from the brain of common individuals, something that makes him vastly smarter. The reasoning then goes: if we find that something, that difference between Einstein's brain and the brain of common folk, we will have found the nature of genius, the seat of intelligence. Einstein may have been curious about the above line of reasoning, but there is one thing we know for certain: he was aware of his fame, and he did not want to be idolized after his death.
He left instructions that his body be cremated and his ashes scattered. However, when Einstein died in 1955, the temptation was too much for the Princeton Hospital pathologist who performed the autopsy: Thomas Harvey. Going against the wishes of Einstein's family, Harvey removed Einstein's brain. When the family found out, there was some acrimony, but Harvey managed to obtain permission from Einstein's oldest son to keep the brain as long as it was used for scientific studies. When the hospital administrator found out what he had done, Harvey was fired. He went on to practice medicine but eventually was not able to renew his medical license. His marriage ended in divorce, and he ended up working on the assembly line of a plastics factory to pay his bills. Harvey took pictures and measurements of Einstein's brain, and he had parts of the brain sectioned into many pieces that were treated with dyes and sealed in slides. Over the span of decades, he tried to get scientists interested in looking at the brain, but found few takers. He sent slides to several eminent scientists, but they found nothing out of the ordinary. Eventually, a few scientists became interested enough to perform detailed studies of the morphology of the brain using the slides and the photos of the intact brain that Harvey had taken, and some differences were found when comparing Einstein's brain to those of regular people. However, whether these differences are related to Einstein's intellectual prowess, or whether they are part of the normal brain-to-brain variation that can be encountered among individuals, is still an open question. There is the possibility that genius is not correlated with an obvious gross morphological characteristic of brains, but rather with the more subtle ways in which a brain's billions of neurons are interconnected. There is also the possibility that genius resides in the temporal and spatial pattern in which the neurons interact with each other. If this is the case, it is likely that nothing can be learned about genius from fixed dead brains, as only detailed imaging of living brains will reveal their secrets. Today, the physiological roots of intelligence and genius remain as elusive as ever. And perhaps that is a good thing. Since human beings began to study the brain, we have woven fanciful explanations as to the nature of intelligence. At first, brain size was the metric that was thought to be related to intelligence. Thus, when evidence was produced that some races had different brain sizes than others, this became the linchpin of ideas regarding the superiority and inferiority of these races. When the idea that brain size among normal human beings was correlated with intelligence fell into disrepute, we developed the idea that we could test for intelligence and reduce it to a number. Then, soon enough, evidence was generated that some races, or social classes, or groups of people, did better on intelligence tests than others, again leading to notions of superiority and inferiority. The eugenics movement arose in the early twentieth century advocating the betterment of the intellectual level of society. In the United States, terms like "feeble-minded" and "moron" were developed to denote people who scored poorly on intelligence tests. The eugenics craze in the United States led to restrictions on immigration and forced sterilizations of tens of thousands of people, which invariably affected to a greater extent those who were poor, uneducated, and from minority groups.
All these terrible ideas are discredited nowadays, at least academically, but watered-down versions of these ideas still linger among the general public and some scholars. Asking questions about what made Einstein extremely intelligent is legitimate. But my fear is that if we do discover some characteristic in the brain responsible for genius, let's call it "X", we will again travel down this well-worn step-by-step path:
1) "X" made Einstein a genius.
2) Therefore, the more of "X" you have, the more intelligent you are.
3) We can classify otherwise normal individuals into whether they are more or less intelligent based on the amount of "X" they have.
4) Groups of people displaying lower levels of "X" are dumber.
5) Groups of people displaying lower levels of "X" are inferior.
6) Allowing these groups of people with less "X" to breed or to come to our country will compromise our society.
7) We must do something about it.
A while ago I did something that Einstein would have disapproved of. I went to the National Museum of Health and Medicine in Silver Spring, Maryland. As part of what they call "Brain Awareness Week" they had an exhibition on Einstein's Brain. The museum has a set of slides made from Einstein's brain that Harvey's estate donated after Harvey died, and some of them were on exhibit. So I took a picture (see below). This section of Einstein's brain was stained for myelin, the fatty substance that ensheaths the axons of neurons. The black area is the fiber tracts (the white matter in the brain), and the brown area is where the neuronal cell bodies reside (grey matter). There is (or was) something in sections like these that made possible the thinking of ideas that no one had ever thought before, revolutionizing our view of reality and changing our world forever. What is or was that something? And if we ever discover it, what will we do with that knowledge? The photograph of Albert Einstein by Orren Jack Turner obtained from the Library of Congress is in the public domain. The image of Einstein's brain was cropped and modified from the article: Falk, Dean, Frederick E. Lepore, and Adrianne Noe. "The cerebral cortex of Albert Einstein: a description and preliminary analysis of unpublished photographs." Brain 136.4 (2012): 1304-1327, and is used here under an Attribution-NonCommercial 3.0 Unported (CC BY-NC 3.0) license. The image of the section of Einstein's brain is by the author and can only be used with permission.
In this blog I have pointed out that there is a progression in emerging fields of scientific inquiry where competing theories are evaluated, those that do not fit the evidence fall out of favor, and scientists coalesce around a unifying theory that better explains the phenomena they are studying. However, even as a new theory that better fits the available data is accepted in the field, there are individuals who contest the newfound wisdom. Instead of accepting the prevailing thinking, these individuals buck the trend, think outside the box, and propose new ways of interpreting the data. I have referred generically to the individuals belonging to this group of scientists who "swim against the current" as "The Unreasonable Men", after George Bernard Shaw's famous quote, and I have stated that science must be defended from them. The reason is that science is a very conservative enterprise that gives preeminence to what is established. Science can't move forward efficiently if time and resources are constantly diluted pursuing a multiplicity of seemingly farfetched ideas.
However, this is not to say that the unreasonable man should not be heard. There are exceptional individuals out there who have revolutionary ideas that can greatly benefit science, but there is a time for them to be heard. One such time is when the current theory fails to live up to expectations. I am writing this post because such a time may have come to the field of science that studies Alzheimer's disease (AD). Alzheimer's disease is a devastating dementia that currently afflicts 6 million Americans. The disease mostly afflicts older people, but as life expectancy keeps increasing, the number of people afflicted with AD is projected to rise to 14 million by 2050. The disease is characterized by the accumulation of certain structures in the brain. Chief among these structures are the amyloid plaques, which are made up of a protein called "beta-amyloid". The current theory of AD pathology holds that it is primarily the accumulation of these plaques, or more specifically their precursors, which is responsible for the pathology. Therefore, it follows that a decrease in the number of plaques should be able to alleviate or slow down the disease. This has been the paradigm that pharmaceutical companies have pursued for the past few decades in their quest to treat AD. Unfortunately, this approach hasn't worked. For the past 15 years or so, every single therapy aimed at reducing the amount of beta-amyloid in the brain has led to largely negative results. In fact, some patients whose brains had been cleared of the amyloid deposits nevertheless went on to die from the disease. Several arguments have been put forward to explain these failures. One of them is the heterogeneity in the patient population. Individuals who have AD often have other ailments that may mask positive effects of a drug. According to this argument, performing a trial with patients who have been carefully selected stands a greater chance of yielding positive results. Another argument is the notion that many past drug failures have occurred because the patient population on which they were tested was made up of individuals with advanced disease. According to this argument, drugs will work better with early-stage AD patients who have not yet accumulated a lot of damage to their brains. Even though many researchers still have hopes that modifications to clinical trials like those suggested above will have the desired effect as predicted by the amyloid theory, an increasing number of investigators are considering the possibility that this theory is more incomplete than they had anticipated and are willing to listen to new ideas and open their minds to the unreasonable man. One example of these men is Robert Moir. For several years he has been promoting a very interesting but unorthodox theory of AD and getting a lot of flak for it. He dubs his hypothesis "The antimicrobial protection hypothesis of Alzheimer's disease". According to Dr. Moir, the infection of the brain by a pathogen or other pathological events triggers a dysregulated, prolonged, and sustained inflammatory response that is the main damage-causing mechanism in AD. In this hypothesis, the production and accumulation of the amyloid protein by the brain is actually a defense mechanism! Dr. Moir agrees that sustained activation of the defense response will lead to excessive accumulation of the amyloid protein and that this eventually will also have detrimental effects.
However, even though reduction in amyloid protein levels may be beneficial, accumulation of the amyloid protein is but one of several pathological mechanisms. Moir stresses that the main pathological mechanism that has to be addressed by AD therapies is a sustained immune response, which over time causes brain inflammation and damage. He considers that accumulation of the amyloid protein is a downstream event, and it is known that the brain of people with AD exhibits signs of damage years before any amyloid accumulation can be detected. But much in the same way that Dr. Moir has been promoting his unconventional theory, other researchers have been promoting theories of their own. Oxidative stress, bioenergetic defects, cerebrovascular dysfunction, insulin resistance, non-pathogen-mediated inflammation, toxic substances, and even poor nutrition have been proposed as causative factors of AD. This is the big challenge that scientists face when opening their minds to the arguments of the unreasonable man: there is normally not one but many of them! So who is right? Which is the correct theory? And why should just one theory be right? Maybe there is a combination of factors that in different dosages produce not one disease but a mosaic of different flavors of the disease. And maybe the amyloid theory is not totally wrong, but merely incomplete, and it needs to be expanded and refocused. Or maybe the beta-amyloid theory is indeed right and all that is required for success is to tweak the trial design and the patient population. Maybe, maybe, maybe… When a scientific field is beginning, or when it looks like a major theory in a given field is in need of reevaluation, there always is confusion and uncertainty. Scientists in the end will pick the explanation(s) that better fits the data and take it from there. They did that when most scientists accepted the amyloid theory, and they will do it again if this theory is found wanting. The new theory that replaces the amyloid theory will not only have to explain what said theory explained, but it will also have to explain why the old theory failed and what new approach must be followed to successfully treat the disease. In the meantime, Dr. Moir's theory, along with a few others, is the center of focus of new research evaluating alternative theories to explain what causes AD. The amyloid theory or aspects of it may still be salvageable, but in the field of AD it certainly looks like the time for the unreasonable man has come. Note: after I posted this, I became aware of an article published in the journal Science Advances that proposes a link between Alzheimer's Disease and gingivitis (an inflammation of the gums). The unreasonable men are restless out there! The image is a screen capture from a presentation by Robert Moir on the Cure Alzheimer's Fund YouTube channel, and is used here under the legal doctrine of Fair Use. The brain image from the NIH MedlinePlus publication is in the public domain.
A lot of books, movies, and even video games employ the motif of the living dead. All of this is, of course, fiction, but have you ever wondered whether there is something to it? In the Haitian Voodoo religion, zombies are believed to be corpses that have been reanimated through witchcraft by a sorcerer called a "Bokor." These zombies are nothing like the ones shown in movies like "Night of the Living Dead", but still their existence has long been a mainstay of myth and legend. In 1982 the peculiar case of Clairvius Narcisse was brought to the attention of Drs.
Nathan Kline and Lamarque Douyon. Narcisse had died and been certified as dead by an American doctor working in Haiti. The thing is that 18 years after his death he showed up in his village very much alive. He claimed that he had been paralyzed, declared dead, and buried alive. Then a Bokor disinterred him and made him work on his plantation. Drs. Kline and Douyon studied his case carefully and concluded that the man was indeed who he claimed to be. At the request of Dr. Kline, Wade Davis, a Harvard graduate student in ethnobotany, travelled to Haiti to try to study what components go into the potions used by Bokors to make zombies. As a result of his studies he claimed that zombies were a reality and even put forward a scientific explanation of their existence! Davis found that one of the common ingredients in the zombie poison is the puffer fish. The internal organs of this fish contain a poison called tetrodotoxin (TTX). Although TTX can kill, in small amounts it can paralyze a person while they remain conscious. In Japan, where a similar fish (the fugu) is a gourmet delicacy, there are stories of people who ate the fish prepared improperly, became paralyzed, and were almost buried alive after being declared dead. So the zombification would work like this. The Bokor rubs his potion on a person's skin or, preferably, into a superficial wound. If the right amount of TTX gets into the body, the person is paralyzed, declared dead, and buried. The Bokor must then dig up the person before he/she dies from lack of oxygen. The disinterred person is then beaten and fed mind-altering drugs (notably the zombie's cucumber: Datura) to keep them docile. The whole process is reinforced if the person believes that he/she is actually being turned into a zombie. Davis published his findings and theories in the Journal of Ethnopharmacology in 1983 and in the book "The Serpent and the Rainbow" in 1985. Unfortunately, scientists analyzing the zombification powders Davis brought back from Haiti did not find any TTX in them and could not elicit any symptoms of poisoning when they rubbed said powders into the skin of rats. This was followed by a series of attacks and claims and counterclaims between Davis and his critics that left his particular theory hopelessly mired in disrepute, and no further attempts have been made to readdress it. But Davis at least raised the possibility that what is called a zombie in these cultures is not, of course, a reanimated corpse, but rather a product of the synergism between mind and chemistry. Other scientists have taken up this line of inquiry and studied zombie-like behavior induced by drugs or other agents, documenting several cases. This alternative is no doubt less satisfying for all the fans of the nightmarish beings that hunger for the flesh of the living in popular culture. But if you want horror and ghoulish things, look no further than the world of nature. What would you think about zombie cockroaches? The Jewel Wasp hunts cockroaches and makes them docile (zombifies them) by injecting venom into their brains. It then leads the cockroach into a place where it will lay an egg on it. The wasp then seals the cockroach in. After a while a larva hatches from the egg and proceeds to eat the drugged insect alive! You can watch the wasp's grisly work in the video below. It is not anything that George Romero has dreamed up (yet), but it's real! The photograph of a zombie by Daniel Hollister is used here under an Attribution 2.0 Generic (CC BY 2.0) license.
In his excellent 1994 book, The Astonishing Hypothesis, the late Nobel Prize-winning scientist Francis Crick (co-discoverer of the structure of DNA with James Watson) put forward a hypothesis that boggles the mind. He wrote: "You, your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules." He claimed that this hypothesis is astonishing because it is alien to the ideas of most people. This is presumably because, when it comes to our mind, we believe that there is something special about it. Clearly the mind is more than the product of the activity of billions of cells, no? Exalted emotions such as love and compassion and empathy, or belief in the divinity or free will, cannot just be a byproduct of chemical reactions and electrical impulses, right? But why would that be the case? Consider an organ like the intestine. It's made up of billions of cells that cooperate to produce digestion. Most people will agree with the notion that the intestine produces digestion. So, if we can accept that the cells that make up the intestine produce digestion, why can't we accept that the cells that make up the brain produce the mind? Let's just touch on something simple, but that nevertheless goes to the very core of our notions of free will and consciousness. Consider an action such as performing the spontaneous motor task of moving a finger to push a button. In our minds we would expect that this and other such actions entail the following sequence of events in the order specified below: 1) We become aware (conscious) that we want to perform the action. 2) We perform the action. But what goes on in our brains even before we become aware that we want to perform the action? Many people would guess: nothing. Whatever brain activity occurs associated with the action must logically occur after we become aware that we are going to perform the action. After all, how could there possibly be nerve activity associated with an action that we are not even yet aware that we want to perform? Warning! Warning! - Insert blaring alarms and rotating red lights here - Fasten your existential seat belts because this ride is about to get bumpy! In 1983 a team of researchers led by Dr. Benjamin Libet carried out a now-famous experiment to evaluate this question. The researchers recorded the electrical activity in the brains of test subjects who were asked to perform a motor task in a spontaneous fashion, and they also asked the subjects to record the time at which they became aware that they wanted to perform the task. The surprising result of the experiment was that, while the awareness of wanting to perform the task preceded the actual task as expected, the electrical cerebral activity associated with the motor task performed by the subjects preceded by several hundred milliseconds the reported awareness of wanting to perform the task! This amazing experimental result has been replicated by other researchers employing different methodologies. One study employing magnetic resonance to image brain activity established not only that the brain activity associated with the task is detected in some brain centers up to 7 seconds before the subject becomes aware of wanting to perform the action, but also that decisions based on choosing between two tasks could be predicted from the brain imaging information with an accuracy significantly above chance (60%).
Delving even deeper into the brain, another group of researchers recorded electrical activity from hundreds of single neurons in the brains of several subjects performing tasks and found that these neurons changed their firing rate and were recruited to participate in generating actions more than one second before the subjects reported deciding to perform the action. The researchers could predict with 80% accuracy the impending decision to perform a task, and they concluded that volition emerges only after the firing rate of the assembly of neurons crosses a threshold. The interpretations of these types of experimental results have triggered a debate that is still ongoing. The most unsettling interpretation is that there is no free will (i.e. your brain decides what you are going to do before you even become aware you want to do it). However, there are many critics that claim that there are technical flaws in the experiments, that the data is being overinterpreted, that the electrical activity detected is merely preparative with no significant information about the task, or that it is a stretch to extrapolate from a simple motor task to other decisions we make that are orders of magnitude more complex. In any case the question of whether free will exists is in my opinion irrelevant because our society cannot function under the premise that it doesn’t. What interests me from the point of view of the astounding hypothesis, is the possibility that the awareness of wanting to perform an action before we perform it is merely an illusion created by the brain. This notion is not farfetched. As I explained in an earlier post, the brain creates internal illusions for us that we employ to interact with reality. Colors are not “real”, what is real is the wavelength of the light that hits our eyes. What we perceive as “color” is merely an internal representation of an outside reality (wavelength). The same goes for the rest of our senses. As long as there is a correspondence between reality and what is perceived, what is perceived does not have to be a true (veridical) representation of said reality. Consider your computer screen. It allows you to create files, edit them, move them around, save them or delete them. However, the true physical (veridical) representation of what goes on in the computer hard drive when you work with files is nowhere near what you see on your screen. This is so much so, that some IT professionals refer to the computer screen as the “user illusion”. So, much in the same way that the brain creates useful illusions like colors that allow us to interact with the reality that light has wavelengths, or the computer geeks create user illusions (file icons) that allow us to interact with the hard drive, could it be that the awareness of wanting to perform actions, in other words, becoming conscious of wanting to do something, is just merely an illusion that the brain creates for the mind to operate efficiently? We are still in the infancy of attempts to answer these questions, but what is undeniable is that the evidence indicates that there is substantial brain activity taking place before we perform actions that we are not even yet aware we wish to perform, and that this brain activity contains a certain degree of information regarding the nature of these actions. 
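To make concrete what it means to "predict a decision above chance" from brain activity recorded before awareness, here is a toy sketch in Python (using NumPy and scikit-learn). Everything in it is simulated and merely illustrative: the "neural" data are made-up firing rates carrying a weak trace of an upcoming two-alternative choice, and none of the numbers correspond to the actual Libet, fMRI, or single-neuron studies described above. The decoding logic, however, is the same in spirit: train a classifier on pre-decision activity and compare its accuracy on held-out trials with the 50% chance level.

```python
# A toy illustration (simulated data, not the actual experiments) of decoding an
# upcoming binary choice from "pre-decision" neural activity and comparing the
# result with chance performance (0.50).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_trials, n_neurons = 400, 50

choices = rng.integers(0, 2, size=n_trials)                    # 0 = one option, 1 = the other
baseline = rng.normal(10.0, 2.0, size=(n_trials, n_neurons))   # spontaneous firing rates (Hz)
preference = rng.normal(0.0, 0.5, size=n_neurons)              # each neuron's weak choice preference
rates = baseline + np.outer(choices - 0.5, preference)         # simulated pre-decision activity

accuracy = cross_val_score(LogisticRegression(max_iter=1000), rates, choices, cv=5).mean()
print(f"Decoding accuracy: {accuracy:.2f} (chance level: 0.50)")
```

An accuracy reliably above 0.50 on trials the classifier has never seen is what such studies mean when they report that a decision could be predicted "above chance" from activity recorded before the subject reported being aware of the decision.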
As our brain imaging technology and our capacity to analyze the data get better, will we be able to predict with certainty what decision a person will make just by examining their brain activity before they become aware they want to make the decision? It's too early to tell, but from my vantage point it seems that so far Crick's astonishing hypothesis is looking more and more plausible. The image of the cover of the book The Astonishing Hypothesis is copyrighted and used here under the legal doctrine of Fair Use. The Free Will image by Nick Youngson is used here under an Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0) license.
In case you don't follow rock climbing as a sport, you should know that last year the "moon landing" of rock climbing happened: Alex Honnold free-soloed El Capitan in Yosemite! In case you don't know what this means, let me break it down for you.
El Capitan
This is a massive granite monolith in Yosemite National Park in California that rises vertically from the floor of Yosemite Valley a distance of almost 3000 feet. It is considered one of the ultimate big-wall climbs in the world, and is a central part of a rock climbing culture in the US replete with wild anecdotes, traditions, and larger-than-life characters.
Free soloing
There are several styles of rock climbing. Most climbers making their way up very difficult walls insert gear in the cracks in the rock and attach ropes to the gear in order to hoist themselves up. Some of the best climbers in the world often use a style called free climbing, where they just use their hands and feet to climb the wall and employ gear and ropes only for protection in case of a fall. Free soloing is the most dangerous variety of climbing. In free soloing, climbers free-climb the rock but without using any gear or ropes for protection. If they make a mistake, they fall and die.
Alex Honnold
Alex Honnold is a US rock climber who specializes in free solo climbing, and who has dazzled the world with some of the most daring big-wall climbs ever attempted by a human being, all without a rope to hold him in case he falls.
So, put the three above together. Yes, Alex Honnold climbed the massive hulk of El Capitan using no ropes for protection in case of a fall. Any distraction, any slip, or any handhold or foothold that broke apart would have sent him careening towards the valley floor below. If you want to get an idea of what it means to climb El Capitan you can check out the following Google Maps link. But why am I writing about rock climbing if this is a science blog? The reason is the following.
From what I have described above, you would think that Alex Honnold is the best climber in the world, right? Actually, this is not true, and even he admits it. Of course, there is no doubt that Alex is an elite climber, among the best in the world, but there are many other climbers who are technically more gifted and stronger than Alex when it comes to climbing. Any of these climbers can climb what Alex climbs and then some (but here is the key distinction) as long as they are protected by a rope. Place any of these climbers thousands of feet above the ground on a vertical rock face with nothing to prevent them from falling, and they will be unable to climb stretches of rock that they would normally have strolled over. They would likely panic and fall to their deaths. This is where Alex excels above all rock climbers in the world: he is able to control fear and focus on the climb. The late Dean Potter, who was also a free soloist, best described this challenge when he was trying to walk a tightrope in Yosemite. Potter said that if he suspended the rope a few feet above the ground, he had no problem crossing it. However, when he suspended exactly the same stretch of rope thousands of feet above the valley floor, he would often fall when trying to cross it. He correctly deduced that success in walking the rope depended on the mind, and after a few tries with a safety harness he was able to successfully accomplish this feat unprotected several times. When rock climbers are exposed to situations where they deal with increased risk, their stress hormone levels and anxiety are increased. It is in these situations that an area of the brain called the amygdala, which is involved in the production of the sensation of fear, is activated. Moderate activation of the amygdala is often a good thing. There are people with a condition called Urbach-Wiethe disease where the amygdala becomes calcified and ceases to work. As a result of this, these people can't experience fear, and this is a big problem in their lives. Fear is a healthy response to many things, and it keeps us away from danger. However, overactivation of the amygdala can lead to panic and a loss of control and of the capacity for rational thought. Most rock climbers will tell you that the best place to panic is not while clinging to a near-vertical section of a rock wall on tiny handholds and footholds thousands of feet above the ground. Some neuroscientists became interested in Alex and convinced him to allow them to perform an experiment. The scientists viewed his brain with magnetic resonance imaging while he was being shown a series of ghastly images designed to activate a normal person's amygdala. As a control, the scientists also imaged the brain of another rock climber. The results indicated that although Alex does have an amygdala in his brain, it was not activated by the images. However, the amygdala of the control climber lit up as expected. The scientists speculated that either Alex's amygdala doesn't activate normally, or other brain regions are able to inhibit its activation. Alex rejects the notion that he doesn't experience fear while rock climbing without a rope for protection. Nevertheless, he claims that he can just put it aside without allowing it to get in the way of focusing on the climb. This ability is key for achieving what he has done. His free solo of El Capitan required him to maintain his concentration for almost 4 hours of climbing.
This allowed him to accomplish the feat without a single mistake, which was vital, as only one mistake could have gotten him killed. The above not only highlights how the differences in wiring in our brains can make us experience the reality around us in very different ways, but also the role that emotions such as fear can have in our lives. Fear is, of course, not restricted to rock climbing. Will I develop a serious health problem? Does so-and-so love me? Are my children safe? Will I get mugged? Will I keep my job? Should I get involved in this business? Will my economic situation improve? What will the president tweet next? Uncertainty about these and many other situations can generate anxiety, stress, and fear of different levels of intensity, some of which can produce inappropriate responses that will hurt rather than help us. Fear and stress can also cause alterations in the wiring of the brain that can affect our behavior even when the events that triggered the fear are not present anymore, such as in post-traumatic stress disorder (PTSD). This is why the study of fear and related phenomena is an active area of research in the biological and psychological sciences. Going back to Alex, his climb of El Capitan will be featured in a National Geographic movie (yes, it was filmed!) entitled Free Solo, which will be released this fall in select theaters. Alex has also written a book, Alone on a Wall, where he details his many climbing exploits before his monumental El Capitan climb. Note: the film about Alex's free solo climb won an Oscar for best documentary. El Capitan image by Mike Murphy is used here under an Attribution-ShareAlike 3.0 Unported license. The image from Alex's website is displayed here under the legal doctrine of Fair Use as described in Section 107 of the Copyright Act.
Let's face it. We don't hold the contents of our intestine in high regard. They are unsightly, malodorous, and, when not disposed of appropriately in areas with high concentrations of people, they can lead to disease. The less time spent in their presence, the better. In fact, the whole philosophy behind toilets seems to revolve around giving us the power to immediately make our droppings disappear with the flip of a handle. Our disgust with what comes out of the business end of our intestine is even reflected in our language, where we have a large number of epithets to equate worthless objects or persons we find to be truly despicable with, well…excrement. All this negative focus is a great shame, as the lowly turd is a vital part of the exams that clinicians perform to diagnose a host of diseases, as was pointed out (with some exaggeration) in the famous "Poo Song" in the television series "Scrubs". However, our dung has very important functions in both health and disease beyond serving as mere diagnostics. With some hindsight this concept seems obvious, considering that the number of bacteria in our bowels is about the same as the number of cells in our bodies, and that all these bacteria and other microorganisms actively metabolize foodstuffs in close proximity to the lining of our intestines. But only recently have scientists begun performing in-depth studies of the functions of what is formally called the "intestinal microbiome". What they are discovering is amazing. For example, the US and other industrialized nations are currently dealing with an obesity epidemic, and scientists have found that the intestinal microbiome plays a role.
As it turns out, the bacterial makeup of the intestinal contents of obese and non-obese people is different. Obese people seem to have bacteria that promote obesity! Scientists working with germ-free mice have found that these animals are resistant to obesity caused by a high-fat diet. Furthermore, by recolonizing these mice with specific strains of bacteria, scientists have found that not only do some bacteria promote obesity, but others protect against the effect of the obesogenic bacteria. The mechanisms by which this happens are not yet clear, but gut bacteria may modulate the levels of satiety hormones released from the intestine or may affect the immune response and the physiology of adipose tissue by means of bacterial components that leak through the lining of the intestine into the blood. This suggests that we can reduce a person's propensity for obesity by populating their intestinal tract with the right type of bacteria. Another finding from studies with germ-free animals is that these animals have changes in their behavior compared to animals with an intact intestinal microbiome. That's right: the levels of several brain molecules that regulate mood and cognition can be affected by the makeup of the bacterial content of the gut! In fact, in humans there are various conditions involving changes in gut bacteria, such as irritable bowel syndrome, that are accompanied by feelings of anxiety and depression, and certain psychiatric conditions are also believed to be affected by the makeup of the bacteria of the gut. This again suggests that if we find the right bacterial combination to introduce into the gut, we may be able to positively regulate brain function. But the effect of gut bacteria doesn't stop here. Gut bacteria are also modified in people suffering from several maladies such as cardiovascular disease, diabetes, cancer, and others. Gut bacteria may even hinder or enhance the effects of certain drugs. The influence of the intestinal microbiome on the human body has led some scientists to claim that it should be considered an independent organ, just like the liver or the pancreas. Unfortunately, the intestinal microbiome is a fiendishly complicated association of thousands of strains of bacteria and other microorganisms interacting with one another and with the cells of the intestine, and the specific bacterial makeup of the microbiome varies from one person to another and can change with diet. Despite all the claims made by pre- and probiotic companies, we still do not have a widely applicable way of modifying the intestinal microbiome to achieve specific effects on human health, but this is an active area of investigation. Photo of Escherichia coli bacteria from the Rocky Mountain Laboratories, NIAID, NIH, is in the public domain.