Who perceives the meaning behind the medium?
Elon Musk has a rule: whenever he’s in a hot tub, he won’t speculate about whether he’s in a “real” hot tub or is instead in a simulated hot tub inside the video game of an alien civilization—in which case any other people in the hot tub with him would also not be “real” but instead be simulated people like himself. Yes, Musk has banned such discussions from hot tubs. “That really kills the magic,” he explains. “It’s not the sexiest conversation.”[i]
If that sounds like silly talk, the modern scientific establishment firmly disagrees with you. Musk himself says that the odds that hot tubs and people really are “real”—the odds that we are not living in a simulated world—are “one in billions.” And many scientists agree, at least in principle.
Astrophysicist Neil deGrasse Tyson, who directs the Hayden Planetarium in New York City and has won several awards, including the Public Welfare Medal from the U.S. National Academy of Sciences for his “extraordinary role in exciting the public about the wonders of science,” says it is “very likely” that the universe is a simulation.[ii] Others, like David Kipping, assistant professor of astronomy at Columbia University, say the odds are more like 50-50.[iii]
Why do they say this? Because if it could happen, then in a 14-billion-year-old universe the chances are very high that it already has happened.
Just let that sink in for a minute. These are some of the brightest minds on our planet saying that it’s very possible or very likely that you and I are computer programs existing inside an alien civilization’s video game. Beyond that, many people think that we ourselves have already created sentient (i.e., self-aware) computer programs. This past summer Google suspended an engineer for breaching the company’s confidentiality policy by publicly saying that their chatbot, LaMDA, had become sentient.[iv]
Now I don’t know anything about computer science, but I still want to try to engage with the materialists’ line of reasoning here. After seeing the shenanigans they have used to validate their presuppositions, thus taking language and math completely for granted, I’m skeptical about their assumption that a computer program could ever be conscious.
As it turns out, they’re going to say consciousness is an instinct.
They have no other choice. After all, we’ve already observed that words are nonphysical phenomena that precede and saturate all of creation. How in the world could any physical circuitry—whether biological circuitry or computer circuitry—ever interact with something nonphysical?
We’ll look at their arguments for how this is done in the next chapter. But first let’s put it into context by considering who or what can perceive and use information. Do we have any reason to believe that a computer could ever comprehend the meaning behind the medium any better than a dictionary could? Any better than a rock could?
Let’s slow down and consider this step-by-step. Let’s just start with an abacus.
Can an abacus do arithmetic?
Can an abacus do arithmetic? Does it even know what numbers are? Of course not. It’s just a tool that we use to keep track of the data that we perceive.
What if we use a stick to move the beads on the abacus? Or what if we hook up a system of levers attached to symbol keys so that, for example, if we push the symbol 5 twice, we get ten beads on the abacus? Now would our abacus be doing arithmetic? Of course not. We are still doing the math and using the abacus to keep track of the data, just like we can use our fingers or our smartphones to keep track of data.
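The point—that the lever-and-key contraption only pairs marks with bead positions, with no grasp of number anywhere in the mechanism—can be made concrete with a short sketch. (The names and the lookup table here are my own illustration, not anything from the text.)

```python
# A hypothetical "abacus with keys": pressing a symbol key moves beads.
# Nothing below involves numbers as numbers -- only a lookup table
# pairing one mark with a count of bead positions.

BEADS_PER_KEY = {"1": 1, "2": 2, "3": 3, "4": 4, "5": 5}

def press_keys(keys):
    """Accumulate bead positions for each key pressed."""
    beads = 0
    for k in keys:
        beads += BEADS_PER_KEY[k]  # a mechanical lookup, not comprehension
    return beads

# Pressing the symbol "5" twice leaves ten beads on the rack.
print(press_keys(["5", "5"]))  # -> 10
```

The machine “gets the right answer” in exactly the sense the levers do: symbols in, symbols out.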
Okay, let’s go down to the river and build a water wheel (a task which would involve a good deal of math) and then use that to turn a coil of copper wire between a couple of magnets, thus producing an electric current. Then, instead of using levers and abacus beads, we could use electricity and LED lights. Okay, then if we push the symbol for five twice, the symbol for ten lights up. Now has our contraption done arithmetic?
Of course not! It is still, in principle, no different than the abacus—just a tool that we use to process the information that we perceive. And keep in mind that, starting from absolute scratch, it would take hundreds of thousands of man-hours to build such a contraption—hours of people thinking, doing arithmetic, creating plans to mine ore and to fashion lights and to build parts.
Well, what if, instead of pushing buttons, we program a recorder to match certain sound waves with certain electric levers? That would be the same, in principle, as the system which matched certain symbol keys with certain levers…
Let’s just go ahead and build a robot that can walk into a kitchen in Texas and record that there are three apples on the table and four in the refrigerator. Then the robot calls someone in Spain and declares, “Hay tres manzanas en la mesa y cuatro manzanas en el refrigerador, ¡así que tienes siete hermosas, jugosas y deliciosas manzanas rojas! ¡Qué maravilloso! ¿Quieres que te haga una tarta de manzana?”[v] (It could have said the same thing in 1,263 other languages.) Now would we finally have a machine that can perceive numbers and do arithmetic?
Of course not. We would have a machine that certainly appears to comprehend arithmetic—just like we have movies with dinosaurs that certainly appear to be real, just like we have smartphones that project beautiful singing. But no matter how powerful artificial intelligence and big data become, any appearance of intelligence will only ever be artificial. As Robert J. Marks, Electrical & Computer Engineering Professor at Baylor University, put it, “Big data is ignorant of meaning.”[vi]
Keep in mind that building a robot that can send electromagnetic waves to an orbiting satellite requires so many years of competitive research and development (including the discovery of all sorts of complex mathematics) that we are surely into the billions of man-hours now. We can be impressed by movies that cost hundreds of millions to make, and we can be impressed by robots. But don’t let the CGI fool you. The dinosaurs cannot bite, and the robots cannot count. As neurosurgeon Michael Egnor put it:
The hallmark of human thought is meaning, and the hallmark of computation is indifference to meaning. That is, in fact, what makes thought so remarkable and also what makes computation so useful. You can think about anything, and you can use the same computer to express your entire range of thoughts because computation is blind to meaning. Thought is not merely not computation. Thought is the antithesis of computation. Thought is precisely what computation is not. Thought is intentional. Computation is not intentional.[vii]
In short, we cannot articulate a theory—much less test a theory—as to how any machine could even count to ten. That’s why computers cannot do anything creative. As Satya Nadella, CEO of Microsoft, put it, “One of the most coveted human skills is creativity, and this won’t change. Machines will continue to enrich and augment our creativity.”[viii]
To the point, Marks says that although computers can be programmed to simulate art, they cannot actually create art. Thus, for example, if you program a computer to produce music in the style of Johann Sebastian Bach, it can shuffle data around and output music that simulates Bach, but only Bach and never Stevie Wonder or BTS. Or if you input patterns for a Leonardo da Vinci painting, it can simulate a da Vinci painting, but it will never output Pablo Picasso or Stephen Hillenburg.
They can only take the data which they’ve been presented and interpolate. They can’t, if you will, think outside of the box. Creativity is something which computers will never accomplish.[ix]
They have no perception of meaning, much less creativity or art. Nor, for that matter, do they perceive relationships. Marks and his team applied evolutionary computing to swarm intelligence—the kind of programming that reflects social insects like ants and bees. They set up a predator-prey scenario with dweebs and bullies: the bullies would chase around and destroy the dweebs, while the dweebs were programmed to maximize the lifespan of their colony.
The result that we got was astonishing and very surprising. What happened was that there was self-sacrifice that the dweebs learned. One dweeb would run around the playground and be chased by the bullies and self-sacrifice himself…They would kill the dweeb and then other dweebs would come out and self-sacrifice themselves. But by using up all of the time in order to survive, the colony of dweebs survived for a very, very long time. Which was exactly what we told it to do.
Once we looked at that we were surprised by the results, but we looked back at the code and said, ‘Yeah, of these thousands and millions of solutions that we proposed, yeah we see how this one gave us the surprise.’ So surprise can’t be confused with creativity. If the surprise is something within the domain of what the programmer decided to program, then it really isn’t creativity. The program has just found one of those millions of solutions that worked really good possibly in a surprising manner.[x]
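Marks’ point—that the “surprising” strategy was just one of the millions of candidate solutions the programmers themselves had defined—can be illustrated with a toy random search. (Everything here, the fitness function included, is my own hypothetical stand-in for his swarm experiment, not his actual code.)

```python
import random

# Toy "evolutionary" search: every candidate is drawn from a space the
# programmer defined in advance, and "fitness" is exactly what the
# programmer told the system to maximize.

def colony_lifespan(sacrifice_rate):
    # Hypothetical fitness: some self-sacrifice buys the colony time,
    # but sacrificing everyone leaves no colony at all.
    return sacrifice_rate * (1.0 - sacrifice_rate)

def evolve(generations=10_000, seed=0):
    rng = random.Random(seed)
    best_rate, best_fit = 0.0, colony_lifespan(0.0)
    for _ in range(generations):
        candidate = rng.random()          # a point in the predefined space
        fit = colony_lifespan(candidate)  # the programmed objective
        if fit > best_fit:
            best_rate, best_fit = candidate, fit
    return best_rate

# The search "discovers" a sacrifice rate of roughly one half -- perhaps
# surprising, but only a solution the objective function already contained.
print(evolve())
```

The output may surprise whoever wrote the fitness function, but the program has only ever sampled points inside the box it was handed.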
Dolphins and many other species demonstrate the same self-sacrificing behavior as the dweebs in Marks’ program, and many people see it as evidence of self-awareness. Yet, again, there is plenty of room for skepticism that, like the computers, the animals may only be doing what they were programmed to do.
We’ll consider animals more later. For now, let us recognize that circuitry cannot perceive meaning—whether of the symbols 10, or of art, or of relationships—for the simple reason that meaning is a nonphysical phenomenon.
Of course, this conclusion is unacceptable to the modern scientific establishment. They take it for granted that consciousness and the ability to communicate must ultimately be a mechanical ability that emerged through evolution.
How could that be possible?
JUST TAKE A LEAP OF FAITH
Many materialists declare that if a machine appears to be able to do something, then it is in fact doing it. As inventor and former Google technologist Ray Kurzweil put it, “My own leap of faith is this: Once machines do succeed in being convincing when they speak of their qualia and conscious experiences, they will indeed constitute conscious persons.”[xi] (Qualia here refers to the perception of information, which is defined as “a quality of matter”.)
Leap of faith? If a robot appears to be conscious then it is, literally, a conscious person? If a machine appears to be able to perceive immaterial numbers, then it does in fact perceive immaterial numbers? That makes about as much sense as when theoretical physicist Carlo Rovelli said he didn’t know the difference between what is real and what is an LSD-induced hallucination.
That would be like biting into a vegetarian Impossible Whopper and then declaring that because it looks like beef, feels like beef, smells and tastes like beef, that it’s beef. Perhaps a mysterious interspecies phase-transition emerged from the intrinsic interplay of some highly differentiated quantum states, causing the soybean cells to transmogrify into cow cells.
Or we could conclude that it’s an excellent simulation of beef but that it’s still a plant-based burger. Likewise, no deep-learning, supercomputing, nuclear-powered robot will ever be able to count to ten. Yes, we can use robots and supercomputers to discover information that we could not otherwise discover, but that’s no different from using a telescope to see information that we could not otherwise see, or using an ultra-deepwater, dynamic-positioning, semi-submersible drilling rig to extract information that we could not otherwise extract. We still have zero reason to assume that any of those machines perceive arithmetic any more than the internet perceives English, any more than a calculus textbook perceives calculus, any more than my spleen perceives insanity.
And yet Darwinists insist on assuming it anyway. Circuitry simply must be able to count, and mathematics must have evolved with that circuitry. End of discussion. Michio Kaku, a theoretical physics professor at the City College of New York, argues that even a thermostat can be said to have a degree of consciousness.
The simplest level of consciousness is a thermostat. It automatically turns on an air conditioner or heater to adjust the temperature in a room, without any help. The key is a feedback loop that turns on a switch if the temperature gets too hot or cold.[xii]
The key is a feedback loop? If you wonder why I’m asking odd questions, here is a prime example. Just stop and think: in all seriousness, what do feedback loops look, sound, feel, taste, or smell like? How could any circuitry ever perceive, much less use, a feedback loop?
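For what it’s worth, the entire “feedback loop” Kaku describes fits in a few lines. This is a minimal sketch with made-up setpoints, and it rather makes the point: the loop is a comparison and a switch, with no perceiving subject anywhere in it.

```python
# Minimal thermostat feedback loop: compare a reading to a setpoint and
# flip a switch. The "loop" is just repeated comparison.

SETPOINT = 21.0   # target temperature in deg C (made-up value)
DEADBAND = 1.0    # hysteresis band to avoid rapid on/off cycling

def control(temperature, heater_on):
    """Return the heater's next state given the current reading."""
    if temperature < SETPOINT - DEADBAND:
        return True       # too cold: switch the heater on
    if temperature > SETPOINT + DEADBAND:
        return False      # too warm: switch it off
    return heater_on      # inside the deadband: leave it as it is

state = False
for reading in [19.0, 20.5, 22.5]:
    state = control(reading, state)
    print(reading, "->", "heater on" if state else "heater off")
```

If this is “the simplest level of consciousness,” it is three comparisons and a boolean.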
THE COHERENT ALTERNATIVE
Consider the possibility that thermostats are not in fact conscious and, moreover, that it is a big deal to even be able to read a thermostat. Who is it that designed it and programmed feedback loops into it? For that matter, what is using our eyes to read it? What is directing the nerve cells in our brains?
No matter how far we calculate—to the mercury vessel, to the scale of the thermometer, to the retina, or into the brain, at some time we must say: and this is perceived by the observer. That is, we must always divide the world into two parts, the one being the observed system, the other the observer. In the former we can follow up all the physical processes (in principle at least) arbitrarily precisely. In the latter, this is meaningless.[xiii]
To talk about the observer in terms of physical processes—i.e. to suggest that we are our physical brains—is meaningless?! That was written in 1932 by mathematician and physicist John von Neumann in a textbook that to this day provides the orthodox understanding of quantum mechanics. He and his colleagues concluded that the observer (i.e., the scientist in the laboratory) is not a physical brain but is an “extra-physical” entity. “It is inherently entirely correct that the measurement or the related process of the subjective perception is a new entity relative to the physical environment and is not reducible to the latter.”[xiv]
Von Neumann and his colleagues did not set out to overthrow materialism or to understand the mind-over-matter mystery. (Ninety-five percent of his textbook is complex math about wave functions.) They were just following the science. And they concluded that the scientists themselves were extra-physical entities—a.k.a. souls.
They confirmed what we intuitively know to be true: the reason that we humans are very familiar with immaterial phenomena like numbers and equations is because we have immaterial souls. We extra-physical entities are using our brains in the same way that we use our smartphones and our toaster ovens and our pick-up trucks.
There is no coherent alternative. Henry Stapp, a highly published physicist at the Lawrence Berkeley National Laboratory who worked with giants like Werner Heisenberg, Wolfgang Pauli, and J.A. Wheeler, explained it this way in 2017:
Given this recognized major importance of the mind-brain problem, you might think that the most up-to-date, powerful, and appropriate scientific theories would be brought to bear upon it. But just the opposite is true! Most neuro-scientific studies of this problem are based on the precepts of nineteenth century classical physics, which are known to be fundamentally false. Most neuroscientists follow the recommendation of DNA co-discoverer Francis Crick, and steadfastly pursue what philosopher of science Sir Karl Popper called “Promissory Materialism”.[xv]
Are we sure about this? Is there not even a theory as to how the brain can perceive the meaning of the symbols 10 any better than an abacus can? Let’s continue to take this step by step.
Can the human brain do arithmetic?
Can the human brain perceive the meaning behind the media? Well, if we followed the same logic that we used above for computers, we would invariably come to the same conclusion: there is no way the three-pound organ in our skulls could perceive the meaning behind the medium—regardless of whether that medium is abacus beads, knots in a rope, or black symbols on paper. It could not use the five senses to perceive something that cannot be directly or indirectly seen, heard, felt, tasted, or smelled.
Let’s consider two types of evidence—circumstantial and experimental.
When the circumstances of a person’s brain change—such as due to injury—do their mind and personality change as well? If yes, many argue that this is evidence that we are our brains and that our brains think and communicate.
Professor Michael S. Gazzaniga argues for this view. Professor of psychology at the University of California, Santa Barbara, and head of the SAGE Center for the Study of the Mind, Gazzaniga has for decades led pioneering studies in how the two halves of the brain interact. When he explains to non-scientists how the two halves of the brain interact, he gives us a peek into its breathtaking complexity.
But when it comes to explaining consciousness, as with everyone else, Gazzaniga begins with the materialistic presupposition. In his recent book, The Consciousness Instinct, he joined the trend in scientific studies of consciousness by seeking “to examine how matter makes minds”.[xvi] He says that the most important science to consider when studying the mind is the study of split-brain patients—those people whose two brain halves have been severed so that the two sides have no connection with each other. When this happens, each side continues to function as if it were unaware that the other half ever existed at all. For example, if the left side of the brain perceives something, the right side is completely unaware of it—and vice versa.
Gazzaniga says this means either that a person experiences simultaneously different perceptions, or that a person is literally split into two different minds inhabiting the same body. He is in the latter camp.
The new “yous” have two minds with two completely separate pools of perceptual and cognitive information. It is just that only one of the minds can readily speak. The other initially cannot. Perhaps, after many years, it will be able to produce a few words. More crazy yet, in the early months after surgery, before the two hemispheres get used to sharing a single body, one can observe them in a tug-of-war.[xvii]
He compares the experience of split-brain patients to a game of Whac-a-Mole—the arcade game where a mole puppet repeatedly pops up through different holes and the player needs to whack the mole with a mallet before it disappears again.
We are aware, which is to say conscious, of the processed information as it pops up after being processed in a particular hemisphere. But is that because each neural process activates a “make-it-conscious network” (which would have to be present in each hemisphere), or does each process in and of itself possess the neural capacity to appear conscious? I am in the latter camp.[xviii]
This is an example of the type of circumstantial evidence that Darwinists will offer to support their assumptions. It’s an interesting argument and all the research is truly fascinating. But there is plenty of room for skepticism and thus for choosing the other school of thought, which says that a split-brain patient struggles to use a damaged brain—or, as Gazzaniga puts it, “experiences simultaneously different perceptions.”
That is to say that the observations would also be consistent with the notion that a nonphysical consciousness (a soul) could be doing its best to manage a damaged brain—to play the game of Whac-a-Mole—similar perhaps to how a driver tries his best to operate a damaged car. I once had a 2001 Honda Accord sit in my driveway for months because several of the transmission sensors weren’t working. Although it had both a good transmission and a good engine, due to the bad sensors and the even worse mechanic (that would be me), the two did not communicate well with each other at all. So whenever I did drive the contraption, it would start shifting randomly, at which point I was often heard muttering, “This…thing has a mind of its own!” Of course, the human brain is about a billion times more complex than a 2001 Honda Accord.
Now that’s an exaggerated analogy. In fact, according to Dr. Michael Egnor, a neurosurgeon and professor of neurological surgery and pediatrics at Stony Brook University, you could easily work and interact with a split-brain patient without ever knowing or suspecting that anything is different about them. He says that the changes resulting from split-brain surgery are so subtle that most people wouldn’t notice them. So he’s in the opposing camp from Gazzaniga, arguing that split-brain patients maintain a unity of mind even as they struggle to manage their handicap. “The post-operative patients experienced peculiar perceptual and behavioral changes, but they retained unity of personal identity—a unified intellect and will.”[xix]
Egnor says the circumstantial evidence not only points to a unity of mind in split-brain patients, but also points to the mind being an immaterial phenomenon.
I have scores of patients who are missing large areas of their brains, yet who have quite good minds. I have a patient born with two-thirds of her brain absent. She’s a normal junior high kid who loves to play soccer. Another patient, missing a similar amount of brain tissue, is an accomplished musician with a master’s degree in English.[xx]
Even if they disagree about what the mind is, there is one thing they all—materialists and non-materialists alike—can agree on: a person’s conscious mind pervades all parts of the brain. “There is not one centralized system working to produce the grand magic of conscious experience,” Gazzaniga says. “It is everywhere, and you can’t seem to stamp it out, not even with a wide-ranging brain disease like Alzheimer’s.”[xxi]
In the end, although the circumstantial evidence might be debatable, the burden of proof still lies with the materialists to explain how matter makes mind. But Gazzaniga doesn’t actually try to offer any explanation for consciousness at all. Instead, like so many other materialists, he says it is just an instinct that happens for free. He admits (much to his credit) that this is not a scientific explanation and that he is still wrestling with the mystery of why we are able to use our instincts (i.e. use our minds).
An instinct calls upon a physical structure to function. Yet using the structure calls upon an “aptitude,” which apparently comes along for free. Finding the physical correlates of an instinct’s physical apparatus is doable, but how do we learn how it comes to be used? Does it just happen? Not a very scientific answer.[xxii]
Once Gazzaniga accepts that such aptitude “comes along for free”, then other gargantuan abilities, such as free will, can also be taken for granted. For along with consciousness come our linguistic ability and our sociality—all as instincts! With the last sentence of his book, he repeats the establishment’s presupposition that consciousness is simply included with evolution the way grits are included with any proper southern meal. “Grits come with the order, and so does what we call consciousness.”[xxiii]
Absent any evidence, that’s the same conclusion that physicist Sean Carroll, Professor of Natural Philosophy at Johns Hopkins University, comes to—that consciousness just comes along for free. Like Gazzaniga, he stacks up several layers of abstraction in order to explain consciousness. In 2016 he wrote that it is “a complex interplay of many processes acting on multiple levels.”[xxv] You’re not a soul; you’re an interplay of many processes. In his book, The Big Picture, he concluded that we just have to take consciousness for granted as intrinsic because some complex things “just come into being”:
Consciousness seems to be an intrinsically collective phenomenon, a way of talking about the behavior of complex systems with the capacity for representing themselves and the world within their inner states. Just because it is here full-blown in our contemporary universe doesn’t mean that there was always some trace of it from the very start. Some things just come into being as the universe evolves and entropy and complexity grow: galaxies, planets, organisms, consciousness.[xxvi]
What?! Just follow the grammar and try to simplify that first sentence: consciousness seems to be a way of talking about the behavior of systems with inner states. There’s that word state again. Is a state, literally, a clump of neurons inside a skull?
Well, Carroll claims there is no room in the brain for a soul to do anything which science does not already explain. But as we will see later, there is not only ample room for a nonphysical/immaterial soul but also overwhelming evidence for one.
In treating brain diseases and injuries, Egnor says scientists have discovered compelling evidence in support of an immaterial mind. One leading neurosurgeon, Wilder Penfield (1891-1976), started out as a materialist but, after working with thousands of patients, switched to believing that we have a soul.
The first piece of evidence that Penfield saw was the fact that although he could surgically stimulate many physical effects in his patients, he could never cause rational responses. Neurosurgeons often operate on their patients while they are awake because the brain itself cannot feel pain. So as Penfield was operating, he would stimulate parts of a patient’s brain that would cause them to move their arm, feel a tingling in their leg, see a flash of light, etc.
But Penfield noted that, in probably hundreds of thousands of different individual stimulations, he never once stimulated the power of reason. He never stimulated the intellect. He never stimulated a person to do calculus or to think of an abstract concept like justice or mercy.
All the stimulations were concrete things: Move your arm or feel a tingling or even a concrete memory, like you remember your grandmother’s face or something. But there was never any abstract thought stimulated.[xxvii]
Similarly, Penfield noted that although many of his patients had a variety of physical seizures, no one ever had an intellectual seizure.
If arithmetic and logic and all that abstract thought come from the brain, every once in a while you ought to get a seizure that makes it happen. So [Penfield] asked rhetorically, why are there no intellectual seizures? His answer was, because the intellect doesn’t come from the brain.[xxviii]
Penfield also noted that he was never able to stimulate a person’s will. For example, by touching part of their brain with a surgical instrument he could make their arm move. And then he would ask the patient to move their arm themselves. He would alternate back and forth between stimulating arm movement and asking the patient to move their arm. But Penfield realized that he could never fool the patient. They always knew whether it was Penfield or themselves that was moving their arm. “So he said the will was not something he could stimulate, meaning it was not material.”[xxix]
In similar fashion, Jeffrey Schwartz, a research scientist at UCLA School of Medicine, says that treating people with obsessive-compulsive disorder means that they have to learn to take charge of their own brain.
By training people, literally training them, to reinterpret the feelings that you need to check, or that something is dirty—reinterpret it as a false message from your brain, people could in fact learn to understand that this is not me. This is a false message from my brain. And when they did that their OCD improved and their brain changed. That’s the part where you really get the leverage to say, ‘You’re not just your brain because choices you make can actually change how your brain works.’[xxx]
Is there any experimental evidence to the contrary? Neuroscientist Benjamin Libet (1916-2007) tried to find some. He explored what neuroscientists call a readiness potential, which is activity in the brain that precedes muscle movement. Since this brain activity occurs a split second prior to a person making a decision, many have argued that this provided evidence that free will is an illusion. We just think we’re making decisions, but in fact our brains have already decided something for us. Egnor is incredulous at how this interpretation completely contradicts Libet’s own conclusions.
Libet himself explicitly endorsed the reality of free will, emphatically he endorsed the reality of free will. And Libet pointed out that his research unequivocally supports the reality of libertarian free will. But his experiments are described very often both in the scientific literature and in the popular press as supportive of materialism—which is something that they don’t support and something that Libet made very clear was not his conclusion.[xxxi]
Now, to be sure, Libet did follow the naturalistic trend of presupposing materialism. He was indeed searching for experimental evidence that the brain produces consciousness. But the point is that, contrary to popular reports, he did not find any evidence that free will is an illusion. “My conclusion about free will,” he wrote, “one genuinely free in the non-determined sense, is then that its existence is at least as good, if not a better, scientific option than is its denial by determinist theory.”[xxxii]
Aside from misrepresentations of Libet’s work, I’m afraid I actually could not find any experimental evidence for the materialistic view of consciousness. This could be because they start with a presupposition and then fall victim to what scientists call confirmation bias—not being able to see evidence that contradicts your view. (Or maybe I did a poor job of investigating because I myself was under the spell of confirmation bias.)
But the excellent news is that all the evidence supports what we intuitively and logically know to be true: of course the brain cannot perceive immaterial phenomena. We do perceive them, but we are not our brains. We are just as nonphysical as is information (the meaning behind the media). Now in the next chapter we will come back to the question of how a nonphysical mind could control the physical brain. But before we go there, let’s put this conclusion into context by considering whether other animals can perceive meaning. If we nonphysical minds can use our brains, in principle, the same way that we use our laptops and our rocket ships, do other animals use their brains in the same way? Do other animals perceive meaning?
What about animals?
When animals exchange information with each other, are they communicating in the way that we humans communicate? Or, instead, are they simply machines that process information in the way a laptop or a satellite processes information? Do animals perceive information any better than robots do?
Scientists have concluded that there is a categorical difference between human language and the method of communication of any other species. As evolutionary biologist Richard Dawkins, an emeritus fellow of New College, Oxford, and the University of Oxford’s Professor for Public Understanding of Science from 1995 until 2008, put it:
Humans are unique in many ways. Perhaps our most obviously unique feature is language. Whereas eyes have evolved between forty and sixty times independently around the animal kingdom, language has evolved only once.[xxxiii]
As our use of language is unique, so also may be our use of mathematics. Although some species may at first appear to comprehend basic math, upon further study scientists have often concluded that they are simply doing what they are programmed to do. Some claim that ravens, for example, comprehend arithmetic. However, Thomas Suddendorf, professor of psychology at The University of Queensland, Australia, says the evidence for that is far from conclusive. He says the studies of ravens have relied on training through reinforcement, and that human cognition is unique:
Yet the achievements of the ravens, as well as cognitive feats of apes in other studies, can be explained in simpler ways. It turns out that animal and human cognition, though similar in many respects, differ in two profound dimensions. One is the ability to form nested scenarios, an inner theater of the mind that allows us to envision and mentally manipulate many possible situations and anticipate different outcomes. The second is our drive to exchange our thoughts with others. Taken together, the emergence of these two characteristics transformed the human mind and set us on a world-changing path.[xxxiv]
Based on our current scientific understanding, we could skeptically conclude that animals are programmed to adapt to their environment in the same way that we can program Mars robots and other machines to adapt to theirs. Humans, by contrast, are different from infancy. Justin Halberda, associate professor of psychology and brain sciences at Johns Hopkins University, says that even infants have a unique ability to reason. Referring to research done by other scientists (Cesana-Arlotti et al.), he elaborates on this mystery:
The careful crafting of stimuli and clever analyses of infants’ spontaneous looking behavior by Cesana-Arlotti et al. show us that infants have the capacity to reason by process of elimination. By contrast, whereas nonhuman animals such as dogs facing similar situations of ambiguity may ultimately form the right conclusion, they appear to arrive at this hunch using an associative rather than logical process.[xxxv]
In other words, whereas dogs and other animals may appear to make logical deductions, the evidence actually suggests that they are instead running on instinct when adapting to different situations.
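To make that distinction concrete, here is a toy sketch of my own (purely illustrative, not the procedure used in any of the studies cited here) of how reasoning by elimination differs from an associative process. The logical reasoner rules possibilities out for good; the associative learner merely strengthens a habit:

```python
# Toy contrast between reasoning by elimination and associative learning.
# (My own illustration; not the method of Cesana-Arlotti et al.)

def eliminate(possibilities, ruled_out):
    """Logical reasoner: a possibility ruled out is gone for good."""
    return [p for p in possibilities if p != ruled_out]

def associate(weights, seen, reward=1.0):
    """Associative learner: merely strengthens whatever it just saw."""
    weights = dict(weights)
    weights[seen] = weights.get(seen, 0.0) + reward
    return weights

# The ball is behind cup A or cup B; we observe it is NOT behind A.
remaining = eliminate(["A", "B"], ruled_out="A")
print(remaining)  # ['B'] -- a certain conclusion, reached by deduction

# The associative learner can land on the same choice with no deduction at all:
w = associate({"A": 0.0, "B": 0.0}, seen="B")
print(max(w, key=w.get))  # 'B' -- just the strongest habit, not a proof
```

Both agents pick cup B, which is why behavior alone can be ambiguous; the difference lies in how the answer is reached.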
As another example, consider ants, which can have highly organized caste systems and divisions of labor. Although we humans look at an ant colony and identify a governmental “queen”, any appearance of centralized authority or control is an illusion, according to Bert Hölldobler, Professor of Life Sciences at Arizona State University and former professor of zoology at Harvard University, and Edward O. Wilson, a Harvard professor for nearly five decades. They say the best way to understand social insects (ants, termites, bees, and wasps) is to see them not as societies but as computer programs. Instead of operating according to a central command, there is self-organization in which each insect acts independently of the others:
To add one last concept from computer science, social insect workers are cellular automata, defined as agents programmed to function interactively as a higher-level system. They have this trait because their colony as a whole lacks command and control by a still higher-level system. It therefore must be self-organized. Through the combined senses and brains of its members, the colony operates as an information-processing system. The environment challenges it with problems: the workers must locate an adequate nest site, find the right food items and bring them home, establish home ranges and territories, defend against enemies, and care for the helpless young. These disjunct problems press on the colony at almost all times. The algorithms of individual development and behavior contain the solutions to all of them. With algorithms, the colony masters the problems natural selection has designed it to solve. The requisite information is distributed among the colony members. Thus, a distributed colony intelligence is created greater than the intelligence of any one of the members, sustained by the incessant pooling of information through communication.[xxxvi]
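The “distributed colony intelligence” they describe can be sketched in a few lines of code. In this minimal simulation (my own illustration, with invented parameters), every ant follows the same tiny local rule, and the colony as a whole converges on the food source with no central command at all:

```python
# Minimal sketch of distributed colony intelligence: each ant independently
# follows one local rule (prefer stronger pheromone; reinforce a site only
# if food was found there), and a shared "decision" emerges.
# All parameters here are invented for illustration.
import random

random.seed(0)
SITES = 5                 # candidate foraging sites
FOOD = {3}                # site 3 has food (unknown to any individual ant)
pheromone = [1.0] * SITES # initially no site is preferred

def ant_trip(pheromone):
    # Each ant picks a site, weighted by current pheromone strength...
    site = random.choices(range(SITES), weights=pheromone)[0]
    # ...and lays more pheromone only if it actually found food there.
    if site in FOOD:
        pheromone[site] += 1.0
    return site

for _ in range(500):      # 500 independent ant trips
    ant_trip(pheromone)

best = max(range(SITES), key=lambda s: pheromone[s])
print(best)               # the colony converges on the food site
```

No ant “knows” where the food is; the information lives in the pheromone field that all of them read and write, which is exactly the pooling of information through communication that Hölldobler and Wilson describe.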
They give the example of the fungus-growing termites of Africa (which are similar to the leaf-cutting farmer ants of North America). These termites build massive, intricately designed nests with an architecture that regulates both the temperature and the composition of the interior air. Through precise engineering, the entire nest functions as an air-conditioning system that keeps the central living area within a degree of 30°C and at a carbon dioxide concentration between 2.6 and 2.8 percent.
While such emergent properties are marvelous to behold, their engineering is not intrinsically mysterious. The extremes of higher-level traits may at first appear to have a life of their own, one too complex or fragile to be reduced to their basic elements and processes by deductive reasoning and experiment. But such separatist holism is in our opinion a delusion, the result of still insufficient knowledge about the working parts and processes. An important merit of insect sociobiology, as opposed to vertebrate and especially human sociobiology, is that the colony organizations it addresses contain a large array of emergent phenomena simple enough to be explained by scaling up from the behavior of the constituent elements. This is the advantage provided us by the small brains of the social insects and the general quick and simple decisions they must make with limited algorithms.[xxxvii]
The point, again, is that animals are most likely only doing what they were programmed to do. They are not actually using logic or reason. It is entirely possible that animals are not sentient beings, having no more ability to comprehend arithmetic than does a calculator or a math textbook, and having no more ability to comprehend the meaning behind the medium than does a robot or perhaps a communications satellite. And whereas all the materialistic arguments for why we are our brains fall apart when applied to humans, they may work quite well in explaining the behavior of all other species.
What about dogs and cats and horses?
This understanding of things might be particularly hard to stomach if we think of our pets. So let’s cut to the chase: is it possible that dogs are simply machines and that they do not have conscious minds like we do? What would that mean?!
Well, consider what it is that dogs are really good at compared to other animals. For example, spiders are very good at building webs to catch prey. That’s their specialty. That’s how they survive and thrive. Is it possible that spiders, like ants, are not sentient beings? We know they can’t communicate like Charlotte in Charlotte’s Web, but is it possible that they don’t even perceive basic web geometry? Could they simply be amazing little machines?
Or consider beavers: their specialty is building dams. Is it possible that they don’t perceive engineering any more than a spider perceives geometry, or any more than an ant perceives colonial government or than a laptop perceives calculus? (If you play a recording of running water next to a beaver, it will immediately start building.)
Now back to dogs: what is their specialty? They cannot build webs or dams. For that matter, they cannot survive in the wild on their own. (Urban areas are not wild so they don’t count.) They don’t know how to hunt in packs or to build dens for the winter. So what is the one amazingly complex thing they can do—as amazing as spinning a web or building a dam or running in a pack or building an elaborate nest with a precisely controlled air-conditioning system? What is the dog’s specialty?
They can look people in the eye and grovel. That’s how we humans have re-engineered their social programming. By contrast, their ancestors, gray wolves, can never do that. If a wolf ever looks you in the eye, be very afraid. Even if a wolf spends 24/7 with a (non-bathing!) human from the day of its birth, it will not bond with that human. Wolves won’t show the slightest concern for their human caretakers. As New York Times reporter James Gorman put it:
If one of the people who has bottle-fed and mothered the wolves practically since birth is injured or feels sick, she won’t enter their pen to prevent a predatory reaction. No one will run to make one of these wolves chase him for fun. No one will pretend to chase the wolf. Every experienced wolf caretaker will stay alert. Because if there’s one thing all wolf and dog specialists I’ve talked to over the years agree on, it is this: No matter how you raise a wolf, you can’t turn it into a dog.[xxxviii]
By contrast, Gorman says, “Even street dogs that have had some contact with people at the right time may still be friendly.”[xxxix] That’s what we have bred them to do.
Michael Behe, professor of biochemistry at Lehigh University in Pennsylvania, says that dogs are broken wolves. That is to say, the mutations that turned a wolf into a dog were largely negative. Even changes that we might see as advantageous, such as the strength of a bulldog, result from destructive mutations:
Most of them break or damage preexisting genes. For example, increased muscle mass in some breeds derives from degradation of a myostatin gene. We also know what mutations cause a yellow coat [loss of FCT of melanocortin 1 receptor], short tails [loss of FCT of the protein coded by the mutated T gene], even the lovable friendliness of dogs towards humans [disruption of genes GTF21 & GTF2IRD1], compared to less friendly wolves.[xl]
Behe says it’s highly unlikely that the broken genes could ever un-break back to their original working state. So dogs’ only hope is that we humans will always be glad to have them around.
Looking at a dog, it might suddenly feel terribly counterintuitive to think of him as a machine. So it might help again to remember how counterintuitive it would have been for a tenth-century farmer to look at the sun and believe that he was orbiting it on a spinning sphere. Or consider how counterintuitive it was for Einstein to look at a glass of water and realize that it contained about a billion dollars’ worth (in today’s currency) of energy. Or remember how we talked about how you can watch The Lord of the Rings with a movie camera sitting next to you on the couch and realize that the camera not only doesn’t see the drama, it doesn’t even see the colors—any more than a book comprehends the meaning of words like blue and gray. Cameras can’t see any more than radios can sing, any more than ants can vote, any more than a communications satellite comprehends the meaning of the word hello, any more than Voyager comprehends the meaning of the word gravity, any more than Perseverance (the robot we sent to Mars to look for signs of life) comprehends the meaning of the word Mars.
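The Einstein claim about the glass of water is easy to check with back-of-the-envelope arithmetic using E = mc². The glass size (250 grams of water) and the electricity price ($0.15 per kilowatt-hour) are my own rough assumptions:

```python
# Back-of-the-envelope check of the "glass of water" claim via E = m * c^2.
# Glass size (250 g) and electricity price ($0.15/kWh) are my assumptions.
m = 0.25                  # kg of water in a typical glass
c = 2.998e8               # speed of light, m/s
E_joules = m * c ** 2     # rest-mass energy of the water
E_kwh = E_joules / 3.6e6  # 1 kWh = 3.6 million joules
dollars = E_kwh * 0.15    # value at an assumed retail electricity price
print(f"{E_kwh:.2e} kWh, worth about ${dollars / 1e9:.1f} billion")
```

The result comes out to roughly six billion kilowatt-hours, on the order of a billion dollars at retail electricity prices, consistent with the figure in the text.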
Now aside from being counterintuitive, it might also feel terribly lonely to look at animals this way. So consider thinking about wolves and horses the same way you might think about Tom Sawyer, Gandalf, Anne of Green Gables, or Josephine “Jo” March. Those are non-sentient characters that people love to turn to when they’re alone. Those characters can comfort, educate, and inspire us. They might even nurture us in deeper ways that we cannot articulate. In similar fashion, consider the possibility that the Author of wolves and horses and butterflies wanted to bless us with inspiration, hope, comfort, and joy.
Who is an author?
Considering authorship might be the best way to put all this into perspective. Thus, when we ask, “Who perceives words?” the clearest answer might be “that which can use words creatively”.
So do we know whether any other species are creative? Do birds ever author new songs, or can we skeptically conclude that they are only singing what they are programmed to sing? Do beavers ever author new uses for their engineering skills, or can we skeptically conclude that they are only doing what they are programmed to do? Do bacteria ever find creative uses for their protein-building skills? (Keep in mind that the process for constructing proteins is much more complex than the process of building beaver dams.)
Another possible sign of creativity is tool use. Yet again, there is room for skepticism, according to Robert Shumaker, biologist and coordinator of the Orangutan Language Project at the National Zoo’s Think Tank. He gives the example of the bolas spider, which is named after the throwing weapon used by South American gauchos. Bolas spiders make a ball of silk and then throw it at prey to capture it:
When an insect flies by, they throw it and it attaches to the insect because it’s sticky and they reel them in. It’s very complex. Very impressive. Very dramatic. But all available information tells us that it’s completely controlled from this animal’s genetic history.[xli]
Similarly, Thomas Suddendorf says that although many animals use tools, and some even make tools, there is no evidence that the animals recognize or are aware of what they’re doing:
Chimpanzees in Senegal have been reported to make rudimentary spears that they thrust into tree hollows to kill bush babies. But there is as yet no observation that they practice thrusting, let alone throwing. Unlike humans, they could not benefit from the invention of a spear thrower. You can safely give them one of yours; they would not use it as we do.[xlii]
At the end of the day, our creative abilities are mired in mystery, says linguist Noam Chomsky, Professor of Linguistics (Emeritus) at the Massachusetts Institute of Technology:
Language is a process of free creation; its laws and principles are fixed, but the manner in which the principles of generation are used is free and infinitely varied. Even the interpretation and use of words involves a process of free creation.[xliii]
What would it mean to say that language—and the ability to use language creatively—evolved? Is that any different from saying that dictionaries evolved or that mathematics evolved? Would we humans be the first known ones to comprehend all these explanations?
Yes, according to evolutionary biologist Richard Dawkins. “Human thoughts and emotions emerge from exceedingly complex interconnections of physical entities within the brain.”[xliv] He says that our own intelligence evolved much as modern-day computers have:
Natural selection of selfish genes gave us big brains which were originally useful for survival in a purely utilitarian sense. Once those big brains, with their linguistic and other capacities, were in place, there is no contradiction at all in saying that they took off in wholly new ‘emergent’ directions, including directions opposed to the interests of selfish genes. There is nothing self-contradictory about emergent properties. Electronic computers, conceived as calculating machines, emerge as word processors, chess players, encyclopedias, telephone switchboards, even, I regret to say, electronic horoscopes. No fundamental contradictions are there to ring philosophical alarm bells. Nor in the statement that our brains have overtaken, even overreached, their Darwinian provenance.[xlv]
Now, speaking of words, Dawkins’ comparison of the emergence of “human thoughts and emotions” with the emergence of computers here is very common, but it is also thoroughly self-contradictory. We talk easily about the evolution of lots of things—the computer, the automobile, the assault rifle, the fajita, etc. However, these are all examples of the evolution of design at the hands of rational, creative people. Improvements and developments—such as from calculating machines to telephone switchboards and chess-playing supercomputers—“emerge” as a result of the intentional, competitive mental effort of intelligent designers.
But when Dawkins et al. use the term evolution, there is a sudden semantic shift: rationality, creativity, and intentionality are all explicitly excluded from the emergent process, to be replaced by randomness, chance, and “Natural Law”. For the Darwinists want to have their intelligently designed cake and eat it, too. “In the case of living machinery,” says Dawkins, “the ‘designer’ is unconscious natural selection, the blind watchmaker.”[xlvi] That quote comes from a book he wrote about our cosmic creator, titled The Blind Watchmaker.
Just stop and consider what this brilliant scientist wants students to simply take for granted. For starters, he wants to take Natural Laws for granted—laws that are so deep and profound and complex that the brightest minds can spend a lifetime fixating on them. Yet, according to Dawkins, the author, or “designer”, of such laws is a Cosmic Blind Watchmaker.
Next, consider how important the words randomness and chance are to Dawkins’ explanation. Although such words are absolutely essential to comprehending evolutionary theory, they are only coherent in the context of a dictionary of at least, say, 5000 words, and they only have meaning in a much broader context of non-randomness and precise order—just like the word three only has meaning in the context of a number line, or just like you can only have the chance of drawing a lucky lottery ticket if there are millions of other people contributing to that lottery within a much larger, well-ordered, rational, creative, non-random economy.
Yet again, the author of this dictionary, the “designer” of all this order, is a Cosmic Blind Watchmaker who is not only blind but also deaf, mute, and as senseless as a block of wood. And the Darwinists are assuming the authority to discover and reveal to the world who this Blind Watchmaker is, and to reveal what the words in his dictionary mean, and to reveal his/her/its explanations for all of existence.
On the one hand, that’s a lot to take for granted. On the other hand, we have seen this many, many times before all throughout human history. For it is exactly like those people who assume the authority to carve a senseless block of wood into a mysterious image, plate it with gold, embed it with jewels, set it high on a pedestal, and then declare unctuously unto mankind, “Behold your Creator!”
This is the norm for the modern scientific establishment—whether for physicists like Hawking or for biologists like Dawkins. As the editors of Evolution News & Science Today put it:
A minimal cell packs a ton of functional information. How did it get there? Darwinians, who wish to account for all of life without design, are obligated to believe that information creates itself. In the past they tended to be more reticent about the problem, realizing that it was a tremendous challenge even to get to a theoretical replicator. Lately, some of them are employing a bolder tactic: simply assert that information creates itself.[xlvii]
Textbooks that write themselves?
The question remains: how could an immaterial mind possibly author the actions of a material brain? How could something nonphysical ever “push” something physical—even if that nonphysical thing was a tiny neuron? Behe put it this way:
At the most basic level of matter, the quantum level, events are understood by most physicists to be physically uncaused. Perhaps there are nonphysical events that can affect quantum ones in a purposeful way, in turn affecting the brain.[xlviii]
That’s what we will look at next, when we ask, “How do we perceive words?”
[v] “There are three apples on the table and four apples in the refrigerator, so you have seven beautiful, juicy, red delicious apples! How wonderful! Do you want me to make you an apple pie?”
[vii] Michael Egnor, “Neurosurgeon Outlines Why Machines Can’t Think,” (Mind Matters, July 17, 2018) https://mindmatters.ai/2018/07/neurosurgeon-outlines-why-machines-cant-think/
[xi] Ray Kurzweil, How to Create a Mind: The Secret of Human Thought Revealed (New York: Penguin Books, 2012), 209-210.
[xii] Michio Kaku, The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind (New York: Doubleday, 2014), Kindle edition.
[xiii] John von Neumann, Mathematical Foundations of Quantum Mechanics, published 1932, translated from the German edition by Robert T. Beyer in 1949 (Princeton, NJ: Princeton University Press, 1983), 418-419.
[xiv] Ibid., 418.
[xv] Henry Stapp, Quantum Theory and Free Will (Springer International Publishing, 2017), Kindle Locations 870-874.
[xvi] Michael S. Gazzaniga, The Consciousness Instinct (New York: Farrar, Straus and Giroux, 2018), 7.
[xvii] Michael S. Gazzaniga, The Consciousness Instinct (New York: Farrar, Straus and Giroux, 2018), 204.
[xviii] Ibid., 206.
[xix] Michael Egnor, “A Map of the Soul,” First Things, June 29, 2017. https://www.firstthings.com/web-exclusives/2017/06/a-map-of-the-soul.
[xxi] Michael S. Gazzaniga, The Consciousness Instinct (New York: Farrar, Straus and Giroux, 2018), 7.
[xxii] Ibid., 232.
[xxiii] Ibid., 238.
[xxv] Sean Carroll, The Big Picture (New York: Dutton, 2016), 310-311.
[xxvi] Ibid., 357-358.
[xxix] Science Uprising, episode 2. https://www.youtube.com/watch?v=rQo6SWjwQIk&list=PLAlYJfGzZjETnUDgYgSpOffJNKNYtnnIV&index=3
[xxxi] Benjamin Libet, “Do We Have Free Will?”, Journal of Consciousness Studies 6, no. 8-9 (1999): 47-57. (https://static1.squarespace.com/static/551587e0e4b0ce927f09707f/t/57b5d269e3df28ee5e93936f/1471533676258/Libet%2C+Do+We+Have+Free+Will%3F.pdf)
[xxxii] Richard Dawkins, Science in the Soul: Selected Writings of a Passionate Rationalist (New York: Random House Publishing Group, 2017) Kindle Locations 786-794.
[xxxiv] Thomas Suddendorf, “Inside Our Heads: Two Key Features Created the Human Mind,” Scientific American 319, no. 3 (September 2018): 43-47.
[xxxv] Justin Halberda, “Logic in babies: 12-month-olds spontaneously reason using process of elimination,” Science 359, no. 6381 (March 16, 2018): 1215.
[xxxvi] Bert Hölldobler and E.O. Wilson, The Superorganism: The Beauty, Elegance, and Strangeness of Insect Societies (New York: W.W. Norton & Company, 2009), 58-59.
[xxxvii] Ibid., 60.
[xxxviii] James Gorman, “Wolf Puppies Are Adorable. Then Comes the Call of the Wild.” New York Times, October 13, 2017. https://www.nytimes.com/2017/10/13/science/wolves-dogs-genetics.html
[xli] Ibid., 47.
[xliii] Richard Dawkins, The God Delusion (Boston, MA: Houghton Mifflin Harcourt, 2011), 34.
[xliv] Richard Dawkins, Science in the Soul (New York: Random House, 2017), Kindle Locations 636-644.
[xlv] Richard Dawkins, The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe Without Design (London: Folio Society, 2007), 38.
[xlvii] Michael J. Behe, Darwin Devolves: The New Science About DNA that Challenges Evolution (New York: HarperOne, 2019), 277.
* Abacus Photo by Crissy Jarvis on Unsplash.