Who perceives the meaning behind the medium?
Can an abacus do arithmetic? Does it even know what numbers are? Of course not. It’s just a tool that we use to keep track of the data that we perceive.
What if we use a stick to move the beads on the abacus? Or what if we hook up a system of levers attached to symbol keys so that, for example, if we push the symbol 5 twice we get ten beads on the abacus? Now is the abacus doing arithmetic? Of course not. We are still doing the arithmetic and using the abacus to keep track of the data, just like we can use our fingers or our laptops to keep track of data.
What if we go down to the river and build a water wheel (a task which would involve a good deal of arithmetic and other math), and then use that to turn a generator (which would require more math)? Then, instead of using levers to move the abacus beads we use electric currents, and instead of using beads we use LED lights? Now has our little machine done arithmetic? No. It is still, in principle, no different than the abacus—just a tool that we use to process the information that we perceive. Just stop and consider, starting from absolute scratch, how many tens of thousands (at least!) of man-hours it would take to build such a contraption—hours of people thinking, counting, creating plans to mine ore and to fashion lights and to build parts according to rational designs.
Well, what if, instead of pushing buttons, we program a recorder to match certain sound waves with certain currents—a rational system that would be the same, in principle, as the system which matched certain symbol keys with certain levers. Or let’s just go ahead and build a robot that can walk into a kitchen in Texas and record that there are three apples on the table and four in the refrigerator. Then the robot calls someone in Spain and declares, “Hay tres manzanas en la mesa y cuatro manzanas en el refrigerador, ¡así que tienes siete hermosas, jugosas y deliciosas manzanas rojas! ¡Qué maravilloso! ¿Quieres que te haga una tarta de manzana?”[i] (It could have said the same thing in 1,263 other languages.) Now would we finally have a machine that can perceive numbers and comprehend arithmetic?
Of course not. We would have a machine that certainly appears to comprehend arithmetic just like we have movies with dinosaurs that certainly appear to be real, just like we have smartphones that project beautiful singing, just like we have plant-based “cheeseburgers” that taste like cows. Keep in mind that to build a robot that can send electromagnetic waves to an orbiting satellite—that requires so many years of competitive research and development (including the discovery of all sorts of complex mathematics) that we are surely into the billions of man-hours now. We can be impressed by movies that cost hundreds of millions to make, and we can be impressed by robots. But don’t let the CGI fool you. The dinosaurs cannot bite and the robots cannot count.
But why? Why can we declare with absolute certainty that such a robot does not comprehend arithmetic? In answering that question we could launch into a deep philosophical discussion here about the meaning of the word comprehend and about the nature of intelligence and epistemology, etc. Or we could save our spleens, side-step the philosophical banter, and simply say that the reason no physical machine can perceive arithmetic is because numbers are immaterial/nonphysical. If no light waves are emanating from something or bouncing off of it, then there is nothing for a robot to “see”. And if no sound waves are emanating from something or bouncing off of it, then there is nothing for a robot to “hear”. Etc. There is really no other explanation needed. No matter how powerful artificial intelligence (AI) and big data become, any appearance of intelligence will always, only be artificial. As Robert J. Marks, Electrical & Computer Engineering Professor at Baylor University, put it, “Big data is ignorant of meaning.” He says that those who argue that eventually AI will be able to perceive any sort of meaning in a collection of data are appealing to an algorithm-of-the-gaps. “They say it can’t be done now, but maybe they’ll develop computer code to do it someday. Don’t hold your breath.”[ii]
Computers are simply fantastic tools. We can use robots to discover things that we could not otherwise discover—such as the chemistry on Mars’s surface—just like we can use supercomputers to solve problems that we could not otherwise solve, just like we can use cranes to lift things that we could not otherwise lift, just like we can use telescopes to see things that we could not otherwise see. But in all cases we are the ones perceiving the information. Our tools don’t perceive it. As neurosurgeon Michael Egnor put it:
The hallmark of human thought is meaning, and the hallmark of computation is indifference to meaning. That is, in fact, what makes thought so remarkable and also what makes computation so useful. You can think about anything, and you can use the same computer to express your entire range of thoughts because computation is blind to meaning. Thought is not merely not computation. Thought is the antithesis of computation. Thought is precisely what computation is not. Thought is intentional. Computation is not intentional.[iii]
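Egnor’s point—that computation is blind to meaning—can be seen in even the most trivial program. The sketch below is purely illustrative (the variable names are my own hypothetical labels, not anything from a real system): the very same routine runs identically whether its inputs are taken to count apples, dollars, or nothing at all. Any meaning lives entirely in the human reading the output.

```python
# Illustrative sketch: the machine shuffles symbols; it attaches
# no meaning whatsoever to them.
def add(a: int, b: int) -> int:
    # Identical electrical events occur no matter what the
    # numbers are "about"—or whether they are about anything.
    return a + b

apples_on_table = 3
apples_in_fridge = 4
total = add(apples_on_table, apples_in_fridge)  # we call this "seven apples"

meaningless = add(3, 4)  # the exact same computation, interpreting nothing

print(total, meaningless)  # the interpretation exists only in our minds
```

The two calls are indistinguishable to the machine; only we supply the difference between “seven apples” and an uninterpreted symbol.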
Now just as we asked whether robots will ever be able to count, we could do a similar thought experiment regarding whether they could ever see. We could go back to the beginning and instead of considering the perception of arithmetic we could consider the perception of a sunrise. Does an old-fashioned camera—the kind that used film—perceive a sunrise? What if, instead of using an old-fashioned camera, we used a digital camera that recorded the picture on a flash drive? Or what if the sunrise were recorded by a high-definition 3-D movie camera attached to a robot in Acapulco that could then send electromagnetic waves to a satellite in order to call someone in Siberia and begin singing, “О, какое прекрасное утро!”?[iv] Would you pick up the phone and declare, “That contraption can see! It can see! It’s ALIVE!!!”?
Many materialists will, in fact, stubbornly declare that if something appears to be conscious, then it is conscious. As Google technologist Ray Kurzweil put it, “My own leap of faith is this: Once machines do succeed in being convincing when they speak of their qualia and conscious experiences, they will indeed constitute conscious persons.”[v] Qualia here refers to the perception of information, which is defined as “a quality of matter”. And even though the word quality is abstract, the faith that Kurzweil is referring to is faith in the materialistic, monistic worldview.
If you think that something appears to be a conscious being, then it is a conscious being? Just let that sink in for a moment. Why would anyone draw that conclusion? Well, if you presuppose that the brain can perceive information—if you presuppose that we are our brains—then you can assume that somehow, some way, silicon circuitry will someday be able to do the same things that our neural circuitry can do. Therefore, when machines appear to perceive something, then they perceive it.
But let’s back up a bit: do we really need to presuppose that the brain perceives meaning? Kurzweil et al. would never, under any circumstances, ask this question. Not because it’s not a good question, but because they have already presupposed the answer. Since it would be impossible for a naturalist to explain how an immaterial human soul evolved, and since evolution must be true, therefore an immaterial human soul did not evolve. We must be our brains. Therefore, the brain must perceive meaning. Therefore, we will not ask this question.
Let’s ask it anyway.
What about the human brain?
Can the human brain perceive the meaning behind the media? Well, if we followed the same logic that we used above for computers, we would invariably come to the same conclusion: there is no way the three-pound organ in our skulls could perceive nonphysical phenomena. It could not use the five senses to perceive something that cannot be directly or indirectly seen, heard, felt, tasted, or smelled. As neuroscientist Stanislas Dehaene puts it, “If these objects [i.e. mathematical patterns] are real but immaterial, in what extrasensory ways does a mathematician perceive them?”[vi] We must either conclude that the “objects” are physical or that the brain cannot perceive them any more than an abacus can.
Right? Let’s consider the evidence. We’ll look at three types of evidence: presumptive, circumstantial, and experimental.
Here is a question that certainly sounds like an impasse for believing in spirituality: how would something nonphysical ever “push” something physical? Isn’t that as nonsensical as asking how a square could be a circle, or how black could be white, or how the brain could use the five senses to perceive something immaterial? Wouldn’t it completely defy the laws of physics and take us into the land of unicorns and fairy tales?
Therefore, since we cannot defy the laws of physics, the scientific establishment presupposes materialism. That means that both information and the ones perceiving information simply must be physical things. No other option can be tolerated. Our minds simply have to be matter-in-motion. As Francis Crick, co-discoverer of the double-helix structure of DNA, put it, “‘You,’ your joys and your sorrows, your memories and ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules.”[vii] Did you notice the quotation marks he put around you? As he explained in an interview with The New York Times, “The view of ourselves as ‘persons’ is just as erroneous as the view that the Sun goes around the Earth.”[viii]
Now although Crick’s worldview is the norm in university science departments today, there are certainly a number of rogue scientists who find this stance to be patently absurd. As one Harvard professor and a couple of his colleagues who dared to challenge the establishment put it, “It amounts to saying that table salt, once it enters the body, finds a way to dissolve in the blood, enter the brain, and in so doing, learns to think, feel, and reason.”[ix]
Table salt learning to think? What in the world are they talking about? Well just follow their line of reasoning: if our minds are physiological phenomena comprised of neurons, chemicals, and other such biological stuff, then all of the molecules that comprise that stuff come from the food we eat—including the molecules that you might acquire while eating your basic cheeseburger and fries sprinkled with table salt. So according to a materialistic worldview, some of those salt molecules could, later down the line, be bound into thoughts. That’s ultimately what it would mean to say that “we” are our brains and that our conscious minds are physical things.
Although that may sound bizarre, bizarreness isn’t supposed to faze scientists, right? They strive to ignore incredulity and just stick to the facts. After all, heliocentrism, for example, probably would have sounded bizarre to the likes of Plato or Laozi. Or imagine trying to explain to your average 5th century BCE farmer what galaxies are. If we want to understand our universe, we have to keep an open mind. So yes, according to MIT physicist Max Tegmark, table salt can learn to think, feel, and reason:
I approach this hard problem of consciousness from a physical point of view. From my perspective, a conscious person is simply food, rearranged. So why is one arrangement conscious, but not the other? Moreover, physics teaches us that food is simply a large number of quarks and electrons, arranged in a certain way. So which particle arrangements are conscious and which aren’t?[x]
Those are the questions they are fervently working to answer today. Nevertheless, there is still a dinosaur in the room—a really, really big one, even an Argentinosaurus. There is a profoundly enormous mystery that they never, ever talk about in the laboratory and that they insist can only be discussed in the philosophers’ lounge: any and all information is immaterial. Yet no matter how hard they try, whole herds of Argentinosauruses keep finding their way into the laboratory. As John D. Barrow, professor of mathematical sciences at Cambridge University, put it:
A mystery lurks beneath the magic carpet of science, something that scientists have not been telling, something too shocking to mention except in rather esoterically refined circles: that at the root of the success of twentieth-century science there lies a deeply ‘religious’ belief—a belief in an unseen and perfect transcendental world that controls us in an unexplained way, yet upon which we seem to exert no influence whatsoever.[xi]
So if asking “How would something nonphysical ever ‘push’ something physical?” seems like an impasse for believing in spirituality, then asking “How would something physical ever ‘comprehend’ something nonphysical?” is an equal impasse for believing in materialism. Are we missing something here?
That would be a very generous way to put it. As Professor Barrow said, there is something that scientists have not been telling.
Indeed, the fact is that for a century we have known that there is a very coherent scientific explanation for how a nonphysical mind can “push” the gray matter of the brain. This explanation comes not when we ask “How does the mind direct matter?” but instead when we ask “When does the mind direct matter?” Now the scientists who first discovered the explanation weren’t asking either of those questions at all. They were doing something completely different, but when they compiled the data that conclusion leapt off the page, completely out of the blue. As one of the world’s leading physicists, Henry Stapp, put it, “The strangle-hold of materialism was broken simply by the need to accommodate the empirical data of atomic physics, but the ontological ramifications went far deeper, into the issue of our own human nature and the power of our thoughts to influence our psycho-physical future.”[xii]
We will explore that explanation later. For now, suffice it to say that the presumptive evidence for a physiological mind is vacuous. To explain away the mystery of comprehension—what Einstein called “the eternal mystery of the universe”—by presupposing materialism is not just arbitrary. It’s artificial—as artificial as those plant-based cheeseburgers.
Let us now consider the circumstantial evidence for believing that we are our brains and that consciousness and rationality are physiological phenomena.
When the circumstances of a person’s brain change—such as due to injury—do their mind and personality change as well? If yes, many argue that this is evidence that we are our brains.
Michael S. Gazzaniga, professor of psychology at the University of California, Santa Barbara, and head of the SAGE Center for the Study of the Mind, has for decades led pioneering studies in how the two halves of the brain interact. When he explains his work with split-brain patients, he can give the layperson a basic understanding of the brain’s breathtaking complexity.
But when it comes to explaining consciousness, as with everyone else, Gazzaniga begins with the materialistic presupposition. In his recent book, The Consciousness Instinct, he joined the trend in scientific studies of consciousness by seeking “to examine how matter makes minds”.[xiii] He says that the most important science to consider when studying the mind is actually the study of split-brain patients—those people whose two brain halves have been severed so that the two sides have no connection with each other. When this happens, each side continues to function as if it were unaware that the other half ever existed at all. For example, if the left side of the brain perceives something, the right side is completely unaware of it—and vice versa.
Gazzaniga says this means either that a person experiences simultaneously different perceptions, or that a person is literally split into two different minds inhabiting the same body. He is in the latter camp.
The new “yous” have two minds with two completely separate pools of perceptual and cognitive information. It is just that only one of the minds can readily speak. The other initially cannot. Perhaps, after many years, it will be able to produce a few words. More crazy yet, in the early months after surgery, before the two hemispheres get used to sharing a single body, one can observe them in a tug-of-war.[xiv]
He compares the experience of split-brain patients to a game of Whac-a-Mole—the arcade game where a mole puppet repeatedly pops up through different holes and the player needs to whack the mole with a mallet before it disappears again.
We are aware, which is to say conscious, of the processed information as it pops up after being processed in a particular hemisphere. But is that because each neural process activates a “make-it-conscious network” (which would have to be present in each hemisphere), or does each process in and of itself possess the neural capacity to appear conscious? I am in the latter camp.[xv]
This is an example of the type of circumstantial evidence that monists will offer to support their assumptions. It’s an interesting argument and all the research is truly fascinating. But there is plenty of room for skepticism and thus for choosing the other school of thought, which says that a split-brain patient, as Gazzaniga put it, “experiences simultaneously different perceptions.”
That is to say that the observations would also be consistent with the notion that a nonphysical consciousness (a soul) could be doing its best to manage a damaged brain—to play the game of Whac-a-Mole—similar perhaps to how a driver tries his best to operate a damaged car. I’ve had a beater sitting in my driveway for many months (since one of my sons will get his license soon) because several of the transmission sensors aren’t working. Although it has both a good transmission and a good engine, due to the bad sensors and the even worse mechanic (that would be me) the two do not communicate well with each other at all. So whenever I do drive the contraption it will start shifting randomly, at which time I have often been heard muttering, “This…thing has a mind of its own!” Of course, the human brain is about a billion times more complex than an automobile.
Now that’s an exaggerated analogy. In fact, according to Dr. Michael Egnor, a neurosurgeon and professor of neurological surgery and pediatrics at Stony Brook University, you could easily work and interact with a split-brain patient without ever knowing or suspecting that anything is different about them. He says that the changes resulting from split-brain surgery are so subtle that most people wouldn’t notice them. So he’s in the opposing camp from Gazzaniga, arguing that split-brain patients maintain a unity of mind even as they struggle to manage their handicap. “The post-operative patients experienced peculiar perceptual and behavioral changes, but they retained unity of personal identity—a unified intellect and will.”[xvi]
Egnor says the circumstantial evidence not only points to a unity of mind in split-brain patients, but also points to the mind being an immaterial phenomenon.
I have scores of patients who are missing large areas of their brains, yet who have quite good minds. I have a patient born with two-thirds of her brain absent. She’s a normal junior high kid who loves to play soccer. Another patient, missing a similar amount of brain tissue, is an accomplished musician with a master’s degree in English.[xvii]
Although Gazzaniga certainly disagrees that the evidence points to something spiritual, there is one conclusion that all the researchers—materialists and non-materialists alike—can agree on: a person’s conscious mind pervades all parts of the brain. “There is not one centralized system working to produce the grand magic of conscious experience,” Gazzaniga says. “It is everywhere, and you can’t seem to stamp it out, not even with a wide-ranging brain disease like Alzheimer’s.”[xviii]
In the end, although the circumstantial evidence might be debatable, the burden of proof still lies with the materialists. For they are the ones claiming to have an explanation. Just keep the Argentinosauruses at bay.
In treating brain diseases and injuries, Egnor says scientists have discovered compelling evidence in support of an immaterial mind. One leading neurosurgeon, Wilder Penfield (1891-1976), switched from monism to dualism based on his work with thousands of patients.
First, Penfield realized that although he could surgically stimulate many physical effects in his patients, he could never cause rational responses. Neurosurgeons often operate on their patients while they are awake because the brain itself cannot feel pain. So as Penfield was operating he would stimulate parts of a patient’s brain that would cause them to move their arm, feel a tingling in their leg, see a flash of light, etc.
But Penfield noted that, in probably hundreds of thousands of different individual stimulations, he never once stimulated the power of reason. He never stimulated the intellect. He never stimulated a person to do calculus or to think of an abstract concept like justice or mercy.
All the stimulations were concrete things: Move your arm or feel a tingling or even a concrete memory, like you remember your grandmother’s face or something. But there was never any abstract thought stimulated.[xix]
Similarly, Penfield noted that although many of his patients had a variety of physical seizures, no one ever had an intellectual seizure.
If arithmetic and logic and all that abstract thought come from the brain, every once in a while you ought to get a seizure that makes it happen. So [Penfield] asked rhetorically, why are there no intellectual seizures? His answer was, because the intellect doesn’t come from the brain.[xx]
Penfield also noted that he was never able to stimulate a person’s will. For example, by touching part of their brain with a surgical instrument he could make their arm move. And then he would ask the patient to move their arm themselves. He would alternate back and forth between stimulating arm movement and asking the patient to move their arm. But, as Egnor explains, Penfield realized that he could never fool the patient. They always knew whether it was Penfield or they themselves moving the arm. “So he said the will was not something he could stimulate, meaning it was not material.”[xxi]
Regarding free will, neuroscientist Benjamin Libet (1916-2007) did some interesting research which has been misinterpreted on a vast scale. He explored what neuroscientists call a readiness potential, which is activity in the brain that precedes muscle movement. Since this brain activity occurs a split second prior to a person making a decision, many have argued that this provided evidence that free will is an illusion. We just think we’re making decisions, but in fact our brains have already decided something for us. Egnor is incredulous at how this interpretation completely contradicts Libet’s own conclusions.
Libet himself explicitly endorsed the reality of free will, emphatically he endorsed the reality of free will. And Libet pointed out that his research unequivocally supports the reality of libertarian free will. But his experiments are described very often both in the scientific literature and in the popular press as supportive of materialism—which is something that they don’t support and something that Libet made very clear was not his conclusion.[xxii]
Now, to be sure, Libet did follow the naturalistic trend of presupposing monism. He was indeed searching for experimental evidence that the brain produces consciousness. But the point is that, contrary to popular reports, he did not find any evidence that free will is an illusion. “My conclusion about free will,” he wrote, “one genuinely free in the non-determined sense, is then that its existence is at least as good, if not a better, scientific option than is its denial by determinist theory.”[xxiii]
Aside from misrepresentations of Libet’s work, I’m afraid I actually could not find any experimental evidence for the materialistic view of consciousness. This could be because they start with a bland, trivial, obvious presupposition that clearly does not need to be proven. Or it could be because I have done a poor job of investigating the matter. Regardless, always grazing in the background is that friendly herd of Argentinosauruses, and I assure you that you will not find a scientific theory as to how the brain perceives them—how the organs in our skulls physiologically perceive nonphysical phenomena. Again, how does the brain perceive something that cannot be directly or indirectly seen, heard, felt, tasted, or smelled? They will pull every rhetorical trick in the book (which we will explore when we ask “Why?”) to avoid asking that question.
It almost seems like an unfair question, doesn’t it? My point is that all the evidence supports what we intuitively and logically know to be true: of course the brain cannot perceive immaterial phenomena. We do perceive them, but we are not our brains. We are just as nonphysical as is information (the meaning behind the media). Now we will come back to the question of how a nonphysical mind could control the physical brain. But before we go there, let’s put this conclusion into context by considering whether other animals can perceive meaning. If we nonphysical minds can use our brains, in principle, the same way that we use our laptops and our rocket ships, do other animals use their brains in the same way? Do other animals perceive meaning?
What about other animals?
When animals exchange information with each other, are they communicating in the way that we humans communicate? Or, instead, are they simply machines that process information in the way a laptop or a satellite processes information? Do animals perceive information any better than robots do?
Scientists have concluded that there is a categorical difference between human language and the method of communication of any other species. As evolutionary biologist Richard Dawkins, an emeritus fellow of New College, Oxford, and the University of Oxford’s Professor for Public Understanding of Science from 1995 until 2008, put it:
Humans are unique in many ways. Perhaps our most obviously unique feature is language. Whereas eyes have evolved between forty and sixty times independently around the animal kingdom, language has evolved only once.[xxiv]
As our use of language is unique, so also may be our use of mathematics. Although some species may at first appear to comprehend basic math, upon further study scientists have often concluded that they are simply doing what they are programmed to do. Some claim that ravens, for example, comprehend arithmetic. However, Thomas Suddendorf, professor of psychology at The University of Queensland, Australia, says the evidence for that is far from conclusive. He said the studies of ravens have involved training through reinforcement, and that human cognition is unique:
Yet the achievements of the ravens, as well as cognitive feats of apes in other studies, can be explained in simpler ways. It turns out that animal and human cognition, though similar in many respects, differ in two profound dimensions. One is the ability to form nested scenarios, an inner theater of the mind that allows us to envision and mentally manipulate many possible situations and anticipate different outcomes. The second is our drive to exchange our thoughts with others. Taken together, the emergence of these two characteristics transformed the human mind and set us on a world-changing path.[xxv]
Similarly, he said that although many animals use tools, and that some even make tools, there is no evidence that the animals recognize or are aware of what they’re doing.
Chimpanzees in Senegal have been reported to make rudimentary spears that they thrust into tree hollows to kill bush babies. But there is as yet no observation that they practice thrusting, let alone throwing. Unlike humans, they could not benefit from the invention of a spear thrower. You can safely give them one of yours; they would not use it as we do.[xxvi]
Based on our current scientific understanding, we could skeptically conclude that animals are programmed to adapt to their environment in the same way that we can program Mars robots and other machines to adapt to theirs. Humans, by contrast, are different from infancy. Justin Halberda, associate professor of psychology and brain sciences at Johns Hopkins University, says that even infants display a unique ability to reason. Referring to research done by other scientists (Cesana-Arlotti et al.), he elaborates on this mystery:
The careful crafting of stimuli and clever analyses of infants’ spontaneous looking behavior by Cesana-Arlotti et al. show us that infants have the capacity to reason by process of elimination. By contrast, whereas nonhuman animals such as dogs facing similar situations of ambiguity may ultimately form the right conclusion, they appear to arrive at this hunching using an associative rather than logical process.[xxvii]
In other words, whereas dogs and other animals may appear to make logical deductions, the evidence actually suggests that they are instead running on instinct when adapting to different situations.
As another example, consider ants, which can have highly organized caste systems and divisions of labor. Although we humans look at an ant colony and identify a governmental “queen”, any appearance of centralized authority or control is an illusion, according to Bert Hölldobler, Professor of Life Sciences at Arizona State University and former professor of zoology at Harvard University, and Edward O. Wilson, a Harvard professor for nearly five decades. They say the best way to understand social insects (ants, termites, bees, and wasps) is to see them not as societies but as computer programs. Instead of operating according to a central command, there is self-organization in which each insect acts independently of the others:
To add one last concept from computer science, social insect workers are cellular automata, defined as agents programmed to function interactively as a higher-level system. They have this trait because their colony as a whole lacks command and control by a still higher-level system. It therefore must be self-organized. Through the combined senses and brains of its members, the colony operates as an information-processing system. The environment challenges it with problems: the workers must locate an adequate nest site, find the right food items and bring them home, establish home ranges and territories, defend against enemies, and care for the helpless young. These disjunct problems press on the colony at almost all times. The algorithms of individual development and behavior contain the solutions to all of them. With algorithms, the colony masters the problems natural selection has designed it to solve. The requisite information is distributed among the colony members. Thus, a distributed colony intelligence is created greater than the intelligence of any one of the members, sustained by the incessant pooling of information through communication.[xxviii]
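The “cellular automaton” picture Hölldobler and Wilson describe can be sketched in a few lines. In the toy model below (all thresholds, rates, and task names are my own hypothetical parameters, chosen purely for illustration), each worker follows one fixed local rule—take up a task whenever its demand exceeds the worker’s built-in threshold—and a colony-level division of labor settles out with no central command and no worker “aware” of the whole.

```python
import random

# Illustrative sketch: workers as simple agents ("cellular automata").
# No agent sees the colony; each just applies one fixed local rule.
random.seed(0)

TASKS = ["forage", "nurse", "build"]
# Each worker's fixed response threshold per task is its entire "program".
workers = [{t: random.uniform(0.2, 0.8) for t in TASKS} for _ in range(30)]
stimuli = {t: 0.5 for t in TASKS}  # colony-wide demand for each task

for _ in range(50):  # time steps
    counts = {t: 0 for t in TASKS}
    for w in workers:
        # Local rule: work on the most pressing task above your threshold.
        doable = [t for t in TASKS if stimuli[t] >= w[t]]
        if doable:
            counts[max(doable, key=lambda t: stimuli[t])] += 1
    for t in TASKS:
        # Work reduces demand; unmet demand grows. No worker "knows" this;
        # the balance emerges from the interaction of all the local rules.
        stimuli[t] = min(1.0, max(0.0, stimuli[t] + 0.05 - 0.01 * counts[t]))

print({t: round(v, 2) for t, v in stimuli.items()})
```

Run it and the task demands hover near an equilibrium—an orderly allocation of labor that exists nowhere in any individual worker’s rule, which is exactly the sense in which the colony’s “intelligence” is distributed.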
They give the example of the fungus-growing termites of Africa (similar to the leaf-cutting farmer ants of North America), which build massive, intricately designed nests with an architecture that regulates both the temperature and the composition of the interior air. Through precise engineering, the entire nest functions as an air conditioning system that keeps the central living area within a degree of 30°C and the carbon dioxide concentration between 2.6 and 2.8 percent.
While such emergent properties are marvelous to behold, their engineering is not intrinsically mysterious. The extremes of higher-level traits may at first appear to have a life of their own, one too complex or fragile to be reduced to their basic elements and processes by deductive reasoning and experiment. But such separatist holism is in our opinion a delusion, the result of still insufficient knowledge about the working parts and processes. An important merit of insect sociobiology, as opposed to vertebrate and especially human sociobiology, is that the colony organizations it addresses contain a large array of emergent phenomena simple enough to be explained by scaling up from the behavior of the constituent elements. This is the advantage provided us by the small brains of the social insects and the general quick and simple decisions they must make with limited algorithms.[xxix]
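Hölldobler and Wilson borrow the term “cellular automata” from computer science, and the borrowed idea is easy to demonstrate: agents that each follow a purely local rule can still produce order at the level of the whole system. The sketch below is my own toy illustration, not anything from their book. Each cell of a one-dimensional row repeatedly adopts the majority state of its three-cell neighborhood, and stable “domains” emerge across the row even though no cell’s rule mentions the global pattern:

```python
# A one-dimensional cellular automaton: each cell repeatedly adopts the
# majority state of its three-cell neighborhood (wrapping at the edges).
# No cell "knows" the whole row, yet stable domains of 0s and 1s emerge.

def step(cells):
    n = len(cells)
    return [
        1 if cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= 2 else 0
        for i in range(n)
    ]

row = [1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1]
for _ in range(10):
    row = step(row)
# The row settles into a stable pattern: step(row) no longer changes it.
```

Within a few steps the row settles into solid blocks of 0s and 1s—global structure with no central command anywhere in the program, which is the sense in which a colony can “compute” without a governing queen.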
The point, again, is that animals are most likely only doing what they were programmed to do. They are not actually using logic or reason. It is entirely possible that animals are not sentient beings, having no more ability to comprehend arithmetic than does a calculator or a math textbook, and having no more ability to comprehend the meaning behind the medium than does a robot or perhaps a communications satellite. And whereas all the materialistic arguments for why we are our brains fall apart when applied to humans, they may work quite well in explaining the behavior of all other species.
Wait, animals might not be sentient beings?! Seriously? This understanding of things will be particularly hard to stomach if we think of our pets. Nevertheless, let’s cut to the chase: is it possible that dogs are simply machines and that they do not have conscious minds like we do? What would that mean?!
Well, consider what it is that dogs are really good at compared to other animals. For example, spiders are very good at building webs to catch prey. That’s their specialty. That’s how they survive and thrive. Is it possible that spiders, like ants, have no idea what is actually happening? Is it possible that they are not sentient beings? We know they can’t communicate like Charlotte in Charlotte’s Web, but is it possible that they don’t even perceive basic web geometry? Could they simply be amazing little machines?
Or consider beavers: their specialty is building dams. Is it possible that they don’t perceive engineering any more than a spider perceives geometry, or any more than an ant perceives colonial government or than a laptop perceives calculus? Did you know that if you play a recording of running water next to a beaver, it will immediately start digging?
Now back to dogs: what is their specialty? They cannot build webs or dams. For that matter, they cannot survive in the wild on their own. (Urban areas are not wild so they don’t count.) They don’t know how to hunt in packs or to build dens for the winter. So what is the one amazingly complex thing they can do—as amazing as spinning a web or building a dam or running in a pack or building an elaborate nest with a precisely controlled air-conditioning system? What is the dog’s specialty?
They can look people in the eye and grovel. That’s how we humans have re-engineered their social programming. By contrast, their ancestors, gray wolves, can never do that. If a wolf ever looks you in the eye, be very afraid. Even if a wolf spends 24/7 with a (non-bathing!) human from the day of its birth, it will not bond with that human. Wolves won’t show the slightest concern for their human caretakers. As New York Times reporter James Gorman put it:
If one of the people who has bottle-fed and mothered the wolves practically since birth is injured or feels sick, she won’t enter their pen to prevent a predatory reaction. No one will run to make one of these wolves chase him for fun. No one will pretend to chase the wolf. Every experienced wolf caretaker will stay alert. Because if there’s one thing all wolf and dog specialists I’ve talked to over the years agree on, it is this: No matter how you raise a wolf, you can’t turn it into a dog.[xxx]
By contrast, Gorman says, “Even street dogs that have had some contact with people at the right time may still be friendly.”[xxxi] That’s what we have bred them to do.
Michael Behe, professor of biochemistry at Lehigh University in Pennsylvania, says that dogs are broken wolves. That is to say that the mutations that turned a wolf into a dog were largely negative. Even changes that we might see as advantageous, such as the strength of a bulldog, result from destructive mutations.
Most of them break or damage preexisting genes. For example, increased muscle mass in some breeds derives from degradation of a myostatin gene. We also know what mutations cause a yellow coat [loss of FCT of melanocortin 1 receptor], short tails [loss of FCT of the protein coded by the mutated T gene], even the lovable friendliness of dogs towards humans [disruption of genes GTF21 & GTF2IRD1], compared to less friendly wolves.[xxxii]
Behe said it’s highly unlikely that the broken genes could ever un-break back to their original working state. So dogs’ only hope is that we humans will always be glad to have them around. Perhaps we can think of them like good books that we can enjoy reading and re-reading. You could name your dog after other non-sentient individuals like Gandalf or Bilbo.
Looking at a dog, it feels terribly counterintuitive to think of him as a machine. So it might help again to remember how counterintuitive it would have been for a tenth-century farmer to look at the sun and believe that he was orbiting it on a spinning sphere. Or consider how counterintuitive it was for Einstein to look at a glass of water and realize that it held about a billion dollars’ worth (in today’s currency) of energy. Or remember how we talked about how you can watch The Lord of the Rings with a movie camera sitting next to you on the couch and realize that that camera not only doesn’t see the drama, it doesn’t even see the colors—any more than a book comprehends the meaning of words like blue and gray. Cameras can’t see any more than radios can sing, any more than ants can vote, any more than a communications satellite comprehends the meaning of the word hello, any more than Voyager comprehends the meaning of the word gravity, any more than Perseverance (the robot we’re sending to Mars to look for signs of life) comprehends the meaning of the word Mars, any more than Gandalf comprehends the meaning of the word ring, any more than your dog comprehends the meaning of the word sit.
What’s the alternative? What if dogs are sentient beings who can perceive the meaning behind the medium? What if they are in fact conscious and self-aware in the way we humans are? What would that mean? It would mean we would be attributing to them an immaterial nature, even a nonphysical soul. But as things are, there is plenty of room for skepticism about that conclusion, whether for dogs or chimpanzees or any other animals. Again, all the work that the materialists have done to try to show that we humans are our brains, that our minds are simply matter in motion without any immaterial nature—all that work and all those explanations could be applied to animals in arguing that they are purely physical creations.
Who is an author?
Considering authorship might be the best way to put all this into perspective. In the conclusion to What are words? I suggested that a good definition of an immaterial soul is “a rational, creative author”. So when we ask, “Who perceives words?” the clearest answer might be “that which can use words creatively”.
So, regarding animals, do we know whether any animals are creative? Do birds ever author new songs, or can we skeptically conclude that they are only singing what they have been programmed to sing? Do beavers ever author new dam designs or new uses for engineering, or is it instead plausible that they are only doing what they are programmed to do?
By comparison, computers cannot do anything creative, according to Robert J. Marks, Electrical & Computer Engineering Professor at Baylor University. “They can only take the data which they’ve been presented and interpolate. They can’t, if you will, think outside of the box.”[xxxiii] He says that although computers can be programmed to simulate art, they cannot actually create art, and any appearance to the contrary is entirely artificial. Thus, for example, if you program a computer to produce music in the style of Johann Sebastian Bach, it can shuffle data around and output music that sounds like Bach, but only Bach and never Stevie Wonder or BTS. Or if you input patterns for a Leonardo da Vinci painting, it can simulate a da Vinci painting, but it will never output Pablo Picasso or Stephen Hillenburg. “Creativity is something which computers will never accomplish.”
Marks and his team applied evolutionary computing to swarm intelligence—the kind of programming that Hölldobler and Wilson say reflects the social insects. They set up a predator-prey scenario with dweebs and bullies: the bullies would chase around and destroy the dweebs, and the team programmed the computer to maximize the lifespan of the dweeb colony.
The result that we got was astonishing and very surprising. What happened was that there was self-sacrifice that the dweebs learned. One dweeb would run around the playground and be chased by the bullies and self-sacrifice himself…They would kill the dweeb and then other dweebs would come out and self-sacrifice themselves. But by using up all of the time in order to survive, the colony of dweebs survived for a very, very long time. Which was exactly what we told it to do.
Once we looked at that we were surprised by the results, but we looked back at the code and said, ‘Yeah, of these thousands and millions of solutions that we proposed, yeah we see how this one gave us the surprise.’ So surprise can’t be confused with creativity. If the surprise is something within the domain of what the programmer decided to program, then it really isn’t creativity. The program has just found one of those millions of solutions that worked really good possibly in a surprising manner.[xxxiv]
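Marks’s point—that a surprising answer is still just one of the candidate solutions the programmer’s encoding allowed—can be seen in any toy genetic algorithm. The sketch below is my own construction, not Marks’s dweeb-and-bully code; the `fitness` function (here the classic OneMax objective of counting 1-bits) merely stands in for “maximize the lifespan of the dweeb colony”:

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

N_BITS = 12       # each candidate "strategy" is a 12-bit string
POP = 30          # population size
GENERATIONS = 60

def fitness(genome):
    """Programmer-chosen objective: count the 1-bits (the OneMax problem).
    This plays the role of 'maximize the lifespan of the dweeb colony'."""
    return sum(genome)

def evolve():
    # Random initial population: every genome is already inside the
    # search space the programmer defined.
    pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP // 2]            # selection: keep the fittest half
        children = []
        for parent in survivors:
            child = parent[:]                 # copy, then flip one random bit
            child[random.randrange(N_BITS)] ^= 1
            children.append(child)
        pop = survivors + children            # elitism: survivors carry over intact
    return max(pop, key=fitness)

best = evolve()
```

Whatever genome the search returns, it is necessarily one of the 2^12 = 4,096 bitstrings the encoding defined in advance. The program can surprise its author, but it cannot step outside the solution space its author specified—which is exactly Marks’s distinction between surprise and creativity.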
Dolphins and many other species demonstrate the same self-sacrificing behavior as the dweebs in Marks’ program, and many people see it as evidence of self-awareness. Yet, again, there is plenty of room for skepticism that the animals may only be doing what they were programmed to do.
Marks says that machine “learning” requires millions of examples. By contrast, humans learn from a few examples. Furthermore, once children have learned something they can apply that knowledge to new topics. Machines cannot. As Satya Nadella, CEO of Microsoft, put it, “One of the most coveted human skills is creativity, and this won’t change. Machines will continue to enrich and augment our creativity.”[xxxv]
Another possible sign of creativity is tool use. Yet again, there is room for skepticism, according to Robert Shumaker, biologist and coordinator of the Orangutan Language Project at the National Zoo’s Think Tank. He gives the example of the bolas spider, which is named after the throwing weapon used by South American gauchos. Bolas spiders make a ball from silk and then throw it at prey to catch them.
When an insect flies by, they throw it and it attaches to the insect because it’s sticky and they reel them in. It’s very complex. Very impressive. Very dramatic. But all available information tells us that it’s completely controlled from this animal’s genetic history.[xxxvi]
At the end of the day, our creative abilities are mired in mystery, says Noam Chomsky, Professor of Linguistics (Emeritus) at the Massachusetts Institute of Technology:
There is interesting work on precepts for language use under particular conditions—notably intent to be informative…but it is not at all clear how far this extends to the normal use of language, and in any event, it does not approach the Cartesian questions of creative use, which remains as much of a mystery now as it did centuries ago, and may turn out to be one of those ultimate secrets that ever will remain in obscurity, impenetrable to human intelligence.[xxxvii]
Who is/are The Author(s)?
Now let’s return to the question of authorship. When Hölldobler and Wilson describe social insect behavior, they call it an “emergent” phenomenon. That is to say that evolution did the programming. Similarly, Shumaker says that evolution is what programmed bolas spiders and other species to use tools. Chomsky likewise presupposes that our linguistic abilities evolved.
What would it mean to say that all these rational explanations evolved? Is that any different from saying that dictionaries evolved or that mathematics evolved? Would we humans be the first known ones to comprehend all these explanations?
Yes, according to evolutionary biologist Richard Dawkins, who teaches the materialistic assumption that our ability to think and reason creatively is nothing but matter-in-motion inside our skulls. “Human thoughts and emotions emerge from exceedingly complex interconnections of physical entities within the brain.”[xxxviii] As with everyone else, he embraces the emergent explanation with reckless abandon, stating it as a presupposition but leaving it to others to flesh out.
Natural selection of selfish genes gave us big brains which were originally useful for survival in a purely utilitarian sense. Once those big brains, with their linguistic and other capacities, were in place, there is no contradiction at all in saying that they took off in wholly new ‘emergent’ directions, including directions opposed to the interests of selfish genes. There is nothing self-contradictory about emergent properties. Electronic computers, conceived as calculating machines, emerge as word processors, chess players, encyclopedias, telephone switchboards, even, I regret to say, electronic horoscopes. No fundamental contradictions are there to ring philosophical alarm bells. Nor in the statement that our brains have overtaken, even overreached, their Darwinian provenance. Just as we defy our selfish genes when we wantonly detach the enjoyment of sex from its Darwinian function, so we can sit down together and with language devise politics, ethics and values which are vigorously anti-Darwinian in their thrust.[xxxix]
Now, speaking of words, Dawkins’ comparison of the emergence of “human thoughts and emotions” with the emergence of computers here is very common, but it is also thoroughly self-contradictory. We easily talk, as Dawkins does, about the evolution of the computer, just as we talk about the evolution of the automobile or the assault rifle or the fajita. But these are all examples of the evolution of design at the hands of rational, creative people. Improvements and developments—such as from calculating machines to telephone switchboards and supercomputer chess players—“emerge” as a result of the intentional, self-aware, concentrated, competitive mental effort of intelligent designers. However, when biologists use the term evolution, there is a sudden semantic shift: rationality, creativity, and intentionality are all explicitly excluded from the emergent process, to be replaced by randomness, chance, and “Natural Law”. That is to say that Dawkins et al. want to have their intelligently designed cake and eat it, too. “In the case of living machinery,” says Dawkins, “the ‘designer’ is unconscious natural selection, the blind watchmaker.”[xl]
Just stop and consider what this brilliant scientist wants us to simply take for granted. For starters, he wants to take Natural Law for granted. Now it takes a rational, creative person at least ten years of study before they can even begin to comprehend such laws. And they are so deep and profound and complex that the brightest minds can spend a lifetime studying them. Yet, according to Dawkins et al., the author, or “designer”, of such laws is just a Cosmic Blind Watchmaker.
Next, consider how important the words randomness and chance are to Dawkins’ explanation. Those words are absolutely essential to comprehending evolutionary theory, yet he just wants to take them for granted. But these words are only coherent in the context of a dictionary of at least, say, 5000 words, and they only have meaning in a much broader context of non-randomness and order—just like the word three only has meaning in the context of a number line, or just like you can only have the chance of drawing a lucky lottery ticket if there are millions of other people contributing to that lottery within a well-ordered, well thought-out, rational, creative, non-random economy. For randomness is the absence of order and chance is the absence of planning. You simply cannot have one unless you also have the other.
Yet again, the author of this dictionary, the “designer” of all this order, is a cosmic watchmaker who is not only blind but also deaf, mute, and as senseless as a block of wood. And the Naturalists are assuming the authority to discover and reveal to the world who this Blind Watchmaker is, and to reveal what the words in his dictionary mean, and to reveal what his explanations are for all of existence.
That’s a lot to take for granted. But we have seen this many, many times before all throughout human history. It is exactly like those people who assume the authority to take a senseless block of wood, carve it into a mysterious image, plate it with gold, embed it with jewels, set it high on a pedestal, and then declare unctuously unto mankind, “Behold your Creator!”
We cannot be the authors of the Author of life.
But we can author some things, and that sets us apart from every other species. So now let us return to the question of how. How could an immaterial mind possibly author the actions of a material brain? How could something nonphysical ever “push” something physical—even something as tiny as a neuron? That’s still an excellent question.
But it may not be the right way to ask the question. Instead of asking how, let’s ask when. As we explored when we asked the question “When do words occur?”, words precede every single quantum of the cosmos. Before anything happens there is that ever-enigmatic sentence called the wave function. And what has completely flummoxed scientists from the day they discovered this sentence is that it is absolutely unpredictable. When confronted with it, Einstein said it had to be a mistake, for “God does not play dice.” To which his colleague Niels Bohr replied, “Einstein, don’t tell God what to do.”
Let me repeat that: an entirely unpredictable sentence precedes every single quantum of the cosmos. Could there be an author behind such sentences? That doesn’t mean you need to be able to “speak quantum” in order to use your brain any more than you need to be able to speak Java in order to use your laptop. Behe put it this way:
At the most basic level of matter, the quantum level, events are understood by most physicists to be physically uncaused. Perhaps there are nonphysical events that can affect quantum ones in a purposeful way, in turn affecting the brain.[xli]
That’s what we will look at next, when we ask, “How do we perceive words?”
* Abacus Photo by Crissy Jarvis on Unsplash
[i] “There are three apples on the table and four apples in the refrigerator, so you have seven beautiful, juicy, red delicious apples! How wonderful! Do you want me to make you an apple pie?”
[iii] Michael Egnor, “Neurosurgeon Outlines Why Machines Can’t Think,” Mind Matters, July 17, 2018, https://mindmatters.ai/2018/07/neurosurgeon-outlines-why-machines-cant-think/
[iv] “Oh, what a beautiful morning!”
[v] Ray Kurzweil, How to Create a Mind: The Secret of Human Thought Revealed (New York: Penguin Books, 2012), 209-210.
[vi] Stanislas Dehaene, The Number Sense: How the Mind Creates Mathematics, Revised and Updated Edition (New York: Oxford University Press, 2011), 225.
[vii] Francis Crick, The Astonishing Hypothesis (London: Simon & Schuster, 1994), 3.
[viii] Margaret Wertheim, “SCIENTISTS AT WORK: FRANCIS CRICK AND CHRISTOF KOCH,” New York Times, April 13, 2004, https://www.nytimes.com/2004/04/13/science/scientists-work-francis-crick-christof-koch-after-double-helix-unraveling.html.
[ix] Menas Kafatos, Rudolph E. Tanzi, and Deepak Chopra, “How Consciousness Becomes the Physical Universe,” in Consciousness and the Universe, ed. by Sir Roger Penrose, Stuart Hameroff, and Subhash Kak (Cambridge, MA: Cosmology Science Publishers, 2017), Kindle Locations 2297-2300.
[x] Max Tegmark, Life 3.0: Being Human in the Age of Artificial Intelligence (New York: Alfred A. Knopf, 2017), 284-285.
[xi] John D. Barrow, Pi in the Sky: Counting, Thinking, and Being (Oxford: Clarendon Press, 1992), 1.
[xii] Ibid., 759-761.
[xiii] Michael S. Gazzaniga, The Consciousness Instinct (New York: Farrar, Straus and Giroux, 2018), 7.
[xiv] Michael S. Gazzaniga, The Consciousness Instinct (New York: Farrar, Straus and Giroux, 2018), 204.
[xv] Ibid., 206.
[xvi] Michael Egnor, “A Map of the Soul,” First Things, June 29, 2017, https://www.firstthings.com/web-exclusives/2017/06/a-map-of-the-soul.
[xviii] Michael S. Gazzaniga, The Consciousness Instinct (New York: Farrar, Straus and Giroux, 2018), 7.
[xxiii] Benjamin Libet, “Do We Have Free Will?”, Journal of Consciousness Studies 6, no. 8–9 (1999): 47–57, https://static1.squarespace.com/static/551587e0e4b0ce927f09707f/t/57b5d269e3df28ee5e93936f/1471533676258/Libet%2C+Do+We+Have+Free+Will%3F.pdf
[xxiv] Richard Dawkins, Science in the Soul: Selected Writings of a Passionate Rationalist (New York: Random House Publishing Group, 2017) Kindle Locations 786-794.
[xxv] Thomas Suddendorf, “Inside Our Heads: Two Key Features Created the Human Mind,” Scientific American 319, no. 3 (September 2018): 43-47.
[xxvi] Ibid., 47.
[xxvii] Justin Halberda, “Logic in babies: 12-month-olds spontaneously reason using process of elimination,” Science 359, no. 6381 (March 16, 2018): 1215.
[xxviii] Bert Hölldobler and E.O. Wilson, The Superorganism: The Beauty, Elegance, and Strangeness of Insect Societies (New York: W.W. Norton & Company, 2009), 58-59.
[xxix] Ibid., 60.
[xxx] James Gorman, “Wolf Puppies Are Adorable. Then Comes the Call of the Wild.” New York Times, October 13, 2017. https://www.nytimes.com/2017/10/13/science/wolves-dogs-genetics.html
[xxxvii] Noam Chomsky, What Kind of Creatures Are We? (New York: Columbia University Press, 2015), 128.
[xxxviii] Richard Dawkins, The God Delusion (Boston, MA: Houghton Mifflin Harcourt, 2011), 34.
[xxxix] Richard Dawkins, Science in the Soul (New York: Random House, 2017), Kindle Locations 636-644.
[xl] Richard Dawkins, The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe Without Design (London: Folio Society, 2007), 38.
[xli] Michael J. Behe, Darwin Devolves: The New Science About DNA that Challenges Evolution (New York: Harper One, 2019), 277.