Sunday, April 29, 2007

Crossing the Desert (or, why there's no such thing as a mindful automaton)


"One of the main misconceptions of our times is that materialism and physicalism are rational positions. Ever since the beginning of civilization, real rationalists have realized how absurd this view really is. It is about time to come to our senses." - Titus Rivas


What is consciousness?

There are those for whom the question is rather unimportant, for whom, after all, it doesn’t really matter all that much… and there are those for whom it does matter, quite a bit. Even among those who believe it does matter, there are those who believe the answer is a rather uninteresting thing, another mundane, unsurprising checkmark on our “to explain” list… while, on the other hand, there are those who believe the answer is astounding. What’s more, however, is that the way someone views the world can be profoundly influenced by where they stand in relation to these questions. As I’ve realized that, the question of what consciousness is has become more and more important to me over the past months… understanding it more deeply has become my most earnest pursuit.

I woke up one morning recently, after having spent part of the previous night thinking about the nature of consciousness, and a thought experiment came to my mind to help elucidate, not just for myself but for others also, just what consciousness is and why it is so important. Here it is: a machine and a person must both cross a desert--not an unobstructed stretch of dune, but a rocky one with large boulders--some very large ones--and other obstacles throughout. The only rule is to start on one side of the desert and wind up at a point on the other side, at which point the task is complete (similar to the DARPA Grand Challenge, except not a race). How do we describe how each of these crosses the desert? Is there actually any essential difference in the kind of description involved?

First, let’s think about the machine. It has been designed and programmed to cross this desert; it is a computer-operated vehicle whose program is called “cross the desert” (it doesn’t have to be conceived of as a simple program; let’s consider it a quite sophisticated AI application). What is involved in the complete description of how this machine crosses the desert?

The machine has wheels and motor-mechanical parts like any vehicle, and it also has photoreceptors and other sensors (including a compass or GPS), information processing units, and its programmed instructions. So it starts out with those, on one side of the desert. Its program consists, first, of “move in this direction,” which it begins to do--which is described simply as the motor-mechanics moving the machine in the direction specified by the program as guided by the built-in compass or GPS. However, very soon it encounters a large obstacle. Photons (light particles) carrying information about this obstacle enter the photoreceptors of the machine, which in turn send an electric signal--“sense data” about the obstacle--to a processing unit which processes information about the vehicle’s environment and orientation. This unit, in turn, is connected to a governing device which is programmed to alter the course of the vehicle when it receives information about such an obstacle, in order to avoid it, and then the vehicle’s orientation is corrected by these directional mechanisms to point it again toward its programmed goal/endpoint. The governing devices are connected to the motor-mechanical devices, sending signals that control the movement and direction of the wheels based on the programmed directive of avoiding obstacles and remaining in the correct orientation until the goal is reached. In this way, the machine crosses the desert, navigating around the many obstacles it encounters, and arrives at the target destination point, at which point its program turns off.
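The loop just described--sense, process, correct course, advance--can be sketched in a few lines. This is only a toy illustration: the grid world, the one-cell “photoreceptor,” and the sidestep rule are all invented for the example, not real autonomous-vehicle code.

```python
# Toy sketch of the machine's sense-process-act loop on a small grid.
# All elements here (grid, sensor range, sidestep rule) are invented
# simplifications for illustration; a dense wall of obstacles would
# defeat this naive governing rule.

def cross_desert(obstacles, width, height):
    """Drive from the west edge to the east edge, sidestepping obstacle cells."""
    x, y = 0, height // 2          # start on one side of the desert
    path = [(x, y)]
    while x < width - 1:           # programmed goal: reach the other side
        ahead = (x + 1, y)         # "compass" says: head east
        if ahead in obstacles:     # "photoreceptor" reports an obstacle
            # governing rule: sidestep north unless that cell is also blocked
            y = y + 1 if (x + 1, y + 1) not in obstacles else y - 1
        else:
            x += 1                 # clear path: advance toward the goal
        path.append((x, y))
    return path                    # the program "turns off" at the goal

route = cross_desert({(2, 2), (4, 2)}, width=6, height=5)
```

Every step is a conditional branch driven by sensor input; nothing in the loop experiences the boulder it steers around.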

This description of how the machine fulfils its task of crossing the desert is completely physical. There is no description of anything like a mind, or consciousness, or what the machine “experiences” in the course of all its sensing and directing activities while crossing the desert. Yet the description of what happened is complete; no experiential information is needed. The truth is, the machine doesn’t experience anything. There is nothing to do any experiencing. There is only mindless cybernetic operating based on programming. The machine, no matter how sophisticated otherwise, no matter what impressive kind of massively parallel processing it employs, would never have an account of what it is like to be that machine while doing this. Even if it were to make a “detailed informational model” of its surroundings and orientation, it would only be an informational model in the sense of instructions given by the sensors to the governing devices via processors. There is nothing to “experience” this model. It’s just information, being processed between two parts of the machine.

What also must cross the desert is, as I said, a person, a human being (it could also be a fox, or an ostrich, or a lizard, or any creature conceivably capable of the journey, but I’ll stick with a human for simplicity’s and conceptualization’s sake). How does the person cross the desert, and what does the complete description of that entail?

The person begins moving across the desert, with the knowledge of what she must do to take care of herself while crossing this desert and how to get to the other side (and the necessary supplies, of course). Very soon she encounters the first obstacle in her direct path. Photons carrying information about this obstacle enter the photoreceptors of the person--the eyes--which in turn send an electric signal via the optic nerve--sense data about the obstacle--to the brain. Same as the machine, right? So far. What happens now that the sense data is being sent to the brain? What must be described to cover every aspect of what’s happening, to make a complete description?

Well, we know the visual cortex is responsible for interpreting that kind of visual sense data, which information is sent to the areas associated with spatial-temporal reasoning in the cerebral cortex and will be interpreted as “a boulder is in the way”, that other areas of the cortex associated with reasoning are associated with the decision to move around that boulder, and that the primary motor cortex is associated with the actual movement of the muscles to carry out that decision. But is that all, does the visual cortex simply send that message to other information processing areas of the brain to make an auto-course-correction, so the obstacle can be avoided and the person will continue on her way across the desert without running herself into boulders?

There are many neuroscientists and philosophers who seek to explain it in that way because they think it must be explainable in that way in order to fit the classical materialist paradigm. But we actually already have a firm grounding, materialism or no materialism, for the understanding that it can’t all be explained in that way. While I will get into exactly why below, it’s worth first mentioning that early neuroscience researchers (particularly Wilder Penfield and John Carew Eccles) found that, while they could directly stimulate certain brain functions, like sensations and other perceptions, they could not stimulate the processes that do something with those sensations and perceptions--they couldn’t manipulate mental effort and attention. For that reason, they abandoned the automaton hypothesis.

So if it isn’t just an auto-course-correction that goes on inside our desert trekker’s head and guides her around boulders, what does happen? Would it then be in principle a total mystery, as some suggest?

I think it’s hard to consider it a mystery when it’s so very, very simple--so simple in fact that it’s the most obvious and basic empirical description any of us has about anything. She, the person, experiences the perception of a boulder in her field of vision--she experiences that perception in her conscious mind. She has a model of the world all right, just as the machine may have, made mostly of visual information--but it isn’t just information, it isn’t just signals sent to automatic governing devices for mindless course corrections. She experiences seeing the boulder, and she also experiences making the conscious choice to go around it, before turning back in the right direction, checking her compass or GPS or whatever she needs to stay on the right track. Her physical effort to carry this out comes directly from mental effort to navigate her way--to pay attention to her surroundings, decide what she must do to avoid obstacles in those surroundings, and make her muscles do what she needs them to do to make that happen. If she’s very tired, if she’s fighting a natural tendency for her vision to become less attentive to her surroundings or for her muscles to stop responding to her directions, she has to apply even more attention and mental effort. But the important thing, the thing that makes her stand out from the machine, is this: the experiential quality of her model and her sense information, and her mental effort in conscious choices, can’t be described in physical terms, in terms of the brain and its electrochemical activity. It just can’t; philosophers of mind have tried to figure out a way to do it for a long time, and with all we know and continue to learn about the brain, they’re still failing to do so.

We know a lot about what is happening in the brain--we know which centers of neural activity are correlated with which kinds of mental reports--but we can’t reduce the mental report to the neural activity. The mental cannot be expressed in terms of the physical: an ever-remaining explanatory gap, which is the nature of David Chalmers’ well-articulated “Hard Problem of Consciousness”. Once someone understands that problem sufficiently, they understand that no amount of improvement in our understanding of the neural correlates of consciousness (NCCs) will resolve it. Better models of brain functioning can’t reduce the mental to the physical; they can only better model the physical. It can’t be said that when I decide to walk around a boulder, that decision just is neural activity in the spatial reasoning and other reasoning centers of my brain. It may be associated or correlated with that activity, but it is not the same as it.

This is the source of materialism’s persistent bane, known in philosophy as dualism, which continues to take a determined stand against physicalism/materialism (and more specific ideas such as computationalism or functionalism). Mind persists, and cannot be subsumed under the heading of matter.

Let’s delve for better understanding into more examples of how the human and the machine are different in crossing the desert. Let’s say, for whatever engineering reason it wouldn’t all be in the machine from the start, the machine has sensors that are able to detect when it is low on oil or gas, triggering mechanisms, via the processor, to seek out and inject more from strategically-placed oil or gas reserves. Does the machine feel hungry or thirsty, or not-well-lubricated? Does it make a conscious decision to take care of its needs?
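The automatic reserve mechanism just described is nothing more than a sensor reading compared against a threshold. A minimal sketch, with all thresholds and quantities invented for illustration:

```python
# Toy sketch of the machine's automatic fuel governor described above.
# Every number here is invented for illustration.

def run_governor(fuel, reserve, threshold=20.0, burn=5.0, refill=30.0, steps=10):
    """Burn fuel each step; when the 'sensor' reads below threshold,
    automatically draw a refill from the reserve tank. No feeling is
    involved anywhere--just a conditional branch on a sensor value."""
    for _ in range(steps):
        fuel -= burn                          # consumption while driving
        if fuel < threshold and reserve > 0:  # sensor signal crosses threshold
            take = min(refill, reserve)
            reserve -= take                   # inject from the reserve
            fuel += take
    return fuel, reserve
```

The branch fires whenever the reading dips low, with no room for the machine to “decide to stick it out a little longer” the way the trekker can.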

A human does have sensors in the body to tell her when her cells are getting diminished supplies of water and glucose for cell respiration/ATP production. Those sensors send signals to be interpreted in her brain, conveying that information. But is that all of the story of what happens? No, we need a mental, not just a physical, description to tell us what is going on. She experientially feels hungry and thirsty, and she makes a conscious decision, based on those feelings, to get her water and food out of her bag and consume them. Then again, she could decide not to; she could decide to stick it out a little longer, pace herself, and ration them for a later time when she might need them even more. Of course, the machine could have governing mechanisms for fuel injection and speed control, similar to rationing, too--but it’s automatic. Nothing experiences anything in the operation of those cybernetic governing mechanisms. And nothing would experience anything in the electrochemical cybernetic governing mechanisms of a three-pound meat machine, either--if that was all that was going on.

So what is the true source of the difference, then? What makes the person, and her mind and her brain, different from the machine?

I could begin seeking the answer to such a question only after I truly realized, and became increasingly concerned with, the existence and true nature of it (which took some time and a lot of thought). As a tentative position, I embraced emergentism--the idea that the mental somehow “emerges”--irreducibly--from the physical. Mind cannot be reduced to matter (body, brain), but still somehow comes purely from matter. The paradoxes of that idea, however (held by such well-respected philosophers of mind as John Searle), eventually dissatisfied me too much. If consciousness comes only from matter, how could it be causally efficacious on matter? And if it weren’t causally efficacious, why would it have evolved at all--and why do we experience the causal efficacy of our conscious thoughts and decisions? These are problems of emergentism and epiphenomenalism (the idea that consciousness is a side-effect of neural activity and not causally efficacious) that were put forth at least as far back as William James, but as of yet no one has answered them. Considering the nature of the problems, they can’t be resolved. While emergentist forms of dualism realize that mind cannot be reduced to matter, they open up their own set of intractable problems. Realizing this lack of viability, I was forced to continue searching, with uncertainty about what the answer might be or even if one could be found.

Then, when I did find it, I knew that I had found it, because things began to fall into place in a way I hadn’t imagined they ever could--all it required was a complete paradigmatic shift in my view of life and the universe. The best, most descriptive answer to this problem, the best explanation I’ve found for this difference between a human mind and a machine, the one that resolves the most problems and paradoxes in a satisfying and intuitive way, is one that makes use of the realization that the features of the mind are strikingly similar to and deeply connected with features of quantum physics. The human brain, it turns out, is not a classically describable system like the processors in the desert-crossing machine, or any machine or computer. Reality and the universe itself, and what science can describe, aren’t what we thought they were for the centuries since Newton’s formulation of classical physics--in fact, we’ve known that for over three-quarters of a century (or at least it’s been known in quantum physics--the rest of the world has been slow to catch on). At the atomic and subatomic level, we enter a whole new world. What we have is not hard, round little billiard-ball-like objects existing objectively, as conceptualized in classical physics. What we have is actually an unrealized quantum wave function, a smear or cloud of possibility, something that is more idea-like than actual-material-like.

So if everything is this unresolved spread of potentia, how do we observe an actual state of anything? It is consciousness that causes the collapse of this wave function, turning evolving waves of possibility into actuality. Consciousness decides which, of all possible brain states, is the one that is actually experienced. A machine never reduces the wave function, the cloud of potentialities, into an actuality. As Einstein postulated, if a Geiger counter is placed next to a radioactive particle, the counter, on its own with no one there to observe it, would not register a spike of the pen on the paper, indicating the event of radioactive decay, at any specific time. Instead, there would be a spread-out wave or cloud of many blips or spikes for all the potential times the particle could have decayed. The machine, or any machine, never does that reduction. It is only reduced to a single spike on the page when observed by a conscious observer. The formulators of quantum mechanics (particularly Bohr, Heisenberg, Pauli, Wigner, Bohm, and von Neumann) painstakingly arrived at this conclusion, after rigorous experimentation, debate, and attempts to find a way around it: that a conscious observer not only observes, but participates in and affects the system being tested--and does so by free choices that are outside the causal structure of the system being tested. As von Neumann showed in working out the fine mathematical details of quantum mechanics, the described system could be enlarged to include a room and everything in it, including the instruments and the observer’s own body (sensory organs and brain included), and the free choice and the reduction would still lie in the mind of the observer.
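The reduction von Neumann describes can be stated compactly (this is the standard textbook form, not anything specific to the authors named above): before observation the state is a superposition, and observing outcome k projects it onto the corresponding possibility, with probability given by the Born rule.

```latex
% Superposed state before observation
|\psi\rangle = \sum_i c_i \, |\phi_i\rangle

% Observation of outcome k reduces ("collapses") the state
|\psi\rangle \;\longrightarrow\; \frac{P_k |\psi\rangle}{\lVert P_k |\psi\rangle \rVert},
\qquad \Pr(k) = \lVert P_k |\psi\rangle \rVert^2
```

Nothing in the unitary evolution itself ever picks out one term of the sum; the selection of k is exactly the step at issue.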
Exactly where the cause of the choice comes from remains unanswered by science--but we certainly have the sensation or perception that it is something that is “us” that makes the choice, that directs the brain with the mind by selecting from among the possibilities that the unreduced cloud of brain-potentialities has to offer. The brain has a pilot, which is mind--the dynamics of self-referential quantum measurement opens a cockpit for mind with an intimate connection of instruments and controls. Like any pilot, the mind is constrained by the options afforded to it by the vehicle; however, those options are usually pretty extensive (particularly with a vehicle like a quantum-dynamical brain).

For the specific way in which the brain is described as a quantum dynamical system, I'm referring to the model submitted by Schwartz, Stapp, and Beauregard, rather than the one put forth by Penrose and Hameroff. The Penrose-Hameroff model, which relies on the microtubules of the cell's structural cytoskeleton for a quantum state of superposition, has been criticized because the quantum effects of mostly-structural elements would be washed out by the hot, wet environment of the brain (environmental decoherence). The Schwartz-Stapp-Beauregard model, however, points to the certainty of quantum effects in important functional parts of the brain, starting with the activity of ion channels at the synaptic cleft, which determines the release of neurotransmitters, which in turn controls neural firing patterns--thus basic to the electrochemical functioning of the brain. Because activity in ion channels takes place on an atomic level, it is beyond any doubt in the realm of quantum description, a description which leads to a superposition of many possible brain states due to the many different options available in the cloud of potentiality that we conceptualize, in a classically-simplified way, as the brain (the authors give a detailed account of how the conscious mind of an observer is able to affect the evolving wave function through the quantum Zeno effect).
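For reference, the quantum Zeno effect the authors invoke has a simple standard statement (again, textbook physics rather than their specific model): over short times the probability of a measured state having left its initial condition grows only quadratically, so measuring rapidly and repeatedly holds the state in place.

```latex
% Short-time survival probability after one measurement at time \tau
% (\tau_Z is the characteristic "Zeno time" of the system)
p(\tau) \approx 1 - \left(\frac{\tau}{\tau_Z}\right)^{\!2}

% n rapid measurements within a fixed interval t: the state is "held"
P_{\mathrm{survive}}(t) \approx \left[\,1 - \left(\frac{t}{n\,\tau_Z}\right)^{\!2}\right]^{n}
\;\xrightarrow{\;n \to \infty\;}\; 1
```

This is the mathematical sense in which sustained attention, on the cited model, could keep a chosen pattern of brain activity in place.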

After considering these things, I realized that there is no way theoretically possible that the brain can be described as a classical system capable of being modeled algorithmically in the same way as (or on) a machine. If salient features of its functioning, not just structural features, start at the atomic level--and it has been substantially shown that they do--then we have reached the end of the period when we could notionally describe or imagine the brain as a Turing machine. The universe, and the life that evolved in it, is different from our classically describable systems and computation.

Considering the doors this opens in understanding the nature of consciousness, let’s go back to the questions I considered at the beginning--does the answer to the question “what is consciousness?” matter very much? And if so, is the answer uninteresting, or is it astounding?

If there were no reduction events, no collapse of the wave function, the brain would remain in a superposed cloud state, never resolved into an actual state--and we would never have an actual experience. It would be just the same as Einstein’s Geiger counter without an observer. However, there is an observer--we know what it is: the conscious mind we experience every waking moment of our lives--and the state of the brain, and all of the universe that we encounter, is reduced to an actuality every moment by that observer. This is what allows the universe to be self-aware; without it there would be no awareness. This, of course, is the realization of the Ghost in the Machine (the one against which Gilbert Ryle rhetorically argued). The Hard Problem of Consciousness is resolved when we understand the futility of trying to reduce mind to brain when mind is neither brain nor brain functioning, but the process that selects the brain’s actual functioning from among many possibilities. It is also an essentially non-dual resolution: we understand that the universe is not two kinds of things, mind and matter, but one kind of thing--and it isn’t matter. Consciousness, long banished to the nether regions of respectable science and philosophy by overzealous positivism and materialism, is not only brought back as real, but as the basis of all that is real, as the ground of all being. Perhaps most importantly, the possibility of scientifically-viable spirituality and ethics is revived for a spiritually and ethically impoverished (Western) world. We understand that our conscious choices do shape our world, that we do literally create our own reality, and that all is not a mindless Newtonian clockwork dance of matter and energy in space-time, a swirl of dust in the wind. It seems the question really does matter. And the answer is astounding.

In the darkness of the materialist paradigm that dominates the current academic setting, we’ve forgotten our own consciousness as easily as Gollum forgot his name in the cave in J.R.R. Tolkien’s Lord of the Rings trilogy--even though consciousness is more obvious and personally accessible to us than our own names. And we’re seeking after a goal as damaging and futile as the possession of the One Ring of Power, too, in trying to achieve a materialist description of human beings and consciousness--because, after all the pretension toward progress and improvement (and power), it can only result in bad theories and the attendant nihilism that goes with them. To truly progress in realizing who and what we are, we have to board the Ship of Consciousness and, like Frodo, sail to the beckoning shores of realization that await us.

It is actually pretty simple, after all. When the human being of my thought experiment finally reaches the end of the desert, parched and battered by the harsh, forbidding trek, she can look back on her experiences. She can look back on them and realize that she experienced them, like no machine ever could. And she can know from those experiences, if from nothing else in the world and from no other thought, that she was and is a conscious being. Maybe we can come to our senses as well.


3 Comments:

Blogger Michael Anissimov said...

Why can't we just make machines conscious just by giving them structural elements that depend on quantum processes at the atomic level?

7:12 PM  
Blogger pas said...

I've thought about that question, and honestly, I'm not sure. We are so far from even plausibly being able to artificially recreate anything like a single biological neuron, whose salient features (including quantum features at the atomic level) depend on the fine details of its biological structure. And it isn't just structural elements that depend on quantum processes at the atomic level, because the structural elements of everything depend on quantum processes at the atomic level. It is self-referential quantum measurement that is important for allowing consciousness to enter the causal structure.

If we were able to create something like that, however, I don't think it could be considered a machine--so, in the end, we'd have created a conscious being, but not a conscious machine.

8:03 PM  
Anonymous Anonymous said...

What Michael says.
G.

12:46 AM  
