Why subjective consciousness?

 

 
Posted April 4, 2005 by thomasr

 

I needn’t remind the readers of Science and Consciousness Review of the many different meanings of the word “consciousness.” Some of these lend themselves to scientific study. In Baars’ A Cognitive Theory of Consciousness, Table 10.1 lists nine different functions of consciousness (1988, p. 349). But, what of subjective (phenomenal) consciousness, that is, first-person consciousness? Can these tasks be performed without subjective consciousness? Does subjective consciousness have its own tasks, or is it epiphenomenal, that is, causally irrelevant? Some philosophers, and even some neuroscientists, have suggested as much (Eccles 1992).

In a recently published article Bjorn Merker identifies a vital task for subjective consciousness, and hypothesizes that performing this task supplied the selection pressure for its evolution (Merker, 2005). Here’s the abstract of that article.

“The issue of the biological origin of consciousness is linked to that of its function. One source of evidence in this regard is the contrast between the types of information that are and are not included within its compass. Consciousness presents us with a stable arena for our actions (“the world”) but excludes awareness of the multiple sensory and sensorimotor transformations through which the image of that world is extracted from the confounding influence of self-produced motion of multiple receptor arrays mounted on multi-jointed and swiveling body parts. Likewise excluded are the complex orchestrations of thousands of muscle movements routinely involved in the pursuit of our goals. This suggests that consciousness arose as a solution to problems in the logistics of decision making in mobile animals with centralized brains, and has correspondingly ancient roots.”

I invite you to join me as we explore what I consider an extraordinarily significant step forward in the scientific study of subjective consciousness.

Almost all of us think of subjective consciousness as being confined to biological organisms. (For my earlier views on the possibility of machine consciousness, see Franklin 2003a.) As a mathematician is wont to do, let’s consider the extremes in order to set boundaries. My immediate introspection assures me of my own subjective consciousness. I project this subjective consciousness onto you, since you’re so like me. Analogously, I even project it onto my cats. At the other end of the scale, I do not project subjective consciousness onto a bacterium swimming in some liquid medium. The bacterium, somehow, seems too simple to be subjectively conscious.

Let’s take both a physicalist and a Darwinian perspective on biology. If the bacterium isn’t conscious and you and I are, and if you and I evolved from some unconscious ancestor not too different from that bacterium, then somewhere in between subjective consciousness must have evolved. What was the adaptive evolutionary pressure that led to this evolution of subjective consciousness? Why subjective consciousness?

Perhaps there was no such adaptive evolutionary pressure at all. Gould and Lewontin argue successfully that such competing themes as random fixation of alleles, production of nonadaptive structures by developmental correlation with selected features, etc. might account for traits observed to have evolved (Gould & Lewontin 1979). Can subjective consciousness be an evolutionary spandrel? I don’t think so. I suspect that subjective consciousness is too central a trait, and likely much too costly in brainpower, to have evolved and survived without serving a vital purpose. Merker suggests just such a purpose. Let’s explore it. Our exploration may take a little while, so be patient.

Any autonomous agent (Franklin & Graesser 1997) must continually answer the question “What do I do next?” (see Franklin 1995, Chapter 16, on the action selection paradigm). Thus any animal that moves (ignoring such strange creatures as sea squirts) must continually select actions appropriate to its current environmental conditions. We conjecture that five to ten times during every second we humans carry out a cognitive cycle to select such an action (Franklin et al., in review; Baars & Franklin 2003)!
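To make this action selection paradigm a bit more concrete, here is a minimal sketch in Python of such a sense-select-act loop. Everything in it, the class name, the toy percepts and actions, and the timing, is my own illustrative invention, not code from IDA or from any of the cited work; it simply mimics an agent answering “What do I do next?” roughly five to ten times per second.

```python
import random
import time

# A minimal, made-up sketch of the action selection loop described above:
# sense the environment, answer "What do I do next?", act, and repeat
# several times per second. Nothing here is taken from IDA's actual code.

class SimpleAgent:
    def sense(self):
        """Sample the current state of the environment (stubbed with random values)."""
        return {"light": random.random(), "touch": random.random()}

    def select_action(self, percept):
        """Answer 'What do I do next?' for the current percept."""
        if percept["touch"] > 0.8:
            return "recoil"
        if percept["light"] > 0.5:
            return "approach_light"
        return "keep_wandering"

    def act(self, action):
        print(f"acting: {action}")

agent = SimpleAgent()
for _ in range(10):          # ten cycles, roughly one to two seconds of behaviour
    percept = agent.sense()
    agent.act(agent.select_action(percept))
    time.sleep(0.15)         # about 6-7 cycles per second, within the conjectured 5-10
```

In IDA, of course, each cycle is vastly richer, involving perception, memory and a global workspace, but the bare skeleton of repeated sensing and action selection is the point here.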

To be appropriate, that is, life-sustaining, the choice of each such action must be based on an accurate assessment of the current state of the environment. That’s what senses are for. They support such assessments. To support accurate assessments, each such sense must yield a veridical view of the current environmental condition.

Now, consider any animal that moves and has spatially sensitive sense organs attached to its body. By spatially sensitive, I mean that movement of the sense organ produces apparent movement at the surface of the sense organ, independently of any change in the animal’s environment. For example, when I move my eye, the image on its retina changes rapidly regardless of what’s happening in the environment.

Let’s try an experiment to illustrate this point. Close your left eye. With your open right eye, fixate on an object a foot or so in front of you. With your right index finger, press gently on the upper right side of the right eyelid, and watch your object move. Did the object actually move? Of course not. The apparent motion was due to the movement of your eye caused by the pressure from your finger. Your outside environment had not changed.

This illustrates a crucial problem faced by every animal (robot?) that moves and that has spatially sensitive sense organs. How does it distinguish actual movements in its environment from apparent movements produced by its own movement of its sense organs? To select appropriate actions, it must use its senses to assess its environment accurately. To do so, our animal must somehow solve this problem.
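One standard way to think about how such a correction could work (my gloss, not Merker’s mechanism) is in terms of an efference copy: a copy of the motor command drives a forward model that predicts how much of the incoming sensory change is self-produced, and that predicted share is discounted. The toy Python sketch below, with made-up numbers and function names, shows the idea, and also why the eyelid experiment fools us: pressing with a finger produces no motor command, so nothing is discounted.

```python
# A toy illustration, with invented numbers and function names, of an
# efference-copy style correction: the motor command predicts how much
# of the observed image shift is self-produced, and only the remainder
# is attributed to real motion in the world.

def predicted_self_shift(eye_command_deg: float) -> float:
    """Forward model: commanding the eye to rotate right by x degrees
    shifts the retinal image left by roughly x degrees."""
    return -eye_command_deg

def estimated_world_motion(observed_shift_deg: float, eye_command_deg: float) -> float:
    """Observed shift minus the predicted self-produced shift."""
    return observed_shift_deg - predicted_self_shift(eye_command_deg)

# A commanded 5-degree eye movement: the 5-degree image shift is fully
# explained by the movement itself, so no motion is seen in the world.
print(estimated_world_motion(observed_shift_deg=-5.0, eye_command_deg=5.0))  # 0.0

# Pressing the eye with a finger: the same image shift arrives with no
# motor command, nothing is discounted, and the object appears to move,
# just as in the eyelid experiment above.
print(estimated_world_motion(observed_shift_deg=-5.0, eye_command_deg=0.0))  # -5.0
```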

Merker cites the example of the earthworm whose tactile sense allows it to recoil when touched. However, the earthworm doesn’t recoil from the ground as it crawls over it. The earthworm, somehow, distinguishes self-produced, apparent motion from real motion in its environment. It solves the crucial problem. Merker attributes this solution to local mediation by giant fibers in the segmented worm’s ventral nerve cord.

But, what about animals like ourselves who have multiple sense organs, some, like our eyes, with countless degrees of freedom of motion? Our crucial problem becomes hugely more difficult. It’s Merker’s fundamental insight that subjective consciousness evolved to solve this crucial problem.

Let’s continue our exploration by carefully examining this insight. Our physicalist assumption assures us that there’s a real world out there. Can we know it? We can know parts of it, but only parts, through our senses. We humans can’t directly sense the radio frequency electromagnetic waves currently present in our environment, because we have no suitable sense organ. With what senses we do have, we each construct our own individual version of that part of the real world out there that we can know. Construct?

Yes, indeed. Construct. There is no red out there, only light waves of a certain wavelength. We construct the sensation of red from these waves of light. There are no sounds out there, only vibrations in the air. We construct sounds. There are no objects out there qua objects. We construct them. I distinguish the computer monitor in front of me as a separate object because it is useful for me to do so. A fly, buzzing around the room, would likely not so distinguish it, because such an object would be of no use to the fly. The world doesn’t come nicely partitioned into bar-coded objects, already classified. Rather, we must learn to classify, to categorize, to construct objects qua objects. In the process we each construct our own individual world, in which we live at its perpetual center. It’s this constructed individual world, unified, coherent and stable, that serves each of us as the basis for our continual choice of actions.

It’s part of Merker’s fundamental insight that subjective consciousness makes this individual, unified, coherent and stable constructed world possible. How does it do so? By solving the crucial problem discussed above, that is, by shielding us from apparent motion self-produced by our own movement of our sense organs. The other part of Merker’s fundamental insight says that the fitness gains resulting from a solution of this difficult and crucial problem provided at least some of the evolutionary pressure that allowed subjective consciousness to evolve.

There we have it. Why subjective consciousness? To enable us each to construct an individual, unified, coherent and stable world in which to continually select appropriate, life-sustaining actions. It’s as simple as that.

Well, has Merker’s fundamental insight solved all the problems of subjective consciousness for us? By no means. We’re still left asking for the mechanisms that produce subjective consciousness. This is the physicalist version of the mind-body problem, that is, by what mechanisms is subjective consciousness produced by brains? But this fundamental insight should make the hunt for these mechanisms easier. It tells us about the task of subjective consciousness, so that we can now look for mechanisms that accomplish that task. We can look for mechanisms that correct for apparent motion self-produced by our own movement of our sense organs. This should guide our search. We now know what to look for.

We’re also left with the question of which other animals are subjectively conscious. Again Merker’s fundamental insight guides us. We know that almost every animal must solve for itself the crucial problem of distinguishing apparent motion, self-produced by its own movement of its sense organs, from actual motion of objects in its environment. Asking how members of a given species solve this problem may well lead us to answering the question of its subjective consciousness.

In addition, we’re also left with the question of the possibility of non-biological subjective consciousness. Can a robot or a software agent ever be phenomenally conscious? For me this question specializes into: is my software agent, IDA (Franklin 2003a, 2003b), subjectively conscious? Though IDA is functionally conscious in that she implements much of Baars’ Global Workspace Theory (Baars 1997), I’ve never thought she was subjectively conscious. But, until now, I’d had no convincing argument against the possibility. (Philosopher of consciousness Dave Chalmers (1996) once suggested that I claim subjective consciousness for IDA, and challenge others to prove that she wasn’t (personal communication).) Now I can argue that IDA isn’t subjectively conscious because she doesn’t move her single sense organ through space. The crucial problem, examined so exhaustively above, doesn’t exist for her, though the necessity for continual action selection based on sensory data certainly does. The idea of the cognitive cycle mentioned above originated from looking at IDA.

But, what about robots, like the Sony Aibo, with spatially sensitive sense organs moving through the world? They should face the same crucial problem that each such animal does. Suppose we build in mechanisms to shield the robot’s action selection from apparent motion self-produced by its own movement of its sense organs. Might such a robot be subjectively conscious? If our mechanisms allow the robot to construct its own individual, unified, coherent and stable world, subjective consciousness might well result. Is it possible to design such mechanisms? I don’t know, but I’m certainly going to work on it.
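What might such a shielding mechanism look like in a robot? Here is one very rough sketch, again my own illustration and not a proposal from Merker or from any actual Aibo software: if the robot knows the pan angle it has commanded for its camera, it can re-express each detection in a fixed frame before updating the world model that its action selection consults. The model then stays stable while the camera sweeps, just as our visual world stays stable while our eyes move.

```python
# A rough, invented sketch of a "shielding" mechanism for a robot with a
# panning camera: each detection is converted from camera coordinates to
# a fixed frame using the commanded pan angle, so the world model that
# action selection consults does not move when the camera does.

def camera_to_world_bearing(bearing_in_camera_deg: float, camera_pan_deg: float) -> float:
    """Re-express an object's bearing, measured relative to the camera,
    as a bearing in the robot's fixed body frame."""
    return bearing_in_camera_deg + camera_pan_deg

world_model = {}  # object name -> bearing in the fixed frame

# The same stationary ball, seen while the camera pans through three poses.
observations = [("ball", 30.0, 0.0), ("ball", 10.0, 20.0), ("ball", -10.0, 40.0)]

for name, bearing_cam, pan in observations:
    world_model[name] = camera_to_world_bearing(bearing_cam, pan)
    print(name, world_model[name])  # stays at 30.0: no apparent motion leaks through
```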

Though relatively lengthy, this commentary certainly doesn’t do justice to the depth of Merker’s article (2005). I urge you to read it for yourself.

And, thank you for accompanying me on this detailed exploration of Merker’s fundamental insight. I’d be most interested in your comments on what I’ve said. (franklin@memphis.edu)

© 2005 Stan Franklin

References

  1. Baars, B. J. 1988. A Cognitive Theory of Consciousness. Cambridge: Cambridge University Press.
  2. Baars, B. J. 1997. In the Theater of Consciousness. Oxford: Oxford University Press.
  3. Baars, B. J., and S. Franklin. 2003. How conscious experience and working memory interact. Trends in Cognitive Sciences 7:166-172.
  4. Chalmers, D. J. 1996. The Conscious Mind. Oxford: Oxford University Press.
  5. Eccles, J. C. 1992. The Human Psyche. London: Routledge.
  6. Franklin, S. 1995. Artificial Minds. Cambridge, MA: MIT Press.
  7. Franklin, S. 2003a. IDA: A Conscious Artifact? Journal of Consciousness Studies 10:47-66.
  8. Franklin, S. 2003b. What the IDA model says about the LR situation. http://www.sci-con.org/WW_IDAmodel.html.
  9. Franklin, S., B. J. Baars, U. Ramamurthy, and M. Ventura. In review. The Role of Consciousness in Memory. (Available online at http://www.cs.memphis.edu/~wrcm/ under “draft online”.)
  10. Franklin, S., and A. C. Graesser. 1997. Is it an Agent, or just a Program?: A Taxonomy for Autonomous Agents. In Intelligent Agents III. Berlin: Springer Verlag.
  11. Gould, S. J., and R. C. Lewontin. 1979. The Spandrels of San Marco and the Panglossian Paradigm: A Critique of the Adaptationist Programme. Proceedings of the Royal Society of London, Series B 205:581-598.
  12. Merker, B. 2005. The liabilities of mobility: A selection pressure for the transition to consciousness in animal evolution. Consciousness and Cognition 14:89-114 (Special Issue on the Neurobiology of Animal Consciousness).


 


One Comment


  1.  

    Man’s phenomenal experience is unique, and nothing may match the marvels given to him by his Creator (God). Whatever scientific progress man achieves is due to those gifts. Man develops things, theorizes and discovers things; for example, making a camera is a successful attempt to imitate the eye. Cameras improved: first they produced black-and-white photos, then coloured photos, then the movie, where you can watch a long film and may forget that you are watching it. However, the original thing behind all of that is the eye, which nothing may parallel, let alone surpass.
    You may of course work on any subject, including subjective consciousness and its mechanisms. It is quite fair for anyone to choose the area he likes to explore, yet man’s consciousness can never itself be surpassed, “outconscioused”, not even in a dream, because even the latter is due to it or to the subconscious (the power of imagination). Until you achieve any progress in your endeavor, kindly think of this: how about a certain look, “a special one”, from the inside with closed eyes, the mind’s eye receiving whatever comes from the external world and is embraced within; concentrate on the process and the sort of mechanism. Give it a try, or follow the events to be witnessed at conferences such as the Tucson-organized ones on CONSCIOUSNESS.




