Understanding MindBrain

 
 


 
 

IDA on Will: It’s no Illusion

 

 
 

 
 
 
 
 
 


 


Posted December 29, 2002 by virgil

 
 
 

The issue of free will is perhaps the single most often debated issue in the history of philosophy (for annotated bibliographies see http://www.ucl.ac.uk/~uctytho/dfwIntroIndex.htm). Most modern materialists, whether philosophers or scientists, believe that the universe, at scales beyond the quantum, is deterministic. This leads them to class free will with magic: it’s only an illusion.

But it seems abundantly clear from introspection, even to us materialistic scientists, that we do exercise will, even if it’s not free, that is, not magical. We make choices, even if they are, at least in principle, deterministic. Sloman has made this distinction between will and free will quite convincingly (1992/3; see also Franklin 1995, pp. 35-40).

However, scientists have also learned not to be too trusting of introspection. There are too many examples of beliefs that are introspectively “abundantly clear” and, at the same time, just plain wrong. In the case of will, this is precisely the contention of D. M. Wegner’s “The Illusion of Conscious Will” (2002). The context of this essay is Thomas W. Clark’s review of Wegner’s book, which recently appeared on SCR.

The following passage from the review seems intended to express one primary thrust of the book.

Our folk-psychological theory of action interprets this regular sequencing of intention and behavior as causal, with the conscious, mental intention (the will) driving the physical effect (behavior). But, Wegner says, the actual causal story behind human behavior involves a “massively complicated set of mechanisms,” what he calls the “empirical will,” that produces both intention and action. Since we aren’t in a position to observe or understand these mechanisms, instantiated as they are by the complex neural systems of our brain and body, we construct an explanation involving the experienced, phenomenal will: we, as conscious, mental, willing agents, simply cause our behavior.

It essentially asserts that will, conscious mental intention, is an illusion facilitated by our ignorance of the mechanisms of action selection.

The IDA model (see below) tells us otherwise. It implements William James’ ideomotor theory of voluntary action selection (1890) as embraced in Baars’ global workspace theory (1988, Chapter 7). Conceptual and computational mechanisms are specified, providing what are hopefully testable hypotheses as to how voluntary action selection occurs in humans (Franklin 2000).

In what follows I intend first to briefly describe a little of the workings of the IDA model. I’ll then discuss its mechanisms for voluntary action selection in a little more detail. Next I’ll offer a different interpretation of the data of Libet and others (Libet 1999; Libet et al. 1983; Haggard and Eimer 1999) that Wegner (or perhaps only Clark) uses to bolster his argument. Finally, I’ll remark on Sloman’s, in my view, definitive closure of the free will issue.

The IDA model is a conceptual and computational model of consciousness, implementing Baars’ global workspace theory, and of cognition, partially implementing theories of Barsalou (1999), Glenberg (1997), Sloman (1999), and Kintsch (1998). The computational model includes most of the conceptual model, implemented in the form of a running software agent, IDA, with modules for perception, working memory, associative memory, action selection, deliberation, language generation, voluntary action selection, etc. IDA completely automates the work of a human personnel agent who assigns new jobs to US sailors at the end of a tour of duty, including negotiating with a sailor in natural language (Franklin and Graesser 2001, Franklin 2001). (See http://csrg.cs.memphis.edu/csrg/html/papers.html for many online papers on the IDA model.)

The IDA model is intended to facilitate thinking about issues in consciousness and cognition. Questions can be put to the model: How does it work in the model? What’s true in the model? The answers constitute hopefully testable hypotheses about how it’s done in humans. Here we apply the IDA model to questions of will.

In the IDA model, actions are selected by the behavior net, populated by behavior streams (Baars’ goal context hierarchies) made up of behaviors (Baars’ goal contexts). Each behavior is a coalition of behavior codelets (Baars’ processors, Edelman’s neuronal groups (1987), Ornstein’s small minds (1986)). These behavior streams are instantiated as a result of conscious broadcasts, but are themselves unconscious. Conscious broadcasts are generated by attention codelets (Baars’ processors). These attention codelets, (usually) watching additions to working memory and from associative memory, gather information codelets into coalitions, which compete for access to consciousness. The information carried by the winning coalition constitutes the contents of the conscious broadcast, which goes out to all codelets. Receiving the broadcast, behavior codelets that deem themselves relevant instantiate their behavior stream, bind its variables with information from the broadcast, and activate behaviors. (This is Baars’ conscious access hypothesis in action (2002).) The behavior net, at each moment, chooses the most appropriate behavior to execute. The action so chosen is taken by the behavior codelets that make up the executing behavior. This is what I refer to as consciously mediated action selection. Though facilitated by the conscious broadcast, the ensuing process resulting in the selected action being taken is entirely unconscious. Though supported by this process of consciously mediated action selection, voluntary action selection is a more complex process, requiring additional mechanisms to implement James’ ideomotor theory.
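The cycle just described, coalitions competing for consciousness, a broadcast, and relevant behavior codelets instantiating and binding their streams, can be caricatured in a few lines of code. This is an illustrative Python sketch, not the actual IDA implementation; all class, function, and field names here are invented for the example.

```python
# Illustrative sketch of consciously mediated action selection as
# described above. Not IDA's actual code; names are invented.
from dataclasses import dataclass, field

@dataclass
class Coalition:
    """Information codelets gathered by an attention codelet."""
    content: dict          # the information the coalition carries
    activation: float      # decides the competition for consciousness

@dataclass
class BehaviorStream:
    """A goal context hierarchy instantiated by behavior codelets."""
    name: str
    bindings: dict = field(default_factory=dict)

def conscious_broadcast(coalitions, behavior_codelets):
    """One cycle: the most active coalition wins the competition;
    its content is broadcast to all behavior codelets, and those that
    deem themselves relevant instantiate and bind their streams."""
    winner = max(coalitions, key=lambda c: c.activation)
    instantiated = []
    for codelet in behavior_codelets:
        if codelet["relevant_to"] in winner.content:   # deems itself relevant
            stream = BehaviorStream(codelet["stream"])
            stream.bindings.update(winner.content)     # bind variables
            instantiated.append(stream)
    return winner, instantiated

coalitions = [
    Coalition({"job": "J123", "priority": 2}, activation=0.7),
    Coalition({"sailor": "S42"}, activation=0.4),
]
codelets = [{"relevant_to": "job", "stream": "offer-job"}]
winner, streams = conscious_broadcast(coalitions, codelets)
# The more active coalition wins, and one "offer-job" stream is
# instantiated with the broadcast's information bound into it.
```

Note that everything after the broadcast, the instantiation, binding, and eventual execution, happens with no further involvement of "consciousness," which is the point of the phrase consciously mediated.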

We humans most often select actions subconsciously, but we also make voluntary choices of action, often as a result of deliberation (Sloman 1999). Baars argues that such voluntary choice is the same as conscious choice (1997, p. 131). We must carefully distinguish between being conscious of the results of an action and consciously deciding to take that action, that is, consciously deliberating on the decision. The latter case constitutes voluntary action. William James suggests that any idea (internal proposal) for an action that comes to mind (to consciousness) is acted upon unless it provokes some opposing idea or some counter-proposal. The IDA model furnishes an underlying mechanism implementing James’ ideomotor theory of volition within the architecture of a software agent.

Let’s suppose that IDA is at the point of considering which of several already evaluated jobs to offer a given sailor. Information about these jobs is currently in working memory. The players in this decision-making process include several proposing attention codelets and a timekeeper codelet. A proposing attention codelet’s task is to propose that a certain job be offered to the sailor. Choosing a job to propose on the basis of the codelet’s particular pattern of preferences, it brings information about itself and the proposed job to “consciousness” so that the timekeeper codelet can know of it. (This and all the subsequently described activity is consciously mediated through the behavior net as described above.) Its preference pattern may include several different issues (say, priority, moving cost, arrival time, etc.) with differing weights assigned to each. For example, our proposing attention codelet may place great weight on low moving cost, some weight on priority, and little weight on the others. This codelet may propose the second job on the list because of its low cost and high priority, in spite of a late arrival time. If no other proposing attention codelet objects (by bringing itself to “consciousness” with an objecting message) and no other such codelet proposes a different job within a prescribed span of time, the timekeeper codelet will mark the proposed job as being one to be offered. If an objection or a new proposal is made in a timely fashion, it will not do so.

Two proposing attention codelets may alternately propose the same two jobs several times. Several mechanisms tend to prevent continuing oscillation. Each time a codelet proposes the same job, it does so with less activation in its coalition and, so, has less chance of coming to “consciousness.” Also, the timekeeper loses patience as the process continues, thereby diminishing the time span required for a decision. A job proposal may also alternate with an objection, rather than with another proposal, with the same kinds of consequences. These occurrences may also be interspersed with the evaluation of additional jobs. If a job is proposed but objected to, and no other is proposed, the (consciously mediated) evaluation process may be expected to continue, yielding the possibility of finding a job that can be agreed upon.
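The proposal-objection-timekeeper dynamic can be sketched as a small simulation. Again this is an illustrative Python sketch, not IDA’s actual implementation: the function name, the event encoding, and the specific patience values are invented, and the decaying coalition activation is collapsed into the timekeeper’s shrinking patience.

```python
# Illustrative sketch of James' ideomotor theory as realized above:
# a proposal stands unless an objection or counter-proposal arrives
# before the timekeeper's patience runs out. Names and values are
# invented for the example; this is not IDA's actual code.

def ideomotor_select(events, patience=3):
    """events: chronological ticks, each ("propose", job),
    ("object", job), or None for a quiet tick. A proposal is adopted
    once it stands unchallenged for `patience` consecutive quiet
    ticks; each challenge makes the timekeeper less patient.
    Returns the job marked to be offered, or None."""
    current, quiet = None, 0
    for ev in events:
        if ev is None:                        # a quiet tick
            if current is not None:
                quiet += 1
                if quiet >= patience:
                    return current            # timekeeper marks the job
        else:
            kind, job = ev
            if current is not None:           # a standing proposal is challenged
                patience = max(1, patience - 1)   # timekeeper loses patience
            if kind == "propose":
                current, quiet = job, 0       # new (or counter-) proposal
            else:
                current, quiet = None, 0      # objection vetoes the proposal

# Job A is proposed and objected to; job B is then proposed and stands:
events = [("propose", "A"), ("object", "A"), ("propose", "B"), None, None]
chosen = ideomotor_select(events)   # → "B"
```

Note how the shrinking `patience` plays the role described above: after the objection, job B needs to stand unchallenged for only two quiet ticks rather than three, so continued wrangling converges on a decision rather than oscillating forever.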

Thus we have described how the ideomotor theory of voluntary action has been implemented in the IDA model, yielding hypotheses about how we humans do it.

Clark describes in general terms the areas from which Wegner adduces his argument as to the illusion of will. He then goes on to invoke Libet’s work.

That the phenomenal will might be an “add-on” is reinforced by a review of Libet’s (Libet et al, 1999) well-known findings that the consciousness of initiating voluntary movement is reliably preceded by the (unconscious) onset of a movement readiness potential. Such evidence suggests that consciousness, and therefore conscious will, can hardly function as the mental initiator or controller of action it is often taken to be.

To the contrary, the IDA model suggests an interpretation of the data under which Libet’s experimental work lends support to our implementation of voluntary action as mirroring what happens in humans. He writes (Libet et al. 1983), “Freely voluntary acts are preceded by a specific electrical change in the brain (the ‘readiness potential’, RP) that begins 550 ms before the act. Human subjects became aware of intention to act 350-400 ms after RP starts, but 200 ms before the motor act. The volitional process is therefore initiated unconsciously. But the conscious function could still control the outcome; it can veto the act.” (Libet’s work has since been replicated and improved by others (Haggard and Eimer 1999), but the new data doesn’t affect our argument.)

Libet interprets the onset of the readiness potential as the time of the decision to act. Suppose we interpret it, instead, as the time an attention codelet decides to propose the action (that a particular job be offered). The next 350-400 ms would be the time required for the attention codelet to gather its information (information codelets) and win the competition for consciousness. The next 200 ms would be the time during which another attention codelet (the timekeeper) would become active and wait for objections or alternative proposals from some third neuronal group (attention codelet) before initiating the action. This scenario gets the sequence right, but begs the question of the timing. Why should it take 350 ms for the first neuronal group (attention codelet) to reach consciousness and only 200 ms for the next? Our model would require such extra time during the first pass to set up the appropriate goal context hierarchy (behavior stream) for the voluntary decision making process, but would not require it again during the second. The problem with this explanation is that we identify the moment of phenomenal consciousness with the time of the broadcast, which occurs before instantiation of the behavior stream. So the relevant question is whether phenomenal consciousness occurs in humans only after a responding goal context structure is in place.
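Read through this reinterpretation, Libet’s reported numbers line up as follows. This is a plain restatement of the arithmetic from the passage above; the stage labels in the comments are the proposed IDA-model reading, not Libet’s own.

```python
# Libet's timing data, in ms relative to the motor act (at 0),
# annotated with the IDA-model reinterpretation proposed above.
ACT = 0
RP_ONSET = -550    # readiness potential begins: on our reading, an
                   # attention codelet decides to propose the action
AWARENESS = -200   # subject becomes aware of the intention: the
                   # coalition wins the competition and is broadcast

# 350-400 ms for the codelet to gather its information codelets and
# win the competition for consciousness:
assert 350 <= AWARENESS - RP_ONSET <= 400
# 200 ms for the timekeeper to await objections before the act:
assert ACT - AWARENESS == 200
```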

From Clark’s review it seems clear that Wegner offers many other pieces of evidence supporting his thesis. I want to emphasize that this essay is intended to respond only to what was said rather explicitly in the review; it is not intended as a response to the book.

Clark’s review also brings up the illusory issue of free will.

But there may be more at stake that reinforces the illusion of total conscious control, namely the deeply ingrained, folk-metaphysical assumption that human beings possess contra-causal free will. If, as many suppose outside the scientific community, we are free to act in ways that are non-mechanistic, indeed, that defy mechanism, then this supposition invites a particularly strong interpretation of the sense of voluntary action: that we act for no cause but our self-generated will. But the causal power of this will is exactly what Wegner painstakingly exposes as an illusion.

I’d like to emphasize that the IDA model in no way supports the notion of “contra-causal free will,” nor any other form of magic. For me the issue of free will has been more than adequately disposed of by Aaron Sloman. Here I’ll only refer to his work (1992/3) and to my version of it (Franklin 1995, pp. 35-40).

© Stan Franklin

Author Information

Stan Franklin is the W. Harry Feinstone Interdisciplinary Research Professor at the University of Memphis. He is the author of Artificial Minds.

References

  1. Baars, B. J. 1988. A Cognitive Theory of Consciousness. Cambridge: Cambridge University Press.
  2. Baars, B. J. 1997. In the Theater of Consciousness. Oxford: Oxford University Press.
  3. Baars, B. J. 2002. The conscious access hypothesis: origins and recent evidence. Trends in Cognitive Science 6:47-52.
  4. Barsalou, L. W. 1999. Perceptual symbol systems. Behavioral and Brain Sciences 22:577-609.
  5. Edelman, G. M. 1987. Neural Darwinism. New York: Basic Books.
  6. Franklin, S. 1995. Artificial Minds. Cambridge MA: MIT Press.
  7. Franklin, S. 2000. Deliberation and Voluntary Action in ‘Conscious’ Software Agents. Neural Network World 10:505-521.
  8. Franklin, S. 2001. Automating Human Information Agents. In Practical Applications of Intelligent Agents, ed. Z. Chen, and L. C. Jain. Berlin: Springer-Verlag.
  9. Franklin, S., and A. Graesser. 2001. Modeling Cognition with Software Agents. In CogSci2001: Proceedings of the 23rd Annual Conference of the Cognitive Science Society, ed. J. D. Moore, and K. Stenning. Mahwah, NJ: Lawrence Erlbaum Associates; August 1-4, 2001.
  10. Glenberg, A. M. 1997. What memory is for. Behavioral and Brain Sciences 20:1-19.
  11. Haggard, P, and M. Eimer. 1999. On the relation between brain potentials and the awareness of voluntary movements. Experimental Brain Research 126:128-133.
  12. James, W. 1890. The Principles of Psychology. Cambridge, MA: Harvard University Press.
  13. Kintsch, W. 1998. Comprehension. Cambridge: Cambridge University Press.
  14. Libet, B. 1999. Do We Have Free Will? Journal of Consciousness Studies 6.
  15. Libet, B., C. A. Gleason, E. W. Wright, and D. K. Pearl. 1983. Time of Conscious Intention to Act in Relation to Onset of Cerebral Activity (Readiness-Potential): The Unconscious Initiation of a Freely Voluntary Act. Brain 106:623-642.
  16. Libet, B., A. Freeman, and K. Sutherland. 1999. The Volitional Brain: Towards a Neuroscience of Free Will. Thorverton, UK: Imprint Academic.
  17. Ornstein, R. 1986. Multimind. Boston: Houghton Mifflin.
  18. Sloman, A. 1992/3. How to dispose of the free will issue. AISB Quarterly 82:31-32.
  19. Sloman, A. 1999. What Sort of Architecture is Required for a Human-like Agent? In Foundations of Rational Agency, ed. M. Wooldridge, and A. Rao. Dordrecht, Netherlands: Kluwer Academic Publishers.
  20. Wegner, D. M. 2002. The Illusion of Conscious Will. Cambridge, MA: MIT Press.


 


One Comment


  1.  

    The self owns a free will, and the freedom of the will of the human being is the miracle of miracles of God, the Creator.

    When the self wants (“intentionally”) to choose “something”, the process of activation of the first neural groups takes time (information formation process in human beings). The difference in time mentioned above in the article and relevant to models (if it can be paralleled/compared with the time consumed in human beings’ brains) is probably due to the “density” and lack of a decisive decision (oscillation/hesitation and many other implications or impediments, etc.) that the first neural group has to deal with before it could show up to the self. It needs time before it can successfully make its way (through its other sister neurons that are ever-moving, “competing”, filling in the site, and waiting to be chosen at any moment) to the “spotlight”.
    A second group may take less time being prepared to move forward towards the spotlight when there is similarity (maybe semantically, or any kind of relation that you may imagine to exist) between the two groups, the first and the second. The first group, in a sense, may facilitate the way for other followers. Therefore, I claim that decision making in the human brain is processed, at least at a certain stage, via a certain mechanism. Can anyone describe the process taking place in the human brain or confirm such claims from any perspective other than the subjective (introspective) one?




