Understanding MindBrain

 
 
Information Integration Theory of Consciousness

 

 
Posted December 27, 2005 by thomasr

 

The shadow of the blackbird
Crossed to and fro.
The mood
Traced in the shadow
An indecipherable cause.
-Wallace Stevens

Introduction

In 26 syllables, Wallace Stevens summarizes our current understanding of how the brain makes consciousness. For now, the signals we receive are indecipherable. We have to learn how to read the signals, and for that we need the right brain theory (1). During the past decade-and-a-half, Giulio Tononi has been an important theoretician of consciousness. In a series of groundbreaking papers, Tononi and co-workers have developed methods for quantifying several aspects of neuronal functioning for the brain viewed as a complex system. The paper discussed here is a summary of this work.

Tononi’s writings are noteworthy for his grounding in phenomenology, and his lucid style of presentation. He has noticed three aspects of conscious experience:

  1. given any definition of “conscious state”, the brain produces an infinity of them;
  2. each conscious state is prime, rather in the sense of a prime number; it cannot be deconvoluted into lesser states. Tononi terms this characteristic the “integration” of a state;
  3. conscious experience unfolds in well-defined intervals: about 100 to 200 milliseconds to develop a fully formed sensory experience, and about 2 to 3 seconds for a single conscious moment.

It is these three observations that Tononi seeks to understand in his information integration theory of consciousness. He discusses two hypotheses:

  1. consciousness corresponds to the capacity of a system to integrate information;
  2. the quality of consciousness is determined by the informational relationships among the elements of a complex, which are specified by the values of effective information among them.

What is integrated complexity and how can it be measured?

It is intuitively compelling to think that an entity is more conscious the more information it can bring to bear on life’s perturbations. Tononi captures this intuition in a thought experiment comparing a set of one million photodiodes to a system of minicolumns in the human brain. Each diode responds to photic perturbation by producing either an “off” state or an “on” state. Since each diode responds individually, with no interactions among diodes, the collection of one million photodiodes can produce some 2^1,000,000 states. In the brain, a system of minicolumns is internally coupled, and because of this the number of possible states is much greater. Although Tononi does not display this value quantitatively, there is a kind of number, the Holderness number, which can be calculated for systems whose elements interact and which yields an estimate of the number of states of such systems (2).
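The counting behind the thought experiment is easy to check. The short sketch below (illustrative only, not from Tononi's paper) enumerates the joint states of a few independent diodes and then computes how many decimal digits the number 2^1,000,000 would have:

```python
import itertools
import math

# Each photodiode independently reports "off" (0) or "on" (1), so a collection
# of N non-interacting diodes has exactly 2**N joint states. For a tiny N we
# can enumerate them directly:
def diode_states(n):
    return list(itertools.product([0, 1], repeat=n))

states = diode_states(3)
print(len(states))  # 8 == 2**3

# For N = 1,000,000 the count 2**1_000_000 is far too large to enumerate, but
# its size is easy to describe: it has N * log10(2) decimal digits,
# about 301,030 of them.
n_digits = math.floor(1_000_000 * math.log10(2)) + 1
print(n_digits)  # 301030
```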

It is useful to think of a system whose processing elements can communicate among themselves (that is, where separate processing elements are integrated) as capable of providing context. In previous work, Tononi and others have operationalized the concept of information integration by developing a metric for the capacity of a system to integrate information, termed Φ (3). Φ is calculated by finding the minimum amount of information that can be exchanged across a bipartition of a subset of n elements of a system. The Φ metric possesses several unique attributes:

  1. It measures the actual, rather than the average, integrated information, thereby permitting an ordering of brain structures.
  2. Φ is independent of the period of observation of a system or its irregularities over time.
  3. Φ is independent of external observers.
  4. Φ is independent of neural codes.

The Φ metric has allowed for quantitative analysis of neural architectures, and has demonstrated that architectures with large Φ values (why not call them “hifi architectures”) depend on elements which are both unique in their patterns of interconnectedness and highly connected. Lofi architectures include: 1. homogeneously connected networks; 2. strongly modular networks (i.e., little interconnection); 3. randomly connected networks; and 4. networks where connections are unidirectional only. Hifi architectures map onto cognitive functions utilizing rapid, bidirectional interactions among modular elements. The thalamocortical system is such an architecture, while the cerebellar and basal ganglia systems are not.

In summary, hifi analysis allows for: 1. quantification of the fundamental intuition that conscious states are both highly integrated and highly differentiated; and 2. identification of areas of the brain which meet these criteria.
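The two hifi criteria can be illustrated with toy adjacency matrices. The sketch below is my own illustration, not Tononi's analysis: it uses two crude proxies for the criteria, average out-degree (how connected the nodes are) and the number of distinct connection patterns (how differentiated the wiring is), and applies them to two invented 4-node networks:

```python
# Toy directed adjacency matrices (1 = connection) for two 4-node networks,
# with crude proxies for the two "hifi" criteria: nodes should be highly
# connected AND each node's connection pattern should be unique.
homogeneous = [  # every node wired to every other: integrated, undifferentiated
    [0, 1, 1, 1],
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [1, 1, 1, 0],
]
modular = [  # two isolated 2-node modules: differentiated, poorly integrated
    [0, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 0],
]

def avg_degree(adj):
    # average number of outgoing connections per node
    return sum(map(sum, adj)) / len(adj)

def unique_patterns(adj):
    # number of distinct connection patterns (self-connections ignored)
    rows = {tuple(v for j, v in enumerate(row) if j != i)
            for i, row in enumerate(adj)}
    return len(rows)

print(avg_degree(homogeneous), unique_patterns(homogeneous))  # 3.0 1
print(avg_degree(modular), unique_patterns(modular))          # 1.0 2
```

Each toy network fails one criterion: the homogeneous network is well connected but all its nodes look alike, while the modular one has some pattern diversity but little connectivity. A hifi architecture would score high on both.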

The effective information matrix (EIM)

Tononi states: “the quality of consciousness is determined by the informational relationships that causally link its elements.” Drawing on graph theory, Tononi constructs an abstract relational space, termed the “qualia space”. He presents two hypothetical networks, each with four elements and each with a Φ value of 10 bits. Network 1 has a divergent architecture, while Network 2 has a chain architecture.

The effective information matrix defines the strengths of all possible connections within a network. EIM 1 is isomorphic with Network 1, and EIM 2 is isomorphic with Network 2. Therefore, although the state diagrams of the two networks are identical, each network is a unique mapping in qualia space due to the differences in their EIMs. It turns out that the EIM is not a way of discovering the neural correlates of consciousness (NCC); it is a method of distinguishing qualia states once a suitable NCC has been found by other means. Based on phenomenological and imaging data and hifi analysis, Tononi favors the distributed thalamocortical network (and its supporting cast) as the ground for both the specialization and integration of consciousness.

The time it takes

In any brain theory, experimentally derived processing times serve as a set of constraints. Hifi analysis suggests that in the early stages of percept formation, Φ values would be low, but that they would grow over the next one to two seconds. Tononi mentions but does not detail several simulations in which maximally differentiated responses in thalamocortical networks appear between tens of milliseconds and a few seconds, which is consistent with experimental observations.

Summary

In my view, Tononi is one of a handful of theoreticians of consciousness who takes the scientific high road; that is to say, he thinks carefully about a problem, reckons a plausible line of attack, and carries through his research with appropriate methodologies. Only in this way is a cumulative research strategy possible. In the current paper, he recapitulates work on a metric for complex neural systems. He then outlines work in the developing area of neural correlates of consciousness. The theory of qualia presented herein is a finer-grained approach to the Dynamic Core Hypothesis formulated by Tononi and Edelman. The theory of the Φ metric is well stated, but there are interesting implications of the theory not alluded to in Tononi’s paper. For example, although Φ is usefully invariant over the period of observation, I wonder if it is also scale invariant. It has been suggested that neuroscience wanders among at least twelve orders of magnitude. Emergence and supervenience are terms invented to depict phenomena covering orders of magnitude. Could Φ give us a way to cross the great divide?

Is there evidence that challenges hifi as necessary for consciousness? It would seem that fugue states fit the bill. In a fugue state, a person appears to act normally to casual observers, but he or she has lost most awareness of personal history and identity. People afflicted with this strange disorder recover their memories spontaneously. It seems intuitively true that a person in a fugue state is less conscious than that person after recovering his history and identity. Would the Φ value differ for that person during and after the fugue (4)?

Despite its conceptual interest, I do not think that the theory of qualia presented herein explains much about consciousness per se. Linking nondecomposable qualia to trajectories through EIM space instantiated in the thalamocortical system seems to me no different in principle from describing the experience of Mozart’s Piano Concerto No. 21 by pointing to the score. Unless, of course, there is no deeper understanding; unless it is turtles all the way down, and by specifying firing complexes of neuronal groups through epochs of time we have told all that we can tell about consciousness.

© 2005 Henri Montandon M.D.

An information integration theory of consciousness

Giulio Tononi in BMC Neuroscience 2004, 5:42

Abstract

BACKGROUND: Consciousness poses two main problems. The first is understanding the conditions that determine to what extent a system has conscious experience. For instance, why is our consciousness generated by certain parts of our brain, such as the thalamocortical system, and not by other parts, such as the cerebellum? And why are we conscious during wakefulness and much less so during dreamless sleep? The second problem is understanding the conditions that determine what kind of consciousness a system has. For example, why do specific parts of the brain contribute specific qualities to our conscious experience, such as vision and audition?

PRESENTATION OF THE HYPOTHESIS: This paper presents a theory about what consciousness is and how it can be measured. According to the theory, consciousness corresponds to the capacity of a system to integrate information. This claim is motivated by two key phenomenological properties of consciousness: differentiation – the availability of a very large number of conscious experiences; and integration – the unity of each such experience. The theory states that the quantity of consciousness available to a system can be measured as the Phi value of a complex of elements. Phi is the amount of causally effective information that can be integrated across the informational weakest link of a subset of elements. A complex is a subset of elements with Phi>0 that is not part of a subset of higher Phi. The theory also claims that the quality of consciousness is determined by the informational relationships among the elements of a complex, which are specified by the values of effective information among them. Finally, each particular conscious experience is specified by the value, at any given time, of the variables mediating informational interactions among the elements of a complex.

TESTING THE HYPOTHESIS: The information integration theory accounts, in a principled manner, for several neurobiological observations concerning consciousness. As shown here, these include the association of consciousness with certain neural systems rather than with others; the fact that neural processes underlying consciousness can influence or be influenced by neural processes that remain unconscious; the reduction of consciousness during dreamless sleep and generalized seizures; and the time requirements on neural interactions that support consciousness.

IMPLICATIONS OF THE HYPOTHESIS: The theory entails that consciousness is a fundamental quantity, that it is graded, that it is present in infants and animals, and that it should be possible to build conscious artifacts.

Full article (HTML) and PDF.

End Notes

(1) ”Brain theory” in the sense articulated by Arbib: “Brain theory is centered on “computational neuroscience,” the use of computational techniques to model biological neural networks, but also includes attempts to understand the brain and its function through a variety of theoretical constructs and computer analogies.”

(2) A system with a finite number of elements has a finite number of interactions, even if that number is exponentially large. If the elements are finite and distinguishable, and an element can interact with only one other element at a time (including itself), then the number of possible ordered interactions is N², where N is the number of elements.

For example, given a system with elements A and B, the possible interactions are:

  • A → B (for N = 2: 2² = 4)
  • B → A
  • A → A
  • B → B

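This enumeration of ordered sender/receiver pairs can be generated mechanically; the following sketch (mine, for illustration) reproduces the four interactions of the A/B example:

```python
import itertools

# Ordered interactions among distinguishable elements, self-interaction
# included: N choices of sender times N choices of receiver, i.e. N**2 pairs.
def ordered_interactions(elements):
    return list(itertools.product(elements, repeat=2))

pairs = ordered_interactions(["A", "B"])
print(pairs)  # [('A', 'A'), ('A', 'B'), ('B', 'A'), ('B', 'B')]
```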
But what of systems with very large numbers of elements? In a paper published in 2001, Mike Holderness analyzes the number of possible states of which a human brain is capable. The number he finds, 10^(7 x 10^13), or, written out, a 1 followed by 70,000,000,000,000 zeros, is called the Holderness number.

(3) The recipe for the calculation of Φ is as follows:

  1. Create a set X of n elements. The elements could be any number of anything at all, but for our purposes it is helpful to think of them as neurons or neuronal columns.
  2. Pick out 4 elements from X and call this subset S.
  3. Partition subset S into two halves, A and B. For example, let partition A contain elements 1 and 3 and partition B contain elements 2 and 4 (notationally: {1,3}/{2,4}). Partitioning S in effect makes one partition the “observer” and the other the “observed”, or, in information-theoretic terms, one partition the sender and the other the receiver. This is what makes the Φ metric “observer independent”: the only information counted is in the states of the system able to be discriminated within the system.
  4. Use mutual information (MI) to measure the amount of information shared between input elements and output elements (in the spirit of Shannon’s noisy-channel coding theorem). The variables of elements 1, 2, 3, and 4 must be discrete random variables, and the standard formula applies: I(A:B) = H(A) – H(A|B) = H(B) – H(B|A).
  5. Define a “direction of effect” among the four elements. In Tononi’s discussion, 1 affects 2 (unidirectional), while 3 and 4 affect each other (bidirectional). Keep in mind that there are other, unspecified connections from elements in X to elements in S.
  6. Generalize MI to effective information (EI): let the elements in A give out a maximally random signal and measure the response of the elements in B; then let the elements in B give out a random signal and measure the response of the elements in A. The EI across the bipartition is the sum of the two directed values: EI(A⇄B) = EI(A→B) + EI(B→A).
  7. Calculate the EI value for every bipartition of S. For the bipartition {1,3}/{2,4}, EI is positive. For the bipartition {1,2}/{3,4}, EI = 0; this is the minimum information bipartition (MIB) of S, and Φ(S) is defined as the EI across the MIB. All other bipartitions of S have EI > 0.
  8. Subsets with Φ > 0 that are not included in a subset of higher Φ are called complexes, and the complex with the highest Φ is called the main complex. Complexes are the only structures of X which can be said to integrate information.
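The recipe can be sketched in miniature. The code below is a deliberate simplification: it uses plain mutual information computed on a hand-built stationary joint distribution, rather than Tononi's perturbational effective information (which requires injecting maximum-entropy signals into a causal model), and the toy distributions are invented for illustration. It does, however, implement the core move: take the minimum information across all bipartitions.

```python
import itertools
import math

def entropy(dist):
    # Shannon entropy (bits) of a {state: probability} table
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def marginal(joint, idx):
    # marginal distribution over the element positions listed in idx
    m = {}
    for state, p in joint.items():
        key = tuple(state[i] for i in idx)
        m[key] = m.get(key, 0.0) + p
    return m

def mutual_info(joint, part_a, part_b):
    # I(A:B) = H(A) + H(B) - H(A,B); assumes part_a and part_b
    # together cover all elements of the joint distribution
    return (entropy(marginal(joint, part_a))
            + entropy(marginal(joint, part_b)) - entropy(joint))

def phi(joint, n):
    # information across the minimum information bipartition (MIB)
    best = math.inf
    for r in range(1, n // 2 + 1):
        for a in itertools.combinations(range(n), r):
            b = tuple(e for e in range(n) if e not in a)
            best = min(best, mutual_info(joint, a, b))
    return best

# Toy subset S of 4 binary elements: elements 0 and 1 always agree (coupled),
# while elements 2 and 3 are independent coin flips.
joint = {}
for x, y, z in itertools.product([0, 1], repeat=3):
    joint[(x, x, y, z)] = 1 / 8

print(phi(joint, 4))  # 0.0 -- the MIB cuts off an independent element

# Contrast: two perfectly coupled elements share 1 bit across the only cut.
joint2 = {(0, 0): 0.5, (1, 1): 0.5}
print(phi(joint2, 2))  # 1.0
```

As in the worked example above, a subset containing informationally disconnected elements gets Φ = 0, because the cheapest bipartition simply severs an independent element; only a subset whose every bipartition carries information scores Φ > 0 and can qualify as a complex.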

(4) Perhaps the most renowned victim of fugue is the Reverend Mr. Ansel Bourne, whose story William James chronicles in The Principles of Psychology.

            On January 17, 1887, he drew 551 dollars from a bank in Providence with which to pay for a certain lot of land in Greene, paid certain bills, and got into a Pawtucket horsecar. This is the last incident which he remembers. He did not return home that day, and nothing was heard of him for two months. On the morning of March 14th, however, at Norristown, Pennsylvania, a man calling himself A. J. Brown, who had rented a small shop six weeks previously, stocked it with stationery, confectionery, fruit and small articles, and carried on his quiet trade without seeming to any one unnatural or eccentric, woke up in a fright and called the people of the house to tell him where he was. He said that his name was Ansel Bourne, that he was entirely ignorant of Norristown, that he knew nothing of shopkeeping, and that the last thing he remembered—it seemed only yesterday—was drawing the money from the bank, etc., in Providence. He would not believe that two months had elapsed.



 


3 Comments


  1.  

    Wallace Stevens’ 26-syllable summary says nothing other than that Consciousness is expressing itself in the flight of the blackbird, the shadow of the blackbird, and the mood. Consciousness created all of these things. Consciousness created the brain of the blackbird.




  2.  

    The correct equation is I(A:B) = H(A) – H(A|B) = H(B) – H(B|A)




