What Is the Integrated Information Theory or IIT?

Leonard Kelley holds a bachelor's in physics with a minor in mathematics. He loves the academic world and strives to constantly explore it.

Is Information the Key to the Mystery of Consciousness?

Information is a powerful thing, but is it the key to the mystery of consciousness? It's possible, according to Integrated Information Theory (IIT), pioneered by Giulio Tononi (University of Wisconsin-Madison).

He was hoping to understand why certain pieces of the brain give rise to conscious experiences while others don’t, as well as how the brain gives sensory data the quality we subjectively assign it. Math has a good track record with the natural sciences, so Tononi wanted to see if it was the key to consciousness too (Koch, Brooks 40-1, Tononi, Barrett).

He started with two central tenets, or axioms, to develop his theory. The first is that conscious states have a great deal of variety. The second is that the information in these states is integrated: it is tied to different pieces and cannot be broken down once in the mind. With a breakdown of connections, like when we fall asleep or are placed under anesthesia, consciousness disappears. So, when the mind is fragmented, we lose consciousness (Ibid).

So, if one is to be conscious then “you need to be a single, integrated entity with a large repertoire of highly differentiated states.” This would explain why something like a computer hard drive isn’t conscious, for while it does have a lot of data, it isn’t integrated very well but is instead a bunch of discrete bundles. Increasing the level of integration leads to a higher degree of consciousness. Decrease those connections, and there goes consciousness (Ibid).

To “denote the size of the conscious repertoire associated with any network of causally interacting parts”, we use the Greek letter Phi. It’s a way to see how a system integrates its information based on its structure, a correspondence to “the feedback between and interdependence of different parts of a system" (Koch, Brooks 42, Horgan, Tononi).

How that information is conveyed from one grouping to another goes into calculating Phi, and so we have to choose our complex carefully. That is because we need to compare all non-zero Phi values, as well as increasingly larger values of Phi as our structures grow in integrative ways (Ibid).

That is essentially the exclusion principle, where consciousness only occurs when we are at a maximum Phi. The degree of consciousness experienced for the whole must be greater than any single piece. This means a system can have many different processes going on at once, but only one cumulative Phi will be measured, and it will be maximal if done correctly (Brooks 42, Horgan, Tononi).

This seems to hint at a conscious structure having little mini brains within it, with the overarching nature of consciousness overriding them. That would explain why we don’t think of experiences as discrete little packets of data. Something can have lots of ways to collect data, but if it cannot be effective with that information then its Phi is rather low (Ibid).

Our brain exhibits great interdependence among the information it is fed, and so the quality of consciousness would be based on how information is related to the elements of our complex and how effective said information is. The actual conscious experience will reflect this (Ibid).

Bring in the Math

So, consciousness needs to somehow account for this integrative ability over the largest set of independent information states. You could just argue that entropy would provide the maximum states allowable, but entropy doesn’t consider the integration of the information, just the number of available states. Instead, we need to examine the number of independent states of some basic unit for which information cannot be integrated. Then we could build from these elements to create something that could integrate information (Tononi).

For our purposes, neural groupings could be a basic state, with different firings of them giving our states different values. Breaking up the neuron groupings into subsets, we can map out the relationships amongst the elements and see to what degree the information is integrated amongst all causally linked subsets (Ibid).

To do this, we split the subset into two parts, A and B, find all the B responses that can come from inputs of A, and see how varied those responses are across all the possible states generated by A. Simple, right? Well, you will have to test all possible splittings (Ibid).
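Enumerating "all possible splittings" can be sketched in a few lines of Python. This is only an illustrative helper, not part of IIT itself, and the function name `bipartitions` is my own:

```python
from itertools import combinations

def bipartitions(elements):
    """Yield every way of splitting a set of elements into two
    non-empty halves A and B, with each unordered pair listed once."""
    elements = list(elements)
    n = len(elements)
    for size_a in range(1, n // 2 + 1):
        for a in combinations(elements, size_a):
            b = tuple(e for e in elements if e not in a)
            # When the halves are equal-sized, skip the mirror image
            # so each split {A, B} appears exactly once.
            if size_a == n - size_a and a > b:
                continue
            yield a, b

# A four-element subset already admits 7 distinct splittings,
# and the count grows exponentially with subset size.
print(len(list(bipartitions("wxyz"))))  # 7
```

That exponential growth in splittings is one reason, discussed later, why Phi is so hard to compute for large systems.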

Let’s delve into some definitions, then, to clarify this process. We denote A^Hmax as the outputs from A at maximum entropy, and by using random noise as the firings from A we can find the subsequent entropy of B given A. EI(A->B) is the effective information between A and B, and to measure it we note that EI(A->B) = MI(A^Hmax; B), where MI(A;B) is the mutual information between A and B. Therefore, MI(A^Hmax; B) would be “a measure of the entropy or information shared between a source A and a target B" (Ibid).

In our case, because A will be independent noise data, nothing causal goes from B to A, only from A to B. Also note that EI(A->B) is an indication of “all possible effects of A on B” and doesn’t necessarily reflect what would happen if everything were operating under normal conditions. It’s about potential states as a maximum boundary, with EI(A->B) being “bounded by A^Hmax or B^Hmax, whichever is less.” If the links between A and B are “strong and specialized”, then EI(A->B) will be high, because our A inputs can produce a wide variety of B outputs. But if A has little to no effect on B, then EI(A->B) will be small (Ibid).
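A toy calculation makes the one-directional effective information concrete. This is a simplified sketch assuming a deterministic mapping from A-states to B-states; in that special case, injecting maximum-entropy noise into A reduces EI(A->B) to the entropy of B's responses. The function names are hypothetical:

```python
from collections import Counter
from math import log2

def entropy(counts):
    """Shannon entropy (bits) of an empirical distribution."""
    total = sum(counts.values())
    return sum(-(c / total) * log2(c / total) for c in counts.values())

def effective_information(mapping, a_states):
    """EI(A->B) = MI(A^Hmax; B): drive A with every one of its states
    equally often (maximum entropy) and measure how much of that
    variety survives in B's responses. For a deterministic `mapping`
    (a hypothetical A-state -> B-state function), this reduces to
    the entropy of B's outputs."""
    b_counts = Counter(mapping(a) for a in a_states)
    return entropy(b_counts)

# Strong, specialized links: each A state yields a distinct B state.
print(effective_information(lambda a: a, [0, 1, 2, 3]))  # 2.0 bits
# No effect of A on B at all: the effective information vanishes.
print(effective_information(lambda a: 0, [0, 1, 2, 3]))  # 0.0 bits
```

Note how the result is bounded by A's maximum entropy (2 bits for four equally likely states), just as the definition requires.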

Do this same process with B and A flipped and you arrive at the bipartition value EI(A<->B) = EI(A->B) + EI(B->A). That bipartition interaction is key to seeing the level of integration between the elements. If EI(A<->B) = 0, then we haven’t split the set into the right subsets to get the true integrative picture, because it would imply all the links back and forth use information the same way (which means we haven’t specialized our splitting well enough). If we search for the split of our set into A and B such that EI(A<->B) is at its lowest non-zero value, we have found the integrating capacity for our set (Ibid).

This leads to MIB(A<->B), the minimum information bipartition of our subset, which “is the bipartition for which the normalized effective information reaches a minimum, corresponding to min{EI(A<->B)/Hmax(A<->B)}.” So either the effective information is small or the maximum entropy is large if we want a minimal MIB (Ibid).

Finally, we have reached the formal definition of Phi(S), the information integration, which “is simply the (non-normalized) value of EI(A<->B) for the minimum information bipartition”, or Phi(S) = EI(MIB(A<->B)). By finding the smallest amount of information that can be integrated across the bipartitions, you gain a measure of the complexity of the system, and from there hopefully can make conclusions about consciousness (Ibid).
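Putting the pieces together, the search for the minimum information bipartition might look like the sketch below. It assumes caller-supplied `ei` and `h_max` functions (hypothetical stand-ins for the real calculations) and normalizes by the smaller half's maximum entropy, one common reading of the normalization term:

```python
from itertools import combinations

def phi(elements, ei, h_max):
    """Sketch of Phi(S) = EI(MIB(A<->B)). `ei(a, b)` should return
    EI(A<->B) = EI(A->B) + EI(B->A) for a bipartition, and
    `h_max(part)` that part's maximum entropy in bits; both are
    hypothetical stand-ins for the real calculations. The MIB is the
    split minimizing normalized EI; Phi is the raw EI at that split."""
    elements = list(elements)
    best_score, best_ei = None, None
    for k in range(1, len(elements) // 2 + 1):
        for a in combinations(elements, k):
            b = tuple(e for e in elements if e not in a)
            # Normalize by the smaller half's maximum entropy.
            score = ei(a, b) / min(h_max(a), h_max(b))
            if best_score is None or score < best_score:
                best_score, best_ei = score, ei(a, b)
    return best_ei

# Toy case: four elements where each split's EI equals the smaller
# half's size in bits, so a 1-vs-3 split serves as the MIB and Phi = 1.
print(phi("wxyz", ei=lambda a, b: min(len(a), len(b)),
          h_max=lambda part: float(len(part))))  # 1
```

The normalization step matters: without it, lopsided splits with one tiny part would almost always win, hiding the genuine weakest link in the system.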

But how do you find the actual subsets that are integrating the information? And how much information is it? To find this, “we consider every possible subset S of m elements out of the n elements of a system, starting with subsets of m=2 and ending with a subset corresponding to the entire system m=n.” We find Phi for each of these and then rank them based on that value. Finally, we get rid of any subsets that are included in larger subsets with higher Phi values (Ibid).

What remains are our complexes, or “individual entities that can integrate information.” If many complexes are present, we label the one with the greatest Phi value the main complex. Things outside the complex that connect to it are known as port-ins and port-outs. Also, elements can belong to many complexes, which themselves can overlap and yet remain distinct for their integrative ability (Ibid).
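The subset-ranking procedure described above can also be sketched. Again the Phi calculation itself is supplied by the caller as a hypothetical stand-in, and `find_complexes` is my own name for the routine:

```python
from itertools import combinations

def find_complexes(elements, phi_of):
    """Sketch of the complex search: score every subset S of sizes
    m = 2..n with Phi, then discard any subset strictly contained in
    a larger subset with a higher Phi. `phi_of` is a hypothetical
    caller-supplied Phi function for a subset."""
    subsets = [frozenset(s)
               for m in range(2, len(elements) + 1)
               for s in combinations(elements, m)]
    scored = {s: phi_of(s) for s in subsets}
    complexes = [s for s in subsets
                 if not any(s < t and scored[t] > scored[s]
                            for t in subsets)]
    # The main complex is the surviving subset with the greatest Phi.
    main = max(complexes, key=lambda s: scored[s])
    return complexes, main

# Toy Phi that grows with subset size: every proper subset is then
# absorbed, leaving the whole system as the single (main) complex.
cx, main = find_complexes("xyz", phi_of=len)
print(sorted(main))  # ['x', 'y', 'z']
```

With a less uniform Phi function, several non-overlapping or overlapping complexes can survive the pruning, matching the picture of distinct integrative entities sharing elements.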

Mysteries Solved

If IIT is to gain any headway toward acceptance, it first of all needs to explain some mysterious features of the brain, and it offers some potential solutions. One of these is why the cerebellum has more neurons than the cerebral cortex yet isn’t critical to consciousness. Neuron count should equate to levels of consciousness, right? (Koch, Brooks 42, Tononi)

Well, IIT would show how the neurons in the cerebellum are not integrated as a whole network and so will have a low Phi value, leading to low levels of consciousness. Areas of the brain with small numbers of highly grouped neurons should, if damaged, severely impact consciousness, and research has shown this to be true (Ibid).

Take for example the thalamocortical network. Defects and damage to the thalamocortical areas hinder consciousness, despite much conscious activity having been spotted over several cortical areas. This is because the thalamocortical area of the brain is well designed to be an IIT complex (Tononi).

With respect to integrated information, that brain area “comprises a large number of elements that are functionally specialized, becoming activated in different circumstances” and “are linked by an extended network of intra- and inter-areal connections that permit rapid and effective interactions within and between areas.” This means that neuron groupings in this area of the brain are poised to impose changes on distant, indirectly connected areas (Ibid).

What about dreams? They are oftentimes hard to remember yet involve our senses, matching brain activity as if we were awake. But the brain activity has “slow, large, and highly synchronized waves” during our rest, so the information is broken up as it travels across the brain and so isn’t integrated well (Koch, Brooks 43, Tononi).

This may also tie into how during anesthesia we lose consciousness. While it too involves a lack of integrated information because of disruptions, in this case it ties into the thalamus’ reduced firing pattern and the shutdown of the middle and parietal cortical regions. This all leads to disruptions “in large scale functional integration in the corticothalamic complex” (Ibid).

Panpsychism?

With all this talk of Phi, it has been noted how systems with different levels of integration exist. This implies that some elements of our world are more conscious than we initially thought. Animals would now have different levels of consciousness…but so could things as small as subatomic particles (Koch, Brooks 40, Horgan, Azarian, Tononi, Barrett).

Consciousness may not be an all-or-none property but graded based on the Phi present. This can be seen as a weakness of the theory…unless you subscribe to panpsychism, or the idea that consciousness is a fundamental element of reality. Form wouldn’t really be as important as the information itself that is contained and integrated (Ibid).

The reason there are not more consciously aware constructs in the world would be the limited ability of the information to be integrated in the system. In that regard, the form would be important, but it means nothing if the contents are meaningless. It’s all about “the potential differentiation of a system’s responses to all possible perturbations, yet it is undeniably actual” (Ibid).

Future Horizons

IIT offers many solutions to long-standing conflicts in science and philosophy, but it still needs work to tackle other issues. Does natural selection somehow favor developing creatures with higher Phi? Is it because higher intelligence allows for considerations which are evolutionarily advantageous, or is it really as simple as survival of the fittest? (Koch, Brooks 43, Azarian, Barrett)

Another issue is the practicality of IIT, for while it is theoretically promising, in actual practice it is near impossible to apply. We cannot currently calculate Phi beyond very simple systems, much less for any complex lifeform. A further challenge is incorporating unconscious processes into IIT and seeing where their place is. And most of all, Phi is merely a tool for measuring consciousness, but does it really explain it? Does it offer how that information becomes an experience? (Ibid)

Field Integrated Information Hypothesis

A possible solution to these issues is to refine IIT using the common field theories of physics and rebrand it as the field integrated information hypothesis, or FIIH. Proposed by Adam Barrett, the FIIH conjectures that “consciousness arises from information intrinsic to fundamental fields, and propose[s] that, to move IIT forward, what is needed is a measure of intrinsic information applicable to the configuration of a continuous field" (Barrett).

This rationale arises from a field, or “an abstract mathematical entity, which assigns a mathematical object to every point in space and time”, being the basic building block of reality. Each particle operates under a field, with fundamental particles having their own special fields. While there are matter particles and force-carrying particles, each has an associated field to go with it, and “all the forces of nature can be described by field theories which model interactions between fields.” Fields, fields, fields! So, if we want to make consciousness a fundamental property, it needs a field theory or some quantum particle. Based on the electrical activity in the brain, those fields would seem a good starting point (Ibid).

But can other fields still be responsible? And why would we even wonder if that is a possibility? Well, at one point in the universe, all the fields were one. If consciousness is a fundamental, it should have been there from the start (Ibid).

More likely than not, consciousness is EM-derived “for reasons to do with the physics and chemistry of the electromagnetic field compared with other fields.” The strong and weak nuclear forces only act over very small distances, at the atomic scale, for which we have no current evidence of conscious activity (Ibid).

Gravity fields are also removed by virtue of lacking the complex structuring that would be needed for IIT to operate. But EM fields have large-scale properties that make them suitable to our needs; plus, the EM force can be “both repulsive and attractive, and is fundamentally what enables non-trivial chemistry and biology” (Ibid).

Once we know we are working with EM fields, Barrett revamps IIT as “consciousness arising from information intrinsic to the configuration of a fundamental field.” If said field has a lot of this intrinsic integrated information, “mathematically there is a high-dimensional informational structure associated with it”, and its geometrical features determine the “contents of consciousness" (Ibid).

To fit in with relativity, we need our fields to be independent of frames of reference, so conventional Phi won’t do. This is because it is based on discrete elements in a system and not on continuous fields. Instead, we are seeking a formula which “in theory could be applied universally to explore the intrinsic information in any patch of spacetime, without requiring an observer to do any modeling" (Ibid).

Hence why the discreteness of IIT won’t do, because we could always zoom in and get a smaller scale (thankfully, once at the Planck scale there is no further complexity for IIT to be applied to, so the continuous approach covers all relevant scales). This isn’t to say we can’t use discreteness to aid us. We should, using discrete observations to build an approximation of a continuous solution (Ibid).

Through this field approach, we can marry together consciousness with physics, potentially opening the door for new, brave unknowns…

Works Cited

• Azarian, Bobby. “Neuroscience’s New Consciousness theory is Spiritual.” Huffingtonpost.com. Oath, Inc. 21 Sept. 2015. Web. 18 Nov. 2020.
• Barrett, Adam B. “An Integration of integrated information theory with fundamental physics.” Front. Psychol., 04 February 2014 | https://doi.org/10.3389/fpsyg.2014.00063. Web. 06 Apr. 2021.
• Brooks, Michael. “There. There. Everywhere?” New Scientist. New Scientist Ltd., 02 May 2020. Print. 40-3.
• Horgan, John. “Can Integrated Information Theory Explain Consciousness?” Scientificamerican.com. Nature America, Inc. 01 Dec. 2015. Web. 18 Nov. 2020.
• Koch, Christof. “A ‘Complex’ Theory of Consciousness.” Scientificamerican.com. Springer Nature America, Inc. 01 Jul. 2009. Web. 09 Aug. 2020.
• Tononi, Giulio. “An information integration theory of consciousness.” BMC Neuroscience 2004, 5:42 doi:10.1186/1471-2202-5-42.
