
What Is the Causal Emergence Theory?

Leonard Kelley holds a bachelor's in physics with a minor in mathematics. He loves the academic world and strives to constantly explore it.

For me, one of the most amazing scientific concepts is emergence, where a new property or ability arises from a collection of objects or behaviors. The shape of mountains, the patterns of a snowflake, even the mind can all be considered emergent examples. But recent work stemming from integrated information theory (IIT) suggests that sometimes large-scale objects can themselves exert emergent causal influence upon small objects.

This causal emergence is still in its infancy and will develop in time, but its implications are staggering: the large-scale world may indeed have an impact on the small.

Reducing Elements

Physics has an excellent track record with the reductionist approach to science, in which we break the phenomena we wish to study into smaller and smaller pieces and then build up our knowledge from there. In this bottom-up picture, large-scale features crop up and create structures that demonstrate behavior the individual pieces do not.

It's key to how the world around us works. But it leaves something to be desired when it comes to us human beings. Are we really nothing more than the small pieces that build us up? Does a collection of atoms really make up that which is the perceived self?

Searching Through the Fog

Erik Hoel, a neuroscientist, doesn’t think so. In fact, he has developed a mathematical model that addresses “how consciousness and agency arise” from the collection of atoms that is us, and he calls it causal emergence. Co-developed with Larissa Albantakis and Giulio Tononi, the theory employs techniques from information theory to demonstrate that how we parse reality into pieces can lead to new causes emerging at the level of macroscopic objects.

These larger “coarse-grained macroscopic states of a physical system…can have more causal power over the system’s future than a more detailed, fine-grained description of the system possibly could.” This, of course, fits our intuition that we control more than merely what we are made of (Wolchover, Dewhurst).

Hoel and his team have been chipping away at this theory since 2013, with a major breakthrough occurring in May of 2017, when they made it more of a theory than an idea “by showing that macro scales gain causal power in exactly the same way, mathematically, that error-correcting codes increase the amount of information that can be sent over information channels.”

These codes work by making the uncertainty in the data as minimal as possible, and the argument is that macro-scale objects can do the same thing to their own causal structure, “strengthening causal relationships and making the system’s behavior more deterministic.” By enforcing our will over a system, we are making it more orderly! (Wolchover)
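To get a feel for the error-correcting side of the analogy, here is a minimal sketch in Python – purely illustrative, not Hoel’s actual construction – of the simplest such code: send each bit three times over a noisy channel and take a majority vote. The added redundancy makes the outcome far more reliable, just as a macroscale is argued to make a system’s behavior more deterministic.

```python
import random

def transmit(bit, flip_prob=0.1):
    """Send one bit through a noisy channel that flips it with probability flip_prob."""
    return bit ^ (random.random() < flip_prob)

def send_with_repetition(bit, n=3, flip_prob=0.1):
    """Repetition code: send the bit n times and take a majority vote."""
    received = [transmit(bit, flip_prob) for _ in range(n)]
    return int(sum(received) > n / 2)

random.seed(0)
trials = 100_000
raw_errors = sum(transmit(1) != 1 for _ in range(trials))
coded_errors = sum(send_with_repetition(1) != 1 for _ in range(trials))
print(raw_errors / trials)    # ~0.10: one bit in ten arrives corrupted
print(coded_errors / trials)  # ~0.03: the majority vote corrects most errors
```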

Building Up Confidence?

This is, of course, a very exciting prospect for neuroscientists everywhere, but there are also potential applications to other emergent behavior like superconductivity, topological phases of matter, bird flocking patterns, crystals, waves, and more. However, many question the results of the work because of the success of the reductionist route.

For most scientists, causes all arise from the basic building blocks of reality and “ripple out from there.” We need to build up in complexity, as it were. But if we were to have both bottom-up and top-down approaches, then how could you truly know what is impacting what? Won’t cause and effect get all jumbled up? That is why reductionist approaches invoke the exclusion argument, which says that “all causal power must originate at the micro level” (Ibid).

But this doesn’t mean reductionism is the easiest way to discuss agency. We would rather talk about cause and effect in macro-level terms because, well, often it’s a matter of accounting for processes and their chains of effects. Would we really want to go to the particle level to explain every single large-scale event? And how does one decide which microscale is sufficient? Or how much one thing can contribute to a large-scale event? (Ibid)

Ironically enough, this is also a critique of IIT, upon which causal emergence gains its foothold. IIT’s measure of a system’s capacity for integrated information depends on how you partition the system: you group its elements into two separate sets, feed noise into one, and see how the other responds. After running through all possible bipartitions like this, you get a feel for which partition is the most effective at reducing entropy.
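For the curious, the bookkeeping half of that procedure – listing every way of splitting a system into two groups – is easy to sketch in Python. The scoring half (feeding noise into one group and measuring the other’s response) appears below only as a hypothetical noise_response stand-in; IIT’s actual measure is considerably more involved.

```python
from itertools import combinations

def bipartitions(elements):
    """Yield every way to split `elements` into two non-empty groups."""
    n = len(elements)
    for k in range(1, n // 2 + 1):
        for group_a in combinations(elements, k):
            group_b = tuple(e for e in elements if e not in group_a)
            # At an even split, skip the mirror image so each pair appears once.
            if 2 * k == n and group_b < group_a:
                continue
            yield group_a, group_b

for a, b in bipartitions(["n1", "n2", "n3"]):
    print(a, b)  # ('n1',) ('n2', 'n3'), and so on

# Hypothetical scoring step (noise_response is a stand-in, not IIT's machinery):
# weakest = min(bipartitions(nodes), key=lambda ab: noise_response(*ab))
```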

Hoel and his team delved deeper into this scaling issue, hoping “to figure out which ensemble size of neurons might be associated with maximum integrated information – and thus, possibly, with conscious thoughts and decisions" (Wolchover, Dewhurst).


It became clear to them that if you want to show how consciousness arises at a macro level, then you have to find a way to quantify how much causal power a brain state has over its situation. Using causal calculus along with IIT, a new metric known as effective information (separate from the version established in IIT – I know, it can be confusing) was developed, indicating “how effectively a particular state influences the future state of a system.”

Using this effective information, it was possible to show that the causal power of neuron groups grows the more you treat them as macroscopic entities rather than as microscopic entities building up behavior. These possible states form a causal structure as they interact with each other, and the transition from one state to another can be modeled using transition probability matrices. In these, columns represent the current states and rows represent the possible next states; each entry is the probability of one state leading to another and so takes a value from 0 to 1.

The larger an entry’s value, the greater our confidence in that transition happening, and so the effective information is high. But differentiation is also an indicator of effective information: if every entry holds the same value, the effective information is 0, because everything has the same chance of happening. It’s a completely random setup (Wolchover, Dewhurst).
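Both of those claims are easy to check numerically. Below is a minimal sketch of the effective-information calculation as I understand it from the published work: intervene by setting the current state to the uniform, maximum-entropy distribution, then measure in bits how much each column (each cause) stands out from the average effect. The function name and details are my own paraphrase, not code from Hoel’s team.

```python
import numpy as np

def effective_information(tpm):
    """Effective information, in bits, of a transition probability matrix.

    Uses the layout described above: tpm[i, j] is the probability that
    current state j leads to next state i, so every column sums to 1.
    EI averages the KL divergence of each column from the mean column,
    which is the mutual information between cause and effect when the
    current state is set to the uniform distribution.
    """
    tpm = np.asarray(tpm, dtype=float)
    avg = tpm.mean(axis=1)  # average effect distribution under a uniform intervention
    ei = 0.0
    for j in range(tpm.shape[1]):
        col = tpm[:, j]
        mask = col > 0      # treat 0 * log(0) as 0
        ei += np.sum(col[mask] * np.log2(col[mask] / avg[mask]))
    return ei / tpm.shape[1]

# Every transition equally likely: EI = 0 bits, pure randomness.
print(effective_information(np.full((4, 4), 0.25)))  # 0.0

# A deterministic 2x2 identity matrix: EI = 1 bit, the maximum for two states.
print(effective_information(np.eye(2)))              # 1.0
```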

Depending on how we break up our system, we could have a huge matrix of many elements with low probabilities, or we could pare it down to a 2x2 identity matrix, whose effective information would be 1 bit. From the microscale viewpoint, each element’s transition to any given next state has a low probability because of the vast number of choices available, but if we group states into a macroscale event, then we have much better odds of hitting the grouping we have made.
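Here is that intuition in miniature, reusing the effective_information function from the sketch above. The toy system is my own, modeled on the kind of example in Hoel’s papers: three micro states wander randomly among themselves while a fourth stays put. Grouping the wanderers into one macro state yields exactly that 2x2 identity matrix, and its effective information beats the microscale’s.

```python
import numpy as np  # assumes effective_information() from the previous sketch

def coarse_grain(tpm, groups):
    """Build a macro TPM from a micro TPM (columns = current states, as above).

    `groups` lists the micro-state indices belonging to each macro state;
    micro states within a group are weighted uniformly.
    """
    m = len(groups)
    macro = np.zeros((m, m))
    for b, current in enumerate(groups):
        col = np.asarray(tpm, dtype=float)[:, current].mean(axis=1)
        for a, nxt in enumerate(groups):
            macro[a, b] = col[nxt].sum()
    return macro

# Micro states 0-2 hop randomly among themselves; state 3 stays put.
micro = np.array([
    [1/3, 1/3, 1/3, 0],
    [1/3, 1/3, 1/3, 0],
    [1/3, 1/3, 1/3, 0],
    [0,   0,   0,   1],
])
macro = coarse_grain(micro, [[0, 1, 2], [3]])  # the 2x2 identity matrix

print(effective_information(micro))  # ~0.81 bits
print(effective_information(macro))  # 1.0 bit: the macro view carries more causal power
```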

And with the right choice of macroscale, the effective information reaches its highest possible value, indicating that our system has the greatest causal power, “predicting future states in the most reliable, effective manner.” If you were to zoom in any further, you would lose sight of how the pieces of information relate to one another, essentially missing the forest for the trees (Wolchover, Dewhurst).

The error-correcting methods from earlier come into play here because certain processes clearly need to occur if we as humans are to function. The basic scale of neurons suffers from randomness as well as high built-in redundancy, and if we need a chain to fire a certain way, the reductionist method has trouble accounting for why it happens.

But if we apply causal emergence, then it benefits us to treat the system as a larger object, reducing the errors built into it. Certainly, this is much simpler than trying to determine the future state of every atom in your body (Wolchover, Dewhurst).

Future States

Currently, causal emergence remains just a theory, but tests are being devised to gauge its merit. Brain scans are being planned to see whether these macroscopic scales of causal emergence can be spotted. Some still object to causal emergence because it challenges proven reductionist methods, and they remain unsure whether it can be applied to neuronal functions.

Others question the use of effective information as an indicator of causality. It would seem to be one: if it measures high, then we know that how we feed inputs into one set greatly impacts the next, which points to causation (Wolchover, Dewhurst).

But could you argue that such a macro scaling has to somehow impact the micro, thereby muddying what is truly happening? Certainly, some macroscales are more useful to us; we don’t need such fine details to appreciate some of the things we see happening around us.

What we really need to do is establish when a macro-scale grouping predicts something that the micro scaling simply cannot account for, essentially demonstrating that a macro event impacts the micro without any way for the micro to explain it (Wolchover, Dewhurst).

And good luck with that!

Works Cited

Dewhurst, Joe. “Causal Emergence and Real Patterns.”

Wolchover, Natalie. “A Theory of Reality as More Than the Sum of Its Parts.” Quantamagazine.org. Quanta, 01 Jun. 2017. Web. 10 May 2021.


© 2022 Leonard Kelley
