Emergence and its relation to consciousness have been discussed in minute detail by many analytic philosophers. So it’s always interesting to read a physicist’s point of view on this subject. The (former) physicist in this case is James Trefil. Before arriving at consciousness, however, Trefil first discusses the addition of a single grain of sand to a sandpile, and how that can cause an avalanche. That’s deemed to be an emergent property of the sandpile. In order to make sense of all this, though, we need to make the important distinction between epistemic emergence and ontological emergence.

It’s not surprising that a scientist like James Trefil believes that emergence is no problem for science. Indeed, he believes that
“it is important to realize that by posing the question in terms of emergent properties, we are avoiding the necessity of having to go outside the realm of science to find an answer”.
So not only is emergence deemed not to be a problem for science: posing the question in terms of emergent properties actually helps science find answers. Where else would we go anyway? Even if there are new laws or properties to be discovered, it would still be science, not philosophy or religion, which discovered them.
Epistemic Emergence
One important distinction which has to be made here is between epistemic emergence and ontological emergence. (This distinction more or less captures the distinction made between weak emergence and strong emergence.) In basic terms, some systems are so complex that they display (or instantiate) properties which go beyond our present understanding. This is how Trefil puts it:
“[I]t might turn out that when you put a sufficiently complex system together, you will be unable to predict what its properties are in practice because the connection between the individual parts and the final behaviour is too complicated to know.”
This is a case of epistemic emergence. It’s also interesting to note that Trefil was discussing an artificial, rather than a biological, system. Yet this may be a difference that doesn’t matter too much in this particular debate. Trefil also focusses on prediction, in that even an artificial system may display properties that its creators didn’t foresee. (In Trefil’s own words, “this one speaks primarily to the question of whether we could understand a complex machine once it was built”.) This may seem intuitively odd to many people, who might expect the creators to know in advance all of their system’s properties and behaviours. (These details may point to Large Language Models and AI entities generally displaying emergent properties and behaviours.)
The reproductive biologist Jack Cohen and the mathematician Ian Stewart broach the same subject of epistemic emergence from a slightly different angle. Their point is that reductionism may fail in certain cases. Cohen and Stewart (as quoted by Trefil) write:
“‘If we wish to use reductionist rules to explain and understand high level structures, then we have to follow [a] chain of deduction. If that chain becomes too long, our brains begin to lose track of the explanation, and it ceases to be one. This is how emergent phenomena arise.’”
Prima facie, it seems that Cohen and Stewart are talking about pure logic when they speak in terms of having to “follow a chain of deduction”. Surely there must be more to it than that.
Anyway, the epistemic angle is well characterised when they say that “[i]f that chain becomes too long, our brains begin to lose track of the explanation”. So nothing weird or spooky is happening here. It’s essentially about our cognitive limitations and why they lead to a lack of understanding. In that sense, new phenomena emerge as a result of our cognitive limitations.
All this raises an interesting possibility which Trefil himself latches onto. What about what Trefil calls a “Divine Calculator”? This would be “a being with sufficient calculational abilities” to tell us everything about the system deemed to have emergent properties. Does this assume it would all (or only) be a matter of mathematical calculation? (Earlier it was hinted that Cohen and Stewart saw it all in terms of deduction.)
A Grain of Sand, A Sandpile and an Avalanche
Does a sandpile instantiate emergent properties?
Yes!
This is Trefil on that subject:
“The more sand grains you pile on, the more complex the web of forces becomes. Eventually, you add one more grain of sand and an avalanche flows down the side of the pile. In other words, the avalanche is a behavior that shows up only when the web of forces reaches a certain level of complexity. If you have to have a million grains of sand before you see an avalanche, you don’t get one-millionth of an avalanche in a single grain of sand.”
It may seem odd to some readers that a sandpile can display emergent properties. These properties are only epistemically emergent — so not really odd at all. In this case, the addition of a single grain of sand to a sandpile causes the avalanche. This is what Trefil calls a “discontinuous change”. It’s discontinuous because the addition of each grain of sand before the final one didn’t cause an avalanche. This may seem mildly contradictory: on the one hand, Trefil stresses complexity; on the other, a single grain of sand makes all the difference. However, that single grain of sand was added to the already-existing complexity (or “web of forces”) of the sandpile.
The avalanche can be summed up this way:
one added grain of sand + the complexity of the sandpile = avalanche
Trefil states that “this pattern of successive discontinuous changes is common in natural systems”. He then offers readers another example of discontinuous change. Trefil cites the “many steps between smooth flow and full turbulence in the flow of water”.
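Trefil’s sandpile has a well-known formal counterpart in physics: the Bak–Tang–Wiesenfeld sandpile model of self-organized criticality. The sketch below is not Trefil’s own model but a minimal one-dimensional variant of that idea (the pile size, toppling threshold and grain count are illustrative choices). Grains are dropped one at a time, and the size of the avalanche each grain triggers is recorded:

```python
import random

def drop_grains(n_grains, size=20, threshold=2, seed=0):
    """Drop grains one at a time and record the avalanche each one triggers.

    A site holding `threshold` grains topples: it passes one grain to each
    neighbour, which can make those sites topple in turn. Grains that fall
    off either end of the pile leave the system.
    """
    random.seed(seed)
    pile = [0] * size
    avalanche_sizes = []
    for _ in range(n_grains):
        pile[random.randrange(size)] += 1       # add a single grain
        topplings = 0
        unstable = [i for i in range(size) if pile[i] >= threshold]
        while unstable:
            i = unstable.pop()
            if pile[i] < threshold:
                continue
            pile[i] -= threshold
            topplings += 1
            if pile[i] >= threshold:            # may need to topple again
                unstable.append(i)
            for j in (i - 1, i + 1):
                if 0 <= j < size:               # edge grains are lost
                    pile[j] += 1
                    if pile[j] >= threshold:
                        unstable.append(j)
        avalanche_sizes.append(topplings)
    return avalanche_sizes

sizes = drop_grains(500)
```

Most added grains cause no toppling at all; occasionally one grain, landing on an already-critical “web of forces”, sets off a cascade — exactly Trefil’s discontinuous change.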
Consciousness and Emergence
This essay asks about whether consciousness is emergent. Trefil takes an incredibly simple position on this. To him, it’s all about complexity. More relevantly, the more complex the brain became (in evolutionary terms), the more emergent properties arose. In Trefil’s own words:
“[I]f we build a collection of neurons, adding one neuron at a time, the system will go through a series of discontinuous jumps, each jump corresponding to a new kind of emergent property — a new ‘avalanche’ — characteristic of its new level of complexity.”
Put so simply, it’s hard to know why a process of simple addition would have such results. However, it’s not actually that simple, because neurons are in and of themselves complex entities. Added to that are the myriad interrelations between neurons, as well as the importance of neurochemistry and brain structure. Thus, it’s not simply about 1, then 1 + 1, then 1 + 1 + 1… We aren’t actually talking about numbers: we’re talking about neurons.
Now let’s compare a grain of sand to a neuron at an even lower level. In terms of a single neuron, it “can generate an action potential, of course, but in the absence of other neurons there is nothing to which that potential can be communicated”. A single grain of sand is even more basic than that… Yet it caused an avalanche.
All this leads to a more relevant conclusion. Trefil continues:
“The kinds of phenomena we refer to as consciousness and intelligence, in this picture, correspond to emergent properties at the higher levels of the cascade.”
The philosopher Mark A. Bedau has said that “the notion of weak emergence is metaphysically benign”. On the other hand, he puts the argument for being suspicious of strong emergence when he writes the following:
“Although strong emergence is logically possible, it is uncomfortably like magic. How does an irreducible but supervenient downward causal power arise, since by definition it cannot be due to the aggregation of the micro-level potentialities? Such causal powers would be quite unlike anything within our scientific ken. This not only indicates how they will discomfort reasonable forms of materialism. Their mysteriousness will only heighten the traditional worry that emergence entails illegitimately getting something from nothing.”
Let’s remind ourselves that weak emergence is doing the work of epistemic emergence, and strong emergence is doing the work of ontological emergence.
Since Bedau is a philosopher rather than a physicist, it’s perhaps not surprising that he spots a problem which Trefil doesn’t even mention: downward causation. In terms of Trefil’s position, how do his emergent properties causally impact lower-level properties? However, since Trefil’s position is only one of epistemic emergence, there’s no reason to believe that higher-level properties can’t causally impact lower-level properties. This is only a problem if the higher-level properties are deemed to be “irreducible but supervenient”. Does that mean that they’re ontologically emergent too? Are they epistemically or ontologically irreducible? And what about supervenience? Doesn’t this notion hint even more strongly at ontological emergence?
Trefil ties his position on emergence and discontinuous changes to evolution’s role in the creation of consciousness and higher-level mental functions. He writes:
“If the appearance of Homo erectus corresponded to the collection of neurons we call the brain reaching the point where new emergent properties became evident, we can understand how such a sudden change could have occurred.”
Many people stress that evolution is a very slow process. Yet that’s not always the case. In the case of consciousness and other properties of human persons, it might well have been a case of discontinuous changes — sudden changes. How sudden is sudden? Well, in the case of the final grain of sand mentioned earlier, the change that was an avalanche was very sudden. So does this example carry over to evolution and the rise of consciousness?
Panpsychism and Complexity
One way of assessing the claim that emergence is a result of complexity is to consider the philosophical position of panpsychism. In panpsychism, consciousness (or experience) has little to do with complexity. That’s because panpsychists believe that consciousness exists all the way down the line. If everything is conscious to some degree, then complexity isn’t an issue at all. Indeed, the philosopher David Chalmers makes this explicit when he plays up simplicity and plays down complexity. For example, Chalmers writes that
“one wonders how relevant this whiff of complexity will ultimately be to the arguments about consciousness”.
Chalmers goes further when he says that
“[o]nce a model with five units, say, is to be regarded as a model of consciousness, surely a model with one unit will also yield some insight”.
Is one “unit” one neuron? One molecule? One atom?… One transistor?
Chalmers also makes what seems to be an obvious point (at least it seems obvious if one already accepts his information/experience link). He writes:
“Surely, somewhere on the continuum between systems with rich and complex conscious experience and systems with no experience at all, there are systems with simple conscious experience. A model with superposition of information seems to be more than we need — why, after all, should not the simplest cases involve information experienced discretely?”
Yes, panpsychists attempt to get rid of the problem of emergence, at least as it applies to consciousness or experience. In Chalmers’ picture, we don’t have emergence because each unit (or simple case) instantiates experience to some degree. Despite that, panpsychism still allows for radical differences between systems with simple conscious experience and the conscious experience of human persons and animals. Yet if complexity isn’t important, then why do we have any differences whatsoever between simple conscious units and conscious complex systems? Is this another variation of the combination problem, or of what can be called the additive problem? After all, this is a case in which units of simple conscious experience are simply added together to create a complex conscious system.
Chalmers gives a biological (or “real life”) example of this phenomenon too when he writes the following:
“We might imagine a traumatized creature that is blind to every other distinction to which humans are normally sensitive, but which can still experience hot and cold. Despite the lack of superposition, this experience would still qualify as a phenomenology.”
What we have here is subtraction, rather than addition. This traumatized creature has had all its complex and/or varied conscious experiences destroyed to be left with only the ability to experience hot and cold. (We can work in the other direction and add units too.)
Despite all that, it does seem obvious that complexity matters. After all, many scientists and theorists have made a strong link between the complexity of the brain and consciousness. Chalmers himself acknowledges the (intuitive) appeal of complexity when he continues:
“After all, does it not seem that this rich superposition of information is an inessential element of consciousness?”
Chalmers then rejects this requirement for complexity.
If we return to the sandpile, we don’t need the individual grain of sand (or unit) to have (to use Trefil’s words) “one-millionth of an avalanche” somehow within it to make sense of the avalanche. All we need is complexity plus the addition of a single grain of sand. In that sense, the final grain of sand is just like all the others.
What about a single neuron?
To the panpsychist, one single neuron instantiates some degree of consciousness. (The “parts” of that single neuron do so too.) Thus, the addition of neurons to a brain never causes an avalanche, because consciousness was there all along.