Saturday, 4 April 2026

A Physicist on Consciousness and Emergence

 

Emergence and its relation to consciousness have been discussed in minute detail by many analytic philosophers. So it’s always interesting to read a physicist’s point of view on this subject. The (former) physicist in this case is James Trefil. However, in order to arrive at consciousness, Trefil first discusses the addition of a single grain of sand to a sandpile, and how that can cause an avalanche. That’s deemed to be an emergent property of the sandpile. In order to make sense of all this, though, we need to make the important distinction between epistemic emergence and ontological emergence.

Image by ChatGPT, under the prompts of the writer.

It’s not surprising that a scientist like James Trefil believes that emergence is no problem for science. Indeed, he believes that

“it is important to realize that by posing the question in terms of emergent properties, we are avoiding the necessity of having to go outside the realm of science to find an answer”.

So not only is emergence deemed not to be a problem for science: posing the question in terms of emergent properties actually helps science find answers. Where else would we go anyway? Even if there are new laws or properties to be discovered, it would still be science, not philosophy or religion, which discovered them.

Epistemic Emergence

One important distinction which has to be made here is between epistemic emergence and ontological emergence. (This distinction more or less captures the distinction made between weak emergence and strong emergence.) In basic terms, some systems are so complex that they display (or instantiate) properties which go beyond our present understanding. This is how Trefil puts it:

“[I]t might turn out that when you put a sufficiently complex system together, you will be unable to predict what its properties are in practice because the connection between the individual parts and the final behaviour is too complicated to know.”

This is a case of epistemic emergence. It’s also interesting to note that Trefil was discussing an artificial, rather than a biological, system. Yet this may be a difference that doesn’t matter too much in this particular debate. Trefil also focusses on prediction in that even an artificial system may display properties that its creators didn’t foresee. (In Trefil’s own words, “this one speaks primarily to the question of whether we could understand a complex machine once it was built”.) This may seem intuitively odd to many people in that they may expect the creators to know in advance all of their system’s properties and behaviours. (These details may point to Large Language Models and AI entities generally displaying emergent properties and behaviours.)

The reproductive biologist Jack Cohen and the mathematician Ian Stewart broach the same subject of epistemic emergence from a slightly different angle. Their point is that reductionism may fail in certain cases. Cohen and Stewart (as quoted by Trefil) write:

“‘If we wish to use reductionist rules to explain and understand high level structures, then we have to follow [a] chain of deduction. If that chain becomes too long, our brains begin to lose track of the explanation, and it ceases to be one. This is how emergent phenomena arise.’”

Prima facie, it seems that Cohen and Stewart are talking about pure logic when they speak in terms of having to “follow a chain of deduction”. Surely there must be more to it than that.

Anyway, the epistemic angle is well characterised when they say that “[i]f that chain becomes too long, our brains begin to lose track of the explanation”. So nothing weird or spooky is happening here. It’s essentially about our cognitive limitations and why they lead to a lack of understanding. In that sense, new phenomena emerge as a result of our cognitive limitations.

All this raises an interesting possibility which Trefil himself latches onto. What about what Trefil calls a “Divine Calculator”? This would be “a being with sufficient calculational abilities” to tell us everything about the system deemed to have emergent properties. Does this assume it would all (or only) be a matter of mathematical calculation? (Earlier it was hinted that Cohen and Stewart saw it all in terms of deduction.)

A Grain of Sand, A Sandpile and an Avalanche

Does a sandpile instantiate emergent properties?

Yes!

This is Trefil on that subject:

“The more sand grains you pile on, the more complex the web of forces becomes. Eventually, you add one more grain of sand and an avalanche flows down the side of the pile. In other words, the avalanche is a behavior that shows up only when the web of forces reaches a certain level of complexity. If you have to have a million grains of sand before you see an avalanche, you don’t get one-millionth of an avalanche in a single grain of sand.”

It may seem odd to some readers that a sandpile can display emergent properties. These properties are only epistemically emergent — so not really odd at all. In this case, the addition of a single grain of sand to a sandpile causes the avalanche. This is what Trefil calls a “discontinuous change”. It’s discontinuous because the grains of sand added before the final one didn’t cause an avalanche. This may seem mildly contradictory: on the one hand, Trefil stresses complexity; on the other hand, a single grain of sand makes all the difference. However, that single grain of sand was added to the already-existing complexity (or “web of forces”) of the sandpile.

The avalanche can be summed up this way:

one added grain of sand + the complexity of the sandpile = avalanche

Trefil states that “this pattern of successive discontinuous changes is common in natural systems”. He then offers readers another example of discontinuous change. Trefil cites the “many steps between smooth flow and full turbulence in the flow of water”.
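Trefil’s sandpile can be played with directly. The toy model below is the Bak–Tang–Wiesenfeld sandpile (my choice of model, not one Trefil names), in which each cell of a grid holds grains and topples onto its neighbours once it reaches a threshold. Grains are dropped one at a time; most drops do nothing, yet occasionally a single grain sets off a long cascade of topplings:

```python
import random

THRESHOLD = 4  # a cell topples once it holds this many grains (illustrative choice)

def topple(grid, size):
    """Relax the pile after one grain is added; return the number of topplings."""
    topples = 0
    unstable = True
    while unstable:
        unstable = False
        for i in range(size):
            for j in range(size):
                if grid[i][j] >= THRESHOLD:
                    grid[i][j] -= THRESHOLD
                    topples += 1
                    unstable = True
                    # Shed one grain to each neighbour; grains pushed past
                    # the edge simply fall off (open boundary).
                    for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                        if 0 <= ni < size and 0 <= nj < size:
                            grid[ni][nj] += 1
    return topples

def drop_grains(n_grains, size=10, seed=0):
    """Drop grains one at a time onto random cells; record each avalanche size."""
    random.seed(seed)
    grid = [[0] * size for _ in range(size)]
    avalanches = []
    for _ in range(n_grains):
        i, j = random.randrange(size), random.randrange(size)
        grid[i][j] += 1
        avalanches.append(topple(grid, size))
    return avalanches

avalanches = drop_grains(2000)
# Many single grains topple nothing at all; a few trigger long cascades.
print("largest avalanche:", max(avalanches))
print("grains causing no avalanche:", sum(a == 0 for a in avalanches))
```

The grid size, threshold and number of grains are arbitrary illustrative choices. The point is only that the size of an avalanche is not proportional to the single grain that triggers it — exactly Trefil’s “you don’t get one-millionth of an avalanche in a single grain of sand”.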

Consciousness and Emergence

This essay asks whether consciousness is emergent. Trefil takes an incredibly simple position on this. To him, it’s all about complexity. More specifically, the more complex the brain became (in evolutionary terms), the more emergent properties arose. In Trefil’s own words:

“[I]f we build a collection of neurons, adding one neuron at a time, the system will go through a series of discontinuous jumps, each jump corresponding to a new kind of emergent property — a new ‘avalanche’ — characteristic of its new level of complexity.”

Put so simply, it’s hard to know why a process of simple addition would have such results. However, it’s not actually that simple. That’s because neurons in and of themselves are complex entities. Added to that are the myriad interrelations between neurons, as well as the importance of neurochemistry and brain structure. Thus, it’s not simply about 1, then 1 + 1, then 1 + 1 + 1… We aren’t actually talking about numbers: we’re talking about neurons.
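Trefil’s picture of discontinuous jumps as units are added has a well-known mathematical cousin (this is my analogy, not Trefil’s example): in a random network, if we add links between units one at a time, the largest connected cluster stays tiny for a while and then grows abruptly once the number of links passes a threshold. This is the Erdős–Rényi giant-component transition, sketched here with a simple union-find structure:

```python
import random

class DisjointSet:
    """Union-find tracking connected clusters and the largest cluster size."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.largest = 1

    def find(self, x):
        # Path-halving: flatten the tree as we walk up it.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        self.largest = max(self.largest, self.size[ra])

def grow_network(n=2000, seed=1):
    """Add random links one at a time; record the largest-cluster fraction."""
    random.seed(seed)
    ds = DisjointSet(n)
    history = []
    for _ in range(2 * n):
        a, b = random.randrange(n), random.randrange(n)
        ds.union(a, b)
        history.append(ds.largest / n)
    return history

history = grow_network()
# Early on the largest cluster is a tiny fraction of the network;
# past roughly n/2 links a giant cluster emerges abruptly.
print("fraction after few links:", round(history[len(history) // 8], 3))
print("fraction at the end:", round(history[-1], 3))
```

Here the “units” are deliberately featureless, unlike neurons. The sketch only illustrates the abstract point that gradual, one-at-a-time addition can still yield a sudden, qualitative jump.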

Now let’s compare a grain of sand to a neuron at an even lower level. A single neuron “can generate an action potential, of course, but in the absence of other neurons there is nothing to which that potential can be communicated”. A single grain of sand is even more basic than that… Yet it caused an avalanche.

All this leads to a more relevant conclusion. Trefil continues:

“The kinds of phenomena we refer to as consciousness and intelligence, in this picture, correspond to emergent properties at the higher levels of the cascade.”

The philosopher Mark A. Bedau has said that “the notion of weak emergence is metaphysically benign”. On the other hand, he puts the argument for suspicion of strong emergence when he wrote the following:

“Although strong emergence is logically possible, it is uncomfortably like magic. How does an irreducible but supervenient downward causal power arise, since by definition it cannot be due to the aggregation of the micro-level potentialities? Such causal powers would be quite unlike anything within our scientific ken. This not only indicates how they will discomfort reasonable forms of materialism. Their mysteriousness will only heighten the traditional worry that emergence entails illegitimately getting something from nothing.”

Let’s remind ourselves that weak emergence is doing the work of epistemic emergence, and strong emergence is doing the work of ontological emergence.

Since Bedau is a philosopher rather than a physicist, it’s perhaps not surprising that he spots a problem which Trefil doesn’t even mention: downward causation. In terms of Trefil’s position, how do his emergent properties causally impact on lower-level properties? However, since Trefil’s position is only one of epistemic emergence, there’s no reason to believe that higher-level properties can’t causally impact on lower-level properties. This is only a problem if the higher-level properties are deemed to be “irreducible but supervenient”. Does that mean that they’re ontologically emergent too? Are they epistemically or ontologically irreducible? And what about supervenience? Doesn’t this notion hint even more strongly at ontological emergence?

Trefil ties his position on emergence and discontinuous changes to evolution’s role in the creation of consciousness and higher-level mental functions. He writes:

“If the appearance of Homo erectus corresponded to the collection of neurons we call the brain reaching the point where new emergent properties became evident, we can understand how such a sudden change could have occurred.”

Many people stress that evolution is a very slow process. Yet that’s not always the case. In the case of consciousness and other properties of human persons, it might well have been a case of discontinuous changes — sudden changes. How sudden is sudden? Well, in the case of the final grain of sand mentioned earlier, the resulting avalanche was very sudden. So does this example carry over to evolution and the rise of consciousness?

Panpsychism and Complexity

One way of testing the claim that emergence is a result of complexity is to think about the philosophical position of panpsychism. In panpsychism, consciousness (or experience) has little to do with complexity. That’s because panpsychists believe that consciousness exists all the way down the line. If everything is conscious to some degree, then complexity isn’t an issue at all. Indeed, the philosopher David Chalmers makes this explicit when he plays up simplicity and plays down complexity. For example, Chalmers writes that

“one wonders how relevant this whiff of complexity will ultimately be to the arguments about consciousness”.

Chalmers goes further when he says that

“[o]nce a model with five units, say, is to be regarded as a model of consciousness, surely a model with one unit will also yield some insight”.

Is one “unit” one neuron? One molecule? One atom?… One transistor?

Chalmers also makes what seems to be an obvious point (at least it seems obvious if one already accepts his information/experience link). He writes:

“Surely, somewhere on the continuum between systems with rich and complex conscious experience and systems with no experience at all, there are systems with simple conscious experience. A model with superposition of information seems to be more than we need — why, after all, should not the simplest cases involve information experienced discretely?”

Yes, panpsychists attempt to get rid of the problem of emergence, at least as it applies to consciousness or experience. In Chalmers’ picture, we don’t have emergence because each unit (or simple case) instantiates experience to some degree. Despite that, panpsychism still allows for radical differences between systems with simple conscious experience, and the conscious experience of human persons and animals. Yet if complexity isn’t important, then why do we have any differences whatsoever between simple conscious units and conscious complex systems? Is this another variation of the combination problem or what can be called the additive problem? After all, this is a case in which units of simple conscious experience are simply added together to create a complex conscious system.

Chalmers gives a biological (or “real life”) example of this phenomenon too when he writes the following:

“We might imagine a traumatized creature that is blind to every other distinction to which humans are normally sensitive, but which can still experience hot and cold. Despite the lack of superposition, this experience would still qualify as a phenomenology.”

What we have here is subtraction, rather than addition. This traumatized creature has had all its complex and/or varied conscious experiences destroyed to be left with only the ability to experience hot and cold. (We can work in the other direction and add units too.)

Despite all that, it does seem obvious that complexity matters. After all, many scientists and theorists have made a strong link between the complexity of the brain and consciousness. Chalmers himself acknowledges the (intuitive) appeal of complexity when he continues:

“After all, does it not seem that this rich superposition of information is an inessential element of consciousness?”

Chalmers then rejects this requirement for complexity.

Let’s return to the sandpile. We don’t need the individual grain of sand (or unit) to have (to use Trefil’s words) “one-millionth of an avalanche” somehow within it in order to make sense of the avalanche. All we need is complexity plus the addition of a single grain of sand. In that sense, the final grain of sand is just like all the others.

What about a single neuron?

To the panpsychist, one single neuron instantiates some degree of consciousness. (The “parts” of that single neuron do so too.) Thus, the addition of neurons to a brain never causes an avalanche, because consciousness was there all along.

Sunday, 29 March 2026

Do We Mean Different Things By ‘Consciousness’ and ‘Intelligence’?

 

I came across a book called Are We Unique? which expresses views which I’ve been developing for some time. The writer is the physicist James Trefil. It nearly all amounts to the distinction we make between words (or Trefil’s “labels”) and things. Trefil, qua scientist, was involved in lots of pointless disputes about what consciousness and intelligence are, which often boiled down to “mere semantics”. As an alternative, Trefil suggested focussing on what machines and animals actually do (as well as on their “sets of attributes”), not on any labels we may wish to use. This approach isn’t foolproof, but it is worth pursuing.

Does Everyone Know What Consciousness Is?

Everyone knows what consciousness is. Right? After all, it’s (according to Philip Goff) a “first-person datum which we are more sure of than anything else”. However, James Trefil spots what should be an obvious problem here when he writes the following:

“The problem is that we all think [‘consciousness’] means something different.”

What’s more:

“Since everyone feels that he or she ‘owns’ the word [‘consciousness’], enormous arguments ensue when people feel that their own ownership of the word is being threatened by someone else’s usage.”

Many readers will have noticed how convinced people are that they use the word ‘consciousness’ correctly, or that their own pet theory is correct. This applies to both laypersons and to philosophers. So, understandably, when someone else comes along and defines the same word in a different way, people do feel threatened. Of course, the amount of time people have spent thinking about consciousness will largely determine how strongly they feel about the correct usage.

This is when many people get mad with what they often call “mere semantics”. “It’s not about words: it’s about a thing.” Yet we name that thing with the same word even though other people mean different things when they use that word. Thus, treating such disputes as a matter of semantics seems to be a common-sense approach — at least according to the physicist James Trefil.

Indeed, even in the “field of consciousness studies” we have “the appearance of words that most people think they understand, but which have widely different meanings for different people”. So this isn’t only about laypersons. In fact, philosophers are more likely to disagree with one another than laypersons when it comes to consciousness and intelligence. Laypersons will rarely disagree with each other simply because they’re never put on the spot as to what consciousness and intelligence are. So, in a sense, agreement is guaranteed.

It was no wonder that Trefil had a problem with the endless debates about “What is consciousness?” and “What is intelligence?”. He expressed that frustration in a story:

“Would you believe it if I told you that this group of professors and academics spent two hours in heated discussion, and in the end could not agree on a common definition of the word brain, never mind consciousness or anything else.”

Yet Trefil noted that they did agree on other things, such as what machines and animals do, as well as on the sets of attributes which are taken to constitute consciousness or intelligence.

Trefil then gave a reason for why there is such dispute among experts. He tackled the word “intelligence”. He stated that this word is “supposed to cover everything from an octopus to a human to a chess-playing computer like Deep Blue”.

Animal and Machine Doings

Take a machine that does many of the things human persons can do. Trefil wrote:

“If you were confronted with such a machine, it would be hard to argue that it wasn’t intelligent, or even conscious, no matter how you defined those terms.”

It’s clear here that many people believe that there’s something over and above behaviour and doings when it comes to consciousness, and even when it comes to intelligence. This also means that certain definitions may well rule out machines… by definition. What would those exclusive definitions be? What attributes would they refer to?

What about a bacterium?

Is a bacterium intelligent? Trefil says, Don’t ask that question. Instead, he tells us that a bacterium “swim[s] away from a chemical toxin”. Yes, and? Now can’t we simply ask the following question? — Is that a behaviour that at least partially exhibits intelligence? Trefil himself writes:

“If what we observe is behaviour, the question of whether that behaviour implies intelligence is one of interpretation and, in the end, semantics.”

Surely when we observe a bacterium swimming away from a toxin, all we are doing is observing a bacterium swimming away from a toxin. That behaviour, or that kind of behaviour, will surely need another term to account for it. Naked behaviour is never enough for any kind of scientist. Yet Trefil may still be correct to say that if we use the word “intelligence” to capture this specific behaviour, then this is a matter of interpretation and semantics. Actually, I’m not even sure that the word “interpretation” is right here. Isn’t a stipulative definition (and decision) at work when we decide which kinds of behaviour can be classed as “intelligent”? Sure, the bacterium’s swimming away from a toxin either occurs or it doesn’t. However, and as stated, this kind of behaviour must be classed in some way.

Don’t Use the Word ‘Consciousness’

Trefil suggested not using the word “consciousness” at all. Not because he believed “consciousness doesn’t exist”, but for the reasons just given.

Firstly, Trefil stated that he “describe[s] particular systems as accurately as [he] can”. Secondly, he “let[s] the audience decide whether the word applies to that particular system”. To Trefil, what matters is “stat[ing] what the animal or machine can do”, and then,

“leave it to our audience to decide whether they want to apply the concept of intelligence or consciousness or self-awareness to something that possesses that particular set of attributes”.

Elsewhere in the same book, Trefil asks two questions:

“What would a machine have to do to be labelled ‘conscious’? For that matter, what would it take for us to call a chimpanzee ‘conscious’?”

The labelling comes after the fact. It comes after we discover what a machine or chimpanzee can do. In theory, everyone could agree on what a machine can do, yet hotly disagree on whether this constitutes being conscious.

The statement “what the animal or machine can do” sounds like a kind of behaviourism. It isn’t really. The behaviourists effectively factored out consciousness from the picture. Trefil, on the other hand, clearly didn’t want to do that. After all, he was gracious enough to allow his audience to apply the concept of intelligence or consciousness… after the fact. Again, what was important to Trefil is what machines and animals do, as well as their “particular set of attributes”.

In Philosophese

According to many philosophers, the notion of a merely verbal dispute (or mere semantics) arises partly because of the following.

Trefil stresses that all disputants — in broad terms — have access to the same behaviour and sets of attributes. However, they label such things in different ways. So let’s put this in philosophese.

Philosopher-scientist Smith has access to all the facts, laws, information, etc. about spatiotemporal slice A and says that it is x. Philosopher-scientist Jones has access to all the same facts about the same spatiotemporal slice A and says that it is a. Yet both Smith and Jones “agree on the facts”. This must mean that what Smith and Jones say about A is over and above the facts. In addition to the facts, Smith and Jones needed to bring theory, conceptual decisions, prior semantics, labels, etc. into the discussion.

The given facts may well be determinate. However, it doesn’t follow from this that everything we say about them is also determinate. Or, in another manner of speaking, the facts alone don’t entail what we say about them.

Some readers may now wonder how this clean and neat distinction between facts (or Trefil’s doings and sets of attributes) and what we say about them can be upheld. After all, aren’t the facts (or what we take to be the facts) themselves somewhat dependent on labels and what we say? (David Chalmers, for example, doesn’t only argue that what we say is indeterminate. He also argues that “the facts [themselves] are indeterminate”.)

Much of what’s just been said is fairly standard in science, and in the philosophy of science. The very same facts (or doings) about a bacterium, machine or animal may engender different theories or labels. Indeed, some philosophers have argued that the very same facts could bring about a (possibly) infinite number of theories or labels. (This situation is called the underdetermination of theory by the data and has been widely discussed in philosophy.)

And here again we can question the clean and neat separation of empirical data from the theories and labels which, Trefil seems to suppose, may come later.

Trefil on Dennett’s Consciousness Explained

Just to show that Trefil, qua physicist, wasn’t an eliminativist when it came to consciousness, take his words on Daniel Dennett’s book Consciousness Explained. He wrote:

“The first time I read his book, I became confused because about halfway through I began to think, ‘Hey — this guy doesn’t think that consciousness exists.’”

Perhaps Dennett didn’t actually believe that consciousness doesn’t exist. Instead, perhaps Dennett defined the word ‘consciousness’ in such a way that it made Trefil believe that he didn’t believe that consciousness exists. After all, this would be a valid application of Trefil’s own words and ideas on definitions and “labels”.

Oddly enough, Dennett’s notion of heterophenomenology is exactly the kind of approach which one would expect Trefil to be sympathetic to. In this approach, Dennett simply analysed what people said about their consciousness and subjective states, and ignored the possible ontology of consciousness, qualia and subjective states. (Dennett also factored in other kinds of behaviour, as well as bodily changes.) Yes, this is a scientific approach which is in tune with much that Trefil himself says in his book. Take the following words from Trefil:

“I will try to stick to descriptions of capabilities and leave labeling to you. It’s the only way I’ve found to keep things from getting bogged down in semantics.”

Now that isn’t exactly heterophenomenology, yet it’s still in the same ballpark. Trefil isn’t discussing the ontology of consciousness, only descriptions of capabilities.

Kinds of Intelligence

One solution Trefil offers to the semantic problem is to stress kinds of intelligence, rather than Platonic Intelligence. He even gives the kinds different names, such as “Intelligence I” and “Intelligence II”. Of course, the names Trefil chose were dependent on behaviour and observable attributes. His first example goes as follows:

“[H]umans are not very adept at paying attention to several things at once — think of the last time you were trying to eavesdrop on two separate conversations at a cocktail party.”

Readers may now expect Trefil to give a contrasting example from the animal world. Instead, he chooses an extraterrestrial. He concludes:

“An extra-terrestrial whose ancestors had found this particular trait useful might, in fact, conclude that humans were quite stupid because they couldn’t listen to four conversations and two bands at the same time.”

Trefil isn’t denying that humans can listen to music and write at the same time, drive a car and hold a conversation, etc. However, this extraterrestrial example seems to be on another level.

Trefil gives a more relevant and important reason for thinking in terms of kinds of intelligence. He wrote:

“[W]e suggested that while it is possible to build machines that are intelligent, or even conscious, we have to recognise that these words are being used differently when we apply them to a machine and to human beings. A chess-playing machine, for example, just doesn’t approach the game the way a human being does.”

There is a problem here. Even if there are kinds of intelligence, the word “intelligence” is still being used for all of them. Why? Couldn’t we use different words to describe different cases? If this alternative isn’t accepted, then surely that must be because all kinds of intelligence share something. Do they share intelligence? That’s not a good question. So they must share something else instead.