Sunday, 29 March 2026

Do We Mean Different Things By ‘Consciousness’ and ‘Intelligence’?


I came across a book called Are We Unique? which expresses views I’ve been developing for some time. The writer is the physicist James Trefil. Nearly all of it comes down to the distinction we make between words (or Trefil’s “labels”) and things. Trefil, qua scientist, was involved in many pointless disputes about what consciousness and intelligence are, which often boiled down to “mere semantics”. As an alternative, Trefil suggested focussing on what machines and animals actually do (as well as on their “sets of attributes”), not on any labels we may wish to use. This approach isn’t foolproof, but it is worth pursuing.

Does Everyone Know What Consciousness Is?

Everyone knows what consciousness is. Right? After all, according to Philip Goff, it’s a “first-person datum which we are more sure of than anything else”. However, James Trefil spots what should be an obvious problem here when he writes the following:

“The problem is that we all think [‘consciousness’] means something different.”

What’s more:

“Since everyone feels that he or she ‘owns’ the word [‘consciousness’], enormous arguments ensue when people feel that their own ownership of the word is being threatened by someone else’s usage.”

Many readers will have noticed how convinced people are that they use the word ‘consciousness’ correctly, or that their own pet theory is correct. This applies to laypersons and philosophers alike. So, understandably, when someone else comes along and defines the same word in a different way, people do feel threatened. Of course, the amount of time people have spent thinking about consciousness will largely determine how strongly they feel about its correct usage.

This is when many people get mad at what they often call “mere semantics”: “It’s not about words: it’s about a thing.” Yet we name that thing with the same word even though other people mean different things when they use it. Thus, attending to semantics seems to be a common-sense approach — at least according to the physicist James Trefil.

Indeed, even in the “field of consciousness studies” we have “the appearance of words that most people think they understand, but which have widely different meanings for different people”. So this isn’t only about laypersons. In fact, philosophers are more likely to disagree with one another than laypersons when it comes to consciousness and intelligence. Laypersons will rarely disagree with each other simply because they’re never put on the spot as to what consciousness and intelligence are. So, in a sense, agreement is guaranteed.

It was no wonder that Trefil had a problem with the endless debates about “What is consciousness?” and “What is intelligence?”. He expressed that frustration in a story:

“Would you believe it if I told you that this group of professors and academics spent two hours in heated discussion, and in the end could not agree on a common definition of the word brain, never mind consciousness or anything else.”

Yet Trefil noted that they did agree on other things, such as what machines and animals do, as well as on the sets of attributes which are taken to constitute consciousness or intelligence.

Trefil then gave a reason for why there is such dispute among experts. He tackled the word “intelligence”. He stated that this word is “supposed to cover everything from an octopus to a human to a chess-playing computer like Deep Blue”.

Animal and Machine Doings

Take a machine that does many of the things human persons can do. Trefil wrote:

“If you were confronted with such a machine, it would be hard to argue that it wasn’t intelligent, or even conscious, no matter how you defined those terms.”

It’s clear here that many people believe that there’s something over and above behaviour and doings when it comes to consciousness, and even when it comes to intelligence. This also means that certain definitions may well rule out machines… by definition. What would those exclusive definitions be? What attributes would they refer to?

What about a bacterium?

Is a bacterium intelligent? Trefil says: don’t ask that question. Instead, he tells us that a bacterium “swim[s] away from a chemical toxin”. Yes, and? Now can’t we simply ask the following question: is that a behaviour which at least partially exhibits intelligence? Trefil himself writes:

“If what we observe is behaviour, the question of whether that behaviour implies intelligence is one of interpretation and, in the end, semantics.”

Surely when we observe a bacterium swimming away from a toxin, all we are doing is observing a bacterium swimming away from a toxin. That behaviour, or that kind of behaviour, will surely need another term to account for it. Naked behaviour is never enough for any kind of scientist. Yet Trefil may still be correct to say that if we use the word “intelligence” to capture this specific behaviour, then this is a matter of interpretation and semantics. Actually, I’m not even sure that the word “interpretation” is right here. Isn’t a stipulative definition, and a decision, involved when it comes to deciding which kinds of behaviour can be classed as “intelligent”? Sure, the bacterium’s swimming away from a toxin either occurs or it doesn’t. However, as stated, this kind of behaviour must still be classed in some way.

Don’t Use the Word ‘Consciousness’

Trefil suggested not using the word “consciousness” at all. Not because he believed “consciousness doesn’t exist”, but for the reasons just given.

Firstly, Trefil stated that he “describes particular systems as accurately as I can”. Secondly, he “let the audience decide whether the word applies to that particular system”. To Trefil, what matters is “stat[ing] what the animal or machine can do”, and then,

“leave it to our audience to decide whether they want to apply the concept of intelligence or consciousness or self-awareness to something that possesses that particular set of attributes”.

Elsewhere in the same book, Trefil asks two questions:

“What would a machine have to do to be labelled ‘conscious’? For that matter, what would it take for us to call a chimpanzee ‘conscious’?”

The labelling comes after the fact. It comes after we discover what a machine or chimpanzee can do. In theory, everyone could agree on what a machine can do, yet hotly disagree on whether this constitutes being conscious.

The statement “what the animal or machine can do” sounds like a kind of behaviourism. It isn’t really. The behaviourists effectively factored consciousness out of the picture. Trefil, on the other hand, clearly didn’t want to do that. After all, he was gracious enough to allow his audience to apply the concept of intelligence or consciousness… after the fact. Again, what was important to Trefil was what machines and animals do, as well as their “particular set of attributes”.

In Philosophese

According to many philosophers, the notion of a merely verbal dispute (or mere semantics) arises partly because of the following.

Trefil stresses that all disputants — in broad terms — have access to the same behaviour and sets of attributes. However, they label such things in different ways. So let’s put this in philosophese.

Philosopher-scientist Smith has access to all the facts, laws, information, etc. about spatiotemporal slice A and says that it is x. Philosopher-scientist Jones has access to all the same facts about the same spatiotemporal slice A and says that it is y. Yet both Smith and Jones “agree on the facts”. This must mean that what Smith and Jones say about A is over and above the facts. In addition to the facts, Smith and Jones needed to bring theory, conceptual decisions, prior semantics, labels, etc. into the discussion.

The given facts may well be determinate. However, it doesn’t follow from this that everything we say about them is also determinate. Or, in another manner of speaking, the facts alone don’t entail what we say about them.

Some readers may now wonder how this clean and neat distinction between facts (or Trefil’s doings and sets of attributes) and what we say about them can be upheld. After all, aren’t the facts (or what we take to be the facts) themselves somewhat dependent on labels and what we say? (David Chalmers, for example, doesn’t only argue that what we say is indeterminate. He also argues that “the facts [themselves] are indeterminate”.)

Much of what’s just been said is fairly standard in science and in the philosophy of science. The very same facts (or doings) about a bacterium, machine or animal may engender different theories or labels. Indeed, some philosophers have argued that the very same facts could bring about a (possibly) infinite number of theories or labels. (This situation is called the underdetermination of theory by data, and it has been widely discussed in philosophy.)

And here again we can question the clean and neat separation of empirical data from the theories and labels which, Trefil seems to suppose, may come later.

Trefil on Dennett’s Consciousness Explained

Just to show that Trefil, qua physicist, wasn’t an eliminativist when it comes to consciousness, take Trefil’s words on Daniel Dennett’s book Consciousness Explained. He wrote:

“The first time I read his book, I became confused because about halfway through I began to think, ‘Hey — this guy doesn’t think that consciousness exists.’”

Perhaps Dennett didn’t actually believe that consciousness doesn’t exist. Instead, perhaps Dennett defined the word ‘consciousness’ in such a way that it led Trefil to believe that Dennett denied the existence of consciousness. After all, this would be a valid application of Trefil’s own ideas on definitions and “labels”.

Oddly enough, Dennett’s notion of heterophenomenology is exactly the kind of approach one would expect Trefil to be sympathetic to. In this approach, Dennett simply analysed what people said about their consciousness and subjective states, and ignored the possible ontology of consciousness, qualia and subjective states. (Dennett also factored in other kinds of behaviour, as well as bodily changes.) Yes, this is a scientific approach which is in tune with much that Trefil himself says in his book. Take the following words from Trefil:

“I will try to stick to descriptions of capabilities and leave labeling to you. It’s the only way I’ve found to keep things from getting bogged down in semantics.”

Now that isn’t exactly heterophenomenology, yet it’s still in the same ballpark. Trefil isn’t discussing the ontology of consciousness, only descriptions of capabilities.

Kinds of Intelligence

One solution Trefil offers to the semantic problem is to stress kinds of intelligence, rather than Platonic Intelligence. He even gives the kinds different names, such as “Intelligence I” and “Intelligence II”. Of course, the names Trefil chose were dependent on behaviour and observable attributes. His first example goes as follows:

“[H]umans are not very adept at paying attention to several things at once — think of the last time you were trying to eavesdrop on two separate conversations at a cocktail party.”

Readers may now expect Trefil to give a contrasting example from the animal world. Instead, he chooses an extraterrestrial. He concludes:

“An extra-terrestrial whose ancestors had found this particular trait useful might, in fact, conclude that humans were quite stupid because they couldn’t listen to four conversations and two bands at the same time.”

Trefil isn’t denying that humans can listen to music and write at the same time, drive a car and hold a conversation, etc. However, this extraterrestrial example seems to be on another level.

Trefil gives a more relevant and important reason for thinking in terms of kinds of intelligence. He wrote:

“[W]e suggested that while it is possible to build machines that are intelligent, or even conscious, we have to recognise that these words are being used differently when we apply them to a machine and to human beings. A chess-playing machine, for example, just doesn’t approach the game the way a human being does.”

There is a problem here. Even if there are kinds of intelligence, the word “intelligence” is still being used for all of them. Why? Couldn’t we use different words for the different cases? If this alternative isn’t accepted, then surely that must be because all the kinds of intelligence share something. Do they share intelligence? That’s not a good question. So they must share something else instead.
