Sunday, 29 March 2026

Do We Mean Different Things By ‘Consciousness’ and ‘Intelligence’?

 

I came across a book called Are We Unique? which expresses views I’ve been developing for some time. The writer is the physicist James Trefil. Nearly all of it amounts to the distinction we make between words (or Trefil’s “labels”) and things. Trefil, qua scientist, was involved in lots of pointless disputes about what consciousness and intelligence are, which often boiled down to “mere semantics”. As an alternative, Trefil suggested focussing on what machines and animals actually do (as well as on their “sets of attributes”), not on any labels we may wish to use. This approach isn’t foolproof, but it is worth pursuing.

Does Everyone Know What Consciousness Is?

Everyone knows what consciousness is, right? After all, it’s (according to Philip Goff) a “first-person datum which we are more sure of than anything else”. However, James Trefil spots what should be an obvious problem here when he writes the following:

“The problem is that we all think [‘consciousness’] means something different.”

What’s more:

“Since everyone feels that he or she ‘owns’ the word [‘consciousness’], enormous arguments ensue when people feel that their own ownership of the word is being threatened by someone else’s usage.”

Many readers will have noticed how convinced people are that they use the word ‘consciousness’ correctly, or that their own pet theory is correct. This applies both to laypersons and to philosophers. So, understandably, when someone else comes along and defines the same word in a different way, people do feel threatened. Of course, the amount of time people have spent thinking about consciousness will largely determine how strongly they feel about its correct usage.

This is when many people get mad with what they often call “mere semantics”. “It’s not about words: it’s about a thing.” Yet we name that thing with the same word even though different people mean different things when they use it. Thus, attending to the “mere semantics” seems to be a common-sense approach — at least according to the physicist James Trefil.

Indeed, even in the “field of consciousness studies” we have “the appearance of words that most people think they understand, but which have widely different meanings for different people”. So this isn’t only about laypersons. In fact, philosophers are more likely to disagree with one another than laypersons when it comes to consciousness and intelligence. Laypersons will rarely disagree with each other simply because they’re never put on the spot as to what consciousness and intelligence are. So, in a sense, agreement is guaranteed.

It was no wonder that Trefil had a problem with the endless debates about “What is consciousness?” and “What is intelligence?”. He expressed that frustration in a story:

“Would you believe it if I told you that this group of professors and academics spent two hours in heated discussion, and in the end could not agree on a common definition of the word brain, never mind consciousness or anything else.”

Yet Trefil noted that they did agree on other things, such as what machines and animals do, as well as on the sets of attributes which are taken to constitute consciousness or intelligence.

Trefil then gave a reason for why there is such dispute among experts. He tackled the word “intelligence”. He stated that this word is “supposed to cover everything from an octopus to a human to a chess-playing computer like Deep Blue”.

Animal and Machine Doings

Take a machine that does many of the things human persons can do. Trefil wrote:

“If you were confronted with such a machine, it would be hard to argue that it wasn’t intelligent, or even conscious, no matter how you defined those terms.”

It’s clear here that many people believe that there’s something over and above behaviour and doings when it comes to consciousness, and even when it comes to intelligence. This also means that certain definitions may well rule out machines… by definition. What would those exclusive definitions be? What attributes would they refer to?

What about a bacterium?

Is a bacterium intelligent? Trefil says: don’t ask that question. Instead, he tells us that a bacterium “swim[s] away from a chemical toxin”. Yes, and? Now can’t we simply ask the following question: is that a behaviour which at least partially exhibits intelligence? Trefil himself writes:

“If what we observe is behaviour, the question of whether that behaviour implies intelligence is one of interpretation and, in the end, semantics.”

Surely when we observe a bacterium swimming away from a toxin, all we are doing is observing a bacterium swimming away from a toxin. That behaviour, or that kind of behaviour, will surely need another term to account for it. Naked behaviour is never enough for any kind of scientist. Yet Trefil may still be correct to say that if we use the word “intelligence” to capture this specific behaviour, then this is a matter of interpretation and semantics. Actually, I’m not even sure if the word “interpretation” is right here. Isn’t a stipulative definition (and decision) made when deciding which kinds of behaviour can be classed as “intelligent”? Sure, the bacterium swimming away from a toxin either occurs or it doesn’t. However, and as stated, this kind of behaviour must be classed in some way.
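To make the point concrete, here is a minimal sketch in Python — my own toy illustration, not anything Trefil offers. The observed behaviour is held fixed as data, while whether it counts as “intelligent” depends entirely on which stipulative definition we happen to adopt:

```python
# A toy illustration (not Trefil's): the same observed behaviour is
# classed differently depending on the stipulative definition adopted.

# The "naked behaviour", recorded as plain data.
observation = {
    "organism": "bacterium",
    "behaviour": "swims away from a chemical toxin",
    "goal_directed": True,   # it moves away from harm
    "learned": False,        # the response is hardwired, not acquired
}

def intelligent_by_stipulation_1(obs):
    """Stipulation 1: any goal-directed behaviour counts as intelligent."""
    return obs["goal_directed"]

def intelligent_by_stipulation_2(obs):
    """Stipulation 2: only learned (non-hardwired) behaviour counts."""
    return obs["learned"]

# Same facts, different verdicts: the disagreement is semantic, not factual.
print(intelligent_by_stipulation_1(observation))  # True
print(intelligent_by_stipulation_2(observation))  # False
```

The two verdicts disagree even though neither function disputes a single fact about the bacterium. That, in miniature, is what a “merely semantic” dispute looks like.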

Don’t Use the Word ‘Consciousness’

Trefil suggested not using the word “consciousness” at all. Not because he believed “consciousness doesn’t exist”, but for the reasons just given.

Firstly, Trefil stated that he “describes particular systems as accurately as I can”. Secondly, he “let the audience decide whether the word applies to that particular system”. To Trefil, what matters is “stat[ing] what the animal or machine can do”, and then,

“leave it to our audience to decide whether they want to apply the concept of intelligence or consciousness or self-awareness to something that possesses that particular set of attributes”.

Elsewhere in the same book, Trefil asks two questions:

“What would a machine have to do to be labelled ‘conscious’? For that matter, what would it take for us to call a chimpanzee ‘conscious’?”

The labelling comes after the fact. It comes after we discover what a machine or chimpanzee can do. In theory, everyone could agree on what a machine can do, yet hotly disagree on whether this constitutes being conscious.

The statement “what the animal or machine can do” sounds like a kind of behaviourism. It isn’t really. The behaviourists effectively factored consciousness out of the picture. Trefil, on the other hand, clearly didn’t want to do that. After all, he was gracious enough to allow his audience to apply the concept of intelligence or consciousness… after the fact. Again, what was important to Trefil was what machines and animals do, as well as their “particular set of attributes”.

In Philosophese

According to many philosophers, merely verbal disputes (or matters of mere semantics) arise partly because of the following.

Trefil stresses that all disputants — in broad terms — have access to the same behaviour and sets of attributes. However, they label such things in different ways. So let’s put this in philosophese.

Philosopher-scientist Smith has access to all the facts, laws, information, etc. about spatiotemporal slice A and says that it is X. Philosopher-scientist Jones has access to all the same facts about the same spatiotemporal slice A and says that it is Y. Yet both Smith and Jones “agree on the facts”. This must mean that what Smith and Jones say about A is over and above the facts. In addition to facts, Smith and Jones needed to bring theory, conceptual decisions, prior semantics, labels, etc. into the discussion.

The given facts may well be determinate. However, it doesn’t follow from this that everything we say about them is also determinate. Or, in another manner of speaking, the facts alone don’t entail what we say about them.

Some readers may now wonder how this clean and neat distinction between facts (or Trefil’s doings and sets of attributes) and what we say about them can be upheld. After all, aren’t the facts (or what we take to be the facts) themselves somewhat dependent on labels and what we say? (David Chalmers, for example, doesn’t only argue that what we say is indeterminate. He also argues that “the facts [themselves] are indeterminate”.)

Much of what’s just been said is fairly standard in science, and in the philosophy of science. The very same facts (or doings) about a bacterium, machine or animal may engender different theories or labels. Indeed, some philosophers have argued that the very same facts could bring about a (possibly) infinite number of theories or labels. (This situation is called the underdetermination of theory by data, and it has been widely discussed in philosophy.)

And here again we can question the clean and neat separation of empirical data from the theories and labels which, Trefil seems to suppose, may come later.

Trefil on Dennett’s Consciousness Explained

Just to show that Trefil, qua physicist, wasn’t an eliminativist when it comes to consciousness, take Trefil’s words on Daniel Dennett’s book Consciousness Explained. He wrote:

“The first time I read his book, I became confused because about halfway through I began to think, ‘Hey — this guy doesn’t think that consciousness exists.’”

Perhaps Dennett didn’t actually believe that consciousness doesn’t exist. Instead, perhaps Dennett defined the word ‘consciousness’ in such a way that it led Trefil to conclude that Dennett didn’t believe consciousness exists. After all, this would be a valid application of Trefil’s own words and ideas on definitions and “labels”.

Oddly enough, Dennett’s notion of heterophenomenology is exactly the kind of approach which one would expect Trefil to be sympathetic to. In this approach, Dennett simply analysed what people said about their consciousness and subjective states, and ignored the possible ontology of consciousness, qualia and subjective states. (Dennett also factored in other kinds of behaviour, as well as bodily changes.) Yes, this is a scientific approach which is in tune with much of what Trefil himself says in his book. Take the following words from Trefil:

“I will try to stick to descriptions of capabilities and leave labeling to you. It’s the only way I’ve found to keep things from getting bogged down in semantics.”

Now that isn’t exactly heterophenomenology, yet it’s still in the same ballpark. Trefil isn’t discussing the ontology of consciousness, only descriptions of capabilities.

Kinds of Intelligence

One solution Trefil offers to the semantic problem is to stress kinds of intelligence, rather than Platonic Intelligence. He even gives the kinds different names, such as “Intelligence I” and “Intelligence II”. Of course, the names Trefil chose were dependent on behaviour and observable attributes. His first example goes as follows:

“[H]umans are not very adept at paying attention to several things at once — think of the last time you were trying to eavesdrop on two separate conversations at a cocktail party.”

Readers may now expect Trefil to give a contrasting example from the animal world. Instead, he chooses an extraterrestrial. He concludes:

“An extra-terrestrial whose ancestors had found this particular trait useful might, in fact, conclude that humans were quite stupid because they couldn’t listen to four conversations and two bands at the same time.”

Trefil isn’t denying that humans can listen to music and write at the same time, drive a car and hold a conversation, etc. However, this extraterrestrial example seems to be on another level.

Trefil gives a more relevant and important reason for thinking in terms of kinds of intelligence. He wrote:

“[W]e suggested that while it is possible to build machines that are intelligent, or even conscious, we have to recognise that these words are being used differently when we apply them to a machine and to human beings. A chess-playing machine, for example, just doesn’t approach the game the way a human being does.”

There is a problem here. Even if there are kinds of intelligence, the word “intelligence” is still being used for all of them. Why? Couldn’t we use different words to describe different cases? If this alternative isn’t accepted, then surely that must be because all kinds of intelligence share something. Do they share intelligence? That’s not a good question. So they must share something else instead.

Sunday, 22 March 2026

Liam Kofi Bright on Analytic Philosophers Fiddling, While the World Burns

 


This short piece is a response to Liam Kofi Bright’s essay ‘The End of Analytic Philosophy’. (It’s partially a response to Walter Veit’s same-titled ‘The End of Analytic Philosophy’ too.) Bright argues that young philosophers want to “change the world, not just understand it”. This is, of course, a rewriting of Marx’s well-known words: “Philosophers have only interpreted the world, the point however is to change it.” Marx didn’t mention understanding at all. He didn’t even conclude, “not just interpret the world”. Bright, on the other hand, does say, “not just understand it”. Yet I’d argue that this is a difference that effectively doesn’t make a difference when the goal of radical political change is always and forever in the driving seat.

“[W]hen students ask us why they should major in something as apparently fanciful as philosophy while the world burns, we want to have something to say besides ‘stick with us, we’re pretty sure that any day now we will have a viable theory of reference magnetism’.”

Liam Kofi Bright


Liam Bright’s Own Successor Paradigm

Liam Bright talks about there being no “successor paradigm” to analytic philosophy. Some readers of his work may assume that Bright actually does have a successor paradigm in mind — his own! Yet that won’t really be a successor paradigm at all: it’ll be a complete alternative. That explains why Bright believes that no successor paradigm could replace analytic philosophy.

All this chimes in with Christoph Schuringa’s ideas — and in many ways. (Schuringa’s relevant ideas can be found in his ‘The never-ending death of analytic philosophy’.)

Schuringa has written a lot on Marx. So perhaps Marx’s own well-known words against philosophy may well sum up Schuringa’s own position on analytic philosophy. Indeed, there’s a reference to “Marx on the ‘supersession’ of philosophy” on Schuringa’s academic webpage. Now are Schuringa’s words “supersession of philosophy” basically the same as Bright’s “successor paradigm”?

Both Bright and Schuringa believe that analytic philosophy is politically flawed and politically compromised. One example, among many, is the following extract from Schuringa:

“Philosophy as a discipline has a huge whiteness problem, and it is right that the hegemony of Western philosophy in the academy must be addressed if the curriculum is to be effectively decolonized.”

The above isn’t quite as shouty as Professor van Norden’s words (as can be found in his ‘Western philosophy is racist’):

“Mainstream philosophy in the so-called West is narrow-minded, unimaginative, and even xenophobic. [ ] Academic philosophy in ‘the West’ ignores and disdains the thought traditions of China, India and Africa. This must change.”

This means that Professor van Norden trumps Professor Schuringa in that he widens the target to “mainstream philosophy” and “Western philosophy” as a whole.

Perhaps all this partially explains why, according to Walter Veit, naturalised philosophy wasn’t even considered by Liam Bright. That’s because this “third kind of philosophy” will inevitably be deemed to be politically flawed and politically compromised too.

Others on the Death of Analytic Philosophy

Many of those who write titles like ‘The never-ending death of analytic philosophy’, ‘The End of Philosophy’, etc. come at this issue from an almost exclusively political angle.

Bright and Schuringa themselves aren’t former analytic philosophers who rebelled against it, but were always on the outside looking in. The following is from one of Schuringa’s early academic biographies:

“I am Assistant Professor in Philosophy at the New College of the Humanities (part of Northeastern University), and Editor of the Hegel Bulletin.[ ] My chief interests are in the history of philosophy (especially Kant, Hegel, and Marx), in the traditions of Marxism and critical theory, and in social and political thought more widely. Specific current research projects concern Marx’s critique of Hegel, the concept of Gattungswesen, and ‘tragic’ conceptions of philosophy. My large-scale current project is a monograph I am writing on Marx. [ ] I was recently in the Philosopher’s Zone [ ] talking about the history, and prospects, of analytic philosophy [ ].”

There’s not much about analytic philosophy in the biography above — save for the final bit about “talking about the history, and prospects, of analytic philosophy”. In addition, none of Schuringa’s publications, dating back to 2011, can be classed as analytic philosophy.

Similarly, when one looks at Liam Bright’s publications, there’s no analytic philosophy at all. Take these titles, dating back to 2014:

‘On The Stability of Racial Capitalism’ (2025), ‘Du Boisian Leadership through Standpoint Epistemology’ (2024), ‘To Be Scientific Is To Be Communist’ (2023), ‘White Psychodrama’ (2023), ‘Causally Interpreting Intersectionality Theory’ (2016), and Bright’s first paper, ‘What is the State of Blacks in Philosophy?’ (2014).

Now there’s nothing wrong with any of these subjects being studied and written about. It’s just that they aren’t analytic philosophy. They are classic examples of academic political activism.

Neither Schuringa nor Bright are like Richard Rorty in these respects. They didn’t take umbrage at analytic philosophy after writing many — or even just some — papers within that tradition. They didn’t grow bored with analytic philosophy either, as some argue Rorty did. Instead, Schuringa and Bright probably had problems with analytic philosophy from the very beginnings of their publishing careers, if not even before that.

True of All Academics

Is there a (to use Bright’s words) “well-validated and rational-consensus-generating theory of grand topics of interest” in other types of philosophy? In all other disciplines? What’s more, is it necessary that there should be “grand topics”? Much of what Bright says about analytic philosophy can be said about other types of philosophy and other disciplines too. But he doesn’t say it about them.

Bright states the following:

“Many philosophers strike me as like Polish apparatchiks in 1983 — they turn up to work and do what they did yesterday just because they don’t know what else to do, not because they seriously believe in the system they are maintaining.”

Again, all the above can be said about all types of professional philosophers. It can be said about sociologists, physicists, political scientists, etc. too. What’s more, it can even be said about academic philosophers with strong political biases, especially if their own brands of politicised philosophy chime in well with the departments they teach at.

Bright also tells us that

“it’s not been fully appreciated how much of a blow it is to the confidence of the field’s youth that scientific ambitions are increasingly abandoned as untenable”.

Readers may wonder if Bright has done any surveys or empirical research on what “the field’s youth” believe when it comes to analytic philosophy (or its “triple failure of confidence”). However, Bright does state the following:

“My anecdotal impression is that junior philosophers are hyper aware of these bleak prospects for anything like creation of a shared scientific paradigm.”

Without examples and detail, it’s hard to know which “problems” Bright is referring to. So it can’t be known if they have been “solve[d]” or not. What’s more, who, exactly, decides whether a problem is “worth solving in the first place”?

“Now is a time of woe for analytic philosophy.”

Bright’s essay, at least the extracts selected by Walter Veit, is hyperbolic and highly general.

For a start, why should analytic philosophy be seen as (or even be) “a [single!] research programme”? And why should there be a single “shared project”? Moreover, the examples Bright gives of a shared project (“analysing key concepts or a mutual commitment to the linguistic turn”) weren’t really “projects” at all — they were ways or methods of doing philosophy.

Oddly, Bright himself acknowledges that “the lack of such shared projects in themselves didn’t really cause a problem for the field”. On this, Bright mentions Rorty’s position that “analytic philosophy is held together mainly by a certain kind of style and sociological bonds among its practitioners”.

Analytic Philosophy and Science

Veit quotes Bright saying that “[a]nalytic philosophy has long had ambitions to something like scientific status”. That statement should really be rewritten as:

Some — perhaps even many — analytic philosophers from the 1920s to the 1950s had ambitions to make their work more scientific in nature. And, yes, sometimes some philosophers who had this ambition did reach the point of “cringingly insecure self parody”.

Interestingly, this was also the case with philosophers well outside the analytic philosophy tradition, such as Karl Marx, Edmund Husserl, Jacques Lacan, Ferdinand de Saussure, Louis Althusser, Michel Foucault, Bruno Latour, etc. This is something Veit too recognises when he says that

“in many ways naturalist philosophy can often be closer to the work of continental philosophers, who had naturalist leanings, such as Friedrich Nietzsche, Edmund Husserl, Maurice Merleau-Ponty, or Georges Canguilhem”.

Colin McGinn on Philosophy and Science

In a sense, Bright’s point about the “science envy” (a cliché he doesn’t actually use) which analytic philosophers supposedly suffer from is answered in an article he links to in his own piece. In that New York Times article (‘Name Calling: Philosophy as Ontical Science’), Colin McGinn writes:

“[P]hilosophy so conceived is best classified as a science, because of its rigor, technicality, universality, falsifiability, connection with other sciences, and concern with the nature of objective being (among other reasons).”

McGinn then adds:

“I did not claim, however, that it is an empirical science, like physics and chemistry [ ].”

The New York Times piece which Bright links to actually helps show that he treats analytic philosophers almost as straw targets. No philosopher has ever argued that analytic philosophy is an empirical science. In fact, Wittgenstein and many others went out of their way to say that philosophy isn’t a science. This is odd, then, because Bright has a penchant for the logical positivists, who did see their philosophies as being “scientific”. Yet this too all boils down to politics. Bright has a penchant for the logical positivists because some of them were socialists. (Otto Neurath and Rudolf Carnap were socialists, as well as some lesser names.) So some of them, as a consequence, attempted to link philosophy (or their philosophies) to politics and societal change, just like Bright himself. (The logical-positivist movement actually included “conservative right”, “radical left” and “liberal” wings, who all lived and worked within “Red Vienna”.)

To return to McGinn.

McGinn does state that “philosophy so conceived is best classified as a science”. Personally, I’m not sure about that. However, what matters to me is that good philosophy does employ “rigor, technicality, universality, falsifiability, connection with other sciences, and concern with the nature of objective being”… To qualify even more: I’m not even sure about all those characteristics either. McGinn has smuggled philosophical positions in with philosophical methods. (For example, “universality” and “the concern with the nature of objective being” are positions within philosophy, not ways of doing philosophy.) However, rigor, varying degrees of technicality, and connections with the sciences are indeed important.

Whether all this makes such philosophy a science is another matter.

Science and Philosophical Naturalism

Walter Veit says that “[p]hilosophy in this vision is part of science”. He explains:

“Naturalist philosophers are excited about the progress enabled on old philosophical problems with the aid of the sciences, be that the mind, the nature of life, or the structure of reality.”

Veit even offers his readers some concrete examples of what philosophy could be like. He continues:

“[I] was able to work together with scientists, such as Nicola Clayton’s corvid lab, to bring us closer to answering what it is like to be a crow, my work with biologists at Oxford in measuring biological complexity, or my ongoing work on several projects together with animal welfare scientists.”

Is all that (as Bright puts it) “cringingly insecure self parody” too?

Veit also states the following:

“Some of my readers, no doubt, will already be skeptical of analytic philosophy, not because they necessarily share my naturalist view of how the field should operate, but because they have a fondness for philosophers such as Nietzsche and the like that fill popular book sections.”

I believe most of the above also goes for Bright, although not in exactly the same way. Bright has a fondness for philosophers who’re explicitly political (just like himself), regardless of how the “field of analytic philosophy should operate”.


Note:

Liam Bright has something to say about “puzzles” too:

“[W]e will, keep generating puzzles for any particular answer given, we will never persuade our colleagues who disagree, we will never finally settle what to say about the simple cases in order to be able to move on to the grand problems of philosophy.”

[Note the royal ‘we’!]

I too sometimes find some philosophical puzzles annoying. The problem here is that what many of the critics class as the “puzzles of analytic philosophy” aren’t actually puzzles in any strict sense of the word. Yet there are indeed puzzles in analytic philosophy: it’s just that critics deem many of the central subjects to be “mere puzzles” too.

Saturday, 21 March 2026

How and Why I Anthropomorphise Chatbots

 


This is a partly “confessional” essay about my own anthropomorphic tendencies when it comes to interacting with chatbots. I’ve tried to make sense of them by thinking about the history and psychology of anthropomorphism. I also cite examples of anthropomorphising, ranging from my own to a triangle perceived to be a bully, AI toys classed as “evil”, and a robot with and without a face.

Image created by ChatGPT, under the writer’s prompts.

Here are five confessional examples of my own anthropomorphic tendencies:

(1) Saying “Thanks” to a chatbot. (2) Thinking I’m tiring a chatbot out or boring it when I go into detail. (3) Embarrassment about certain details or questions. (4) Taking a chatbot’s criticisms personally. (5) Getting annoyed with a chatbot.

There are, of course, other examples I could have given.

Now, following my confessions, here are a few qualifications. When talking to a chatbot, I only feel as if I’m talking to a human person. That’s even though, every now and again, I need to (metaphorically) pinch my arm, and then tell myself that this isn’t a real person. However, I never actually believe that I’m talking to a human person. What’s the difference? For me, suspending disbelief simply makes the conversation (the word “conversation” itself may be an anthropomorphism) easier and more satisfying.

One psychologist, Norman N. Holland, even provided a neuroscientific theory of the suspension of disbelief. He argued that when a person watches a film, looks at a painting, etc. (or engages with a chatbot), the brain goes entirely into a “perceiving mode”. In other words, the brain as a whole shuts down the planning, acting and judging faculties needed in day-to-day life.

The KTH AI Society and Other Sceptics

The KTH AI Society (see here) tackles the issue of anthropomorphism in the following passage:

“Cleverly written rules and human irrationality has convinced many people that the chatbots they are communicating with online are real people.”

Is this claim true?

Perhaps many people are in the same situation which I described about myself earlier. Of course, some people will believe that they’re talking to a real person. Then again, some people believe they converse with aliens or get feedback from trees.

The KTH AI Society continued with a passage that can be taken to be plain wrong:

“Even with chatbots that have no ability to learn and are clearly not what we would consider ‘intelligent’, it is surprisingly easy to trick people into believing that their conversation partner is real.”

Note the scare quotes around the word “intelligent”. Whatever we believe about intelligence, the AI which does or does not display it needn’t also be taken to be a person.

Now, in response to the sceptics and the KTH AI Society: one interesting phenomenon that’s worth commenting on is the view that chatbots, robots, etc. can develop human-like emotions even when they haven’t been programmed to do so. Thus, to use philosophy-speak, emotions emerge from the existing programming and hardware. [See note.]

Innate Anthropomorphism

These kinds of anthropomorphic tendencies would continue even if there were a categorical proof — as if! — that AIs weren’t conscious. It can be said that they’re hardwired into us all. It’s not surprising, then, that psychologists consider anthropomorphism to be innate. So what I’m guilty of will also apply to virtually all the adult persons who interact with chatbots. (I suspect that at least some adults will vociferously deny this.)

Just think about how wide and far anthropomorphism stretches. Both in and out of fiction, we have talking animals, talking trees, and sentient toys (see later). Human persons also “personify” nations and even races.

Regardless of innateness and ancient history, it can still be said that computer theorists and programmers are partly responsible for all this anthropomorphising. After all, it was these people who introduced such terms as “reason”, “think”, “hallucinate”, “catch a virus”, “decide”, “write” and “read”, “memory”, etc. into the language which we use to talk about AI.

Despite all that, it’s worth bringing in here what may seem like a technical point. There is some disagreement about these matters among experts. For example, what I’ve described as anthropomorphism can simply be seen in terms of “predictions” about the AI’s behaviour.

Good Anthropomorphism?

What’s important to realise is that anthropomorphism is sometimes acceptable and even wise. There’s a problem here, though, in that if someone knows full well that he or she is anthropomorphising, then is he or she anthropomorphising at all?

It has to be said that some examples — if they are examples at all — of anthropomorphism are deemed to be good things. For example, people with depression, social anxiety, or other mental illnesses can interact with emotional support animals. These animals are deemed to be a useful component of treatment.

In addition, believing certain anthropomorphic things about a computer, robot or chatbot may serve various psychological and even practical purposes. It can help in terms of understanding, as with metaphors and analogies in science. On the other hand, some views about computers, robots and chatbots are simply false, and dangerous too. One of the most controversial examples of the latter involves the many cases of people who anthropomorphise chatbots because they are lonely.

Adults, Children and Chatbots

There’s a useful distinction to make here. It involves distinguishing basic anthropomorphism, which is exemplified by children (such as ascribing human characteristics to animals, robots, flowers, trees, etc.), from the more controversial examples of ascribing more abstract human characteristics (such as intelligence, consciousness and even emotion) to chatbots, robots and AI generally. Some would argue that the second set is basically the same as the first, only more abstract and general. Others would argue that these examples aren’t anthropomorphic at all.

As just stated, some readers will be keen to draw a distinction between young children being anthropomorphic and adults being so. Children are very keen on cartoons of animals with human characteristics. However, it can be argued that even in the case of children this isn’t genuine or deep anthropomorphism. In a loose sense, aren’t most children, just like adults, simply suspending disbelief?

Experts explain the way children anthropomorphise in terms of their early socialisation. In detail: when young children come across entities or animals which aren’t human, they have little alternative but to anthropomorphise. On the surface at least, this puts them at odds with adults.

Examples

The Triangle Bully

One of the most extreme forms of documented anthropomorphism occurred way back in 1944. This was a study carried out by Fritz Heider and Marianne Simmel. The experiment was very simple. The researchers showed various subjects a two-and-a-half-minute cartoon of shapes moving around at various speeds and in different directions. The subjects were then simply asked to describe what they saw. They did so in terms of the shapes’ personalities and intentions. For example, the large triangle on the screen was described as being a “bully” because it was “chasing” the other two shapes. These latter shapes, on the other hand, were described as “tricking” the large triangle and attempting to “escape”.

The researchers interpreted the anthropomorphism in terms of the shapes’ movements having no obvious cause. In other words, the subjects took an intentional stance toward the shapes… Or did they? The intentional stance relates to the earlier discussion of suspending disbelief. It’s a predictive strategy described by Daniel Dennett. Taking up the intentional stance means that we interpret the behaviour of entities (chatbots, machines and humans) by treating them all as rational agents with beliefs and desires. Why do that? Doing so is said to simplify the prediction of complex systems’ behaviour: we assume they’ll act so as to achieve their goals in the light of their beliefs. This also ties in with the earlier remarks about anthropomorphism being “predictive”.
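Here’s a minimal sketch in Python of the intentional stance treated purely as a predictive strategy. Everything in it — the class, the attributed “beliefs” and “desires”, and the prediction rule — is my own hypothetical toy construction, not Dennett’s own formalism (he offers none) and not a claim about how any real system works:

```python
from dataclasses import dataclass, field

@dataclass
class AttributedAgent:
    """An entity as seen from the intentional stance: we ascribe desires
    and beliefs to it purely in order to predict its behaviour."""
    desires: list                                 # ascribed goals, in order of priority
    beliefs: dict = field(default_factory=dict)   # ascribed means to those goals

    def predict_action(self):
        """Assume rationality: the agent will do whatever its (attributed)
        beliefs say will achieve its highest-priority (attributed) desire."""
        for desire in self.desires:
            if desire in self.beliefs:
                return self.beliefs[desire]
        return "do nothing"

# The large triangle in the Heider-Simmel film, as the subjects saw it:
triangle = AttributedAgent(
    desires=["catch the small shapes"],
    beliefs={"catch the small shapes": "chase them around the screen"},
)

print(triangle.predict_action())  # -> "chase them around the screen"
```

The point of the stance is only that the prediction works, not that the triangle (or a chatbot) really has beliefs and desires.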

Here’s another mundane and even laughable example. Take the case of robots carrying out childcare or driving a car. First, imagine a robot with a face and a name driving a car or looking after a child. Now imagine a robot without a name, voice and face doing exactly the same thing. I suspect that most people will find it far easier to anthropomorphise the former, even though the two robots are doing exactly the same things.

The Evil AI Furby

Let me refer back to a previous essay in which I discussed an AI Furby. The presenter of a YouTube programme (called ‘ChatGPT in a kids robot does exactly what experts warned’) seemed to recognise the instinct for anthropomorphism in human beings. He said:

“I know that AI is only playing a character, but it may as well be real, you know, because people can still use it like that.
“Roleplaying is just putting an AI’s capability inside a character mask.”

So let me put the frequent anthropomorphism of this video in a little context. Take these words, which are also from the presenter:

“People [in the late 1990s] claimed their Furbies were giving them secret messages and listening to them.”

This is a reference to the (non-AI) Furby of 1998, some 14 years before the “AI revolution” of 2012. This highlights the anthropomorphic bent of the human species. In this case, the paranoia is very familiar. It was brought about by many people misunderstanding the toy’s technology, high-tech anxiety, and even the “demonic” reputation the 1998 Furby developed.

To give some examples: when the batteries of Furbies ran low, their speech deepened and slowed down. That was the “demonic voice” and the “death rattle”. There was also a concern that if you were “mean” to a Furby, it would “learn” to be mean back.

Conclusion

Two extremes can be cited when it comes to chatbots. On the positive side, chatbots can be used productively. On the negative side, some individuals fall in love with chatbots (or with the roles they play under prompting). Arguably, even a positive use of a chatbot may include — or even require — a certain degree of anthropomorphism. This raises a point which was expressed above. If there’s a conscious (if not articulated) suspension of disbelief in these productive cases, is this still anthropomorphism?


Note:

Since many believe that emergence is real, this isn’t a surprise. (Whether this is strong or weak emergence is another matter.) After all, with a vast amount of data, numerous abstract and physical operations, hardware, software, etc., why can’t something strange or unknown happen to an AI entity which could be classed as emergent? That said, it needn’t be ontologically emergent, only epistemically so.