This is a partly “confessional” essay about my own anthropomorphic tendencies when it comes to interacting with chatbots. I’ve tried to make sense of those tendencies by thinking about the history and psychology of anthropomorphism. I also cite examples of anthropomorphising, ranging from my own to a triangle perceived to be a bully, AI toys classed as “evil”, and a robot with and without a face.

Here are five confessional examples of my own anthropomorphic tendencies:
(1) Saying “Thanks” to a chatbot. (2) Thinking I’m tiring a chatbot out or boring it when I go into detail. (3) Embarrassment about certain details or questions. (4) Taking a chatbot’s criticisms personally. (5) Getting annoyed with a chatbot.
There are, of course, other examples I could have given.
Now, following my confessions, here are a few qualifications. When talking to a chatbot, I only feel as if I’m talking to a human person. That’s even though, every now and again, I need to (metaphorically) pinch my arm and tell myself that this isn’t a real person. However, I never actually believe that I’m talking to a human person. What’s the difference? For me, it seems easier to suspend disbelief because doing so makes the conversation (the word “conversation” itself may be an anthropomorphism) easier and more satisfying.
One psychologist, Norman N. Holland, even provided a neuroscientific theory of the suspension of disbelief. He argued that when a person watches a film, looks at a painting, etc. (or engages with a chatbot), the brain goes entirely into a “perceiving mode”. In other words, the brain as a whole cuts out the planning, acting and judging faculties it needs in day-to-day life.
The KTH AI Society and Other Sceptics
The KTH AI Society tackles the issue of anthropomorphism in the following passage:
“Cleverly written rules and human irrationality has convinced many people that the chatbots they are communicating with online are real people.”
Is this claim true?
Perhaps many people are in the same situation as the one I described earlier in relation to myself. Of course, some people will believe that they’re talking to a real person. Then again, some people believe they converse with aliens or get feedback from trees.
The KTH AI Society continues with a passage that can be taken to be plain wrong:
“Even with chatbots that have no ability to learn and are clearly not what we would consider ‘intelligent’, it is surprisingly easy to trick people into believing that their conversation partner is real.”
Note the scare quotes around the word “intelligent”. Whatever we believe about intelligence, the AI which does or does not display it needn’t also be taken to be a person.
In response to the sceptics and the KTH AI Society, one interesting phenomenon that’s worth commenting on is the view that chatbots, robots, etc. can develop human-like emotions even when they haven’t been programmed to do so. Thus, to use philosophy-speak, emotions emerge from the existing programming and hardware. [See note.]
Innate Anthropomorphism
These kinds of anthropomorphic tendencies would continue even if there were a categorical proof — as if! — that AIs weren’t conscious. It can be said that they’re hardwired into us all. It’s not surprising, then, that psychologists consider anthropomorphism to be innate. So what I’m guilty of will also apply to virtually all the adult persons who interact with chatbots. (I suspect that at least some adults will vociferously deny this.)
Just think about how wide and far anthropomorphism stretches. Both in and out of fiction, we have talking animals, talking trees, and sentient toys (see later). Human persons also “personify” nations and even races.
Regardless of innateness and ancient history, it can still be said that computer theorists and programmers are partly responsible for all this anthropomorphising. After all, it was these people who introduced such terms as “reason”, “think”, “hallucinate”, “catch a virus”, “decide”, “write” and “read”, “memory”, etc. into the language which we use to talk about AI.
Despite all that, it’s worth bringing in here what may seem like a technical point. There is some disagreement about these matters among experts. For example, what I’ve described as anthropomorphism can simply be seen in terms of “predictions” about the AI’s behaviour.
Good Anthropomorphism?
What’s important to realise is that anthropomorphism is sometimes acceptable and even wise. There’s a problem here, though, in that if someone knows full well that he or she is anthropomorphising, then is he or she anthropomorphising at all?
It has to be said that some examples — if they are examples at all — of anthropomorphism are deemed to be good things. For example, people with depression, social anxiety, or other mental illnesses can interact with emotional support animals. These animals are deemed to be a useful component of treatment.
In addition, believing certain anthropomorphic things about a computer, robot or chatbot may serve various psychological and even practical purposes. It can help in terms of understanding, as with metaphors and analogies in science. On the other hand, some views about computers, robots and chatbots are simply false, and dangerous too. Among the most controversial examples of the latter are the many cases of human persons who anthropomorphise chatbots because they are lonely.
Adults, Children and Chatbots
There’s a useful distinction to make here. It involves distinguishing basic anthropomorphism, which is exemplified by children (such as ascribing human characteristics to animals, robots, flowers, trees, etc.), from the more controversial practice of ascribing more abstract human characteristics (such as intelligence, consciousness and even emotion) to chatbots, robots and AI generally. Some would argue that the second set is basically the same as the first, only more abstract and general. Others would argue that these examples aren’t anthropomorphic at all.
As just stated, some readers will be keen to draw a distinction between young children being anthropomorphic and adults being so. Children are very keen on cartoons of animals with human characteristics. However, it can be argued that even in the case of children this isn’t genuine or deep anthropomorphism. In a loose sense, aren’t most children, just like adults, simply suspending disbelief?
Experts explain the way children anthropomorphise in terms of their early socialisation. In detail: when young children come across entities or animals which aren’t human, they have little alternative but to anthropomorphise. On the surface at least, this puts them at odds with adults.
Examples
The Triangle Bully
One of the most extreme forms of documented anthropomorphism occurred way back in 1944, in a study carried out by Fritz Heider and Marianne Simmel. The experiment was very simple. The researchers showed various subjects a two-and-a-half-minute cartoon of shapes moving around at various speeds and in different directions. The subjects were then simply asked to describe what they saw. They did so in terms of the shapes’ personalities and intentions. For example, the large triangle on the screen was described as being a “bully” because it was “chasing” the other two shapes. These latter shapes, on the other hand, were described as “tricking” the large triangle and attempting to “escape”.
The researchers interpreted the anthropomorphism in terms of the shapes’ movements having no obvious cause. In other words, the subjects took an intentional stance toward the shapes… Or did they? The intentional stance relates to the earlier discussion of suspending disbelief. It’s a predictive strategy described by Daniel Dennett. Taking up the intentional stance means that we interpret the behaviour of entities (chatbots, machines and humans) by treating them all as rational agents with beliefs and desires. Why do that? Doing so is said to simplify complex systems by assuming they’ll act to achieve their goals on the basis of those beliefs and desires. This also ties in with the earlier remarks about anthropomorphism being “predictive”.
Here’s another mundane and even laughable example. Take the case of robots carrying out childcare or driving a car. Now firstly imagine a robot with a face and name driving a car or looking after a child. Then imagine a robot without a name, voice and face doing exactly the same thing. Most people will respond very differently to the first robot, even though the two robots behave in exactly the same way.
The Evil AI Furby
Let me refer back to a previous essay in which I discussed an AI Furby. The presenter of a YouTube programme (called ‘ChatGPT in a kids robot does exactly what experts warned’) seemed to recognise the instinct for anthropomorphism in human beings. He said:
“I know that AI is only playing a character, but it may as well be real, you know, because people can still use it like that.
“Roleplaying is just putting an AI’s capability inside a character mask.”
So let me put the frequent anthropomorphism in this video in a little context. Take these words, which are also from the presenter:
“People [in the late 1990s] claimed their Furbies were giving them secret messages and listening to them.”
This is a reference to the (non-AI) Furby of 1998, some 14 years before the “AI revolution” of 2012. This highlights the anthropomorphic bent of the human species. In this case, the paranoia is very familiar. It was brought about by many people misunderstanding the toy’s technology, by high-tech anxiety, and even by the “demonic” reputation which the 1998 Furby developed.
For example, when the batteries of Furbies ran low, their speech deepened and slowed down. That was the “demonic voice” and “death rattle”. There was also a concern that if you were “mean” to a Furby, it would “learn” to be mean back.
Conclusion
Two extremes can be cited when it comes to chatbots. On the positive side, chatbots can be used productively. On the negative side, some individuals fall in love with chatbots (or with the roles they play under prompting). Arguably, even a positive use of a chatbot may include — or even require — a certain degree of anthropomorphism. This raises a point which was expressed above. If there’s a conscious (if not articulated) suspension of disbelief in these productive cases, is this still anthropomorphism?
Note:
Since many believe that emergence is real, this isn’t a surprise. (Whether this is strong or weak emergence is another matter.) After all, with a vast amount of data, numerous abstract and physical operations, hardware, software, etc., why can’t something strange or unknown happen to an AI entity which could be classed as emergent? That said, it needn’t be ontologically emergent, only epistemically so.