Friday, 27 February 2026

AI Machines, Emotion and the Singularity


In the following essay it will be assumed that there will be some kind of (technological) singularity in the future. Of course, this is much debated. However, that debate won’t matter too much within the following context of the relevance of emotions when it comes to AI machines and a possible singularity.

Image from Wiki Commons.

Ultraintelligent and Docile Machines?

The Singularity is “the proposed point in time at which machines become more intelligent than humans”.

According to many people, the true significance of the Singularity was captured by the British mathematician Irving John Good way back in 1965. He wrote:

“The first ultraintelligent machine is the last invention that man need ever make [ ].”

Is this a positive or negative proclamation on Good’s part?

It depends on your values and beliefs.

Does it follow that if we were to create ultraintelligent machines, man would never need to invent anything again? Not really. Of course, Jack Good might well have meant that men need not invent anything after this event. However, even this isn’t clear, because it depends on what Good believed the concrete consequences of the existence of ultraintelligent machines would be. After all, one positive consequence may be that, despite the designation “ultraintelligent”, such machines would still (as it were) feel the need to work with human beings. A negative consequence may be that machines actually stop human beings from inventing anything.

In any case, I purposely left out the final clause from the Good quotation above. The full quote is the following:

“The first ultraintelligent machine is the last invention that man need ever make — *provided that the machine is docile enough to tell us how to keep it under control*.”

It’s the content of that last clause which worries so many people.

Firstly, it needs to be asked why Good used the word “machine” in the singular. Why didn’t he refer to “machines” in the plural? It’s odd to believe that a single machine, even if ultraintelligent, could make human invention redundant. It’s a lot less odd if Good were talking about machines in the plural. So perhaps he simply meant that once we have a single ultraintelligent machine, we’d soon have many more ultraintelligent machines too. And that certainly follows.

After all, one important factor in this debate is the ability of intelligent systems to copy themselves. That really does open the floodgates. That’s because if machines can copy themselves, then they can also modify, and thereby improve, those copies. Thus, these systems no longer need to rely on human beings when it comes to such improvements. (This is already the case in certain instances.) Does this mean that the improvements will come faster? Many believe that it does.
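The copy-then-improve loop just described can be sketched as a toy model. (The growth rule and all the numbers here are hypothetical illustrations, not claims about any real system.)

```python
def recursive_self_improvement(initial_capability: float,
                               factor: float,
                               generations: int) -> list[float]:
    """Toy model: each generation copies itself and improves the copy
    by a fixed multiplicative factor, with no human involvement."""
    capabilities = [initial_capability]
    for _ in range(generations):
        capabilities.append(capabilities[-1] * factor)
    return capabilities

# Hypothetical: start at capability 1.0, each copy is 1.5x better.
print(recursive_self_improvement(1.0, 1.5, 4))
# → [1.0, 1.5, 2.25, 3.375, 5.0625]
```

On this toy model, each generation’s gain compounds on the last, which is one way of cashing out the claim that self-improving systems would improve faster than human-driven engineering does.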

Some sceptics, as well as some AI evangelists, may argue that the juxtaposition of docility and ultraintelligence is (almost?) a contradiction in terms. How can anything that’s ultraintelligent be docile too? (Then again, there are many intelligent human beings who are docile.)

Some may conclude, as Good himself did, that this ultraintelligent machine must be programmed to be docile. This raises an obvious question: Why would an ultraintelligent machine abide by such programming in all circumstances? It can be supposed that, ultimately, this is a technical question. It’s certainly a hard question to answer.

Intelligence?

One small problem here is that we first need to define the word “intelligence”. That said, however we define that word, the possibility of the Singularity is still with us. It can be supposed that if “emotional intelligence” is part of the package of intelligence, then there may be problems. (See the later section.) Yet I doubt that would make much of an impact on the Singularity either.

In any case, AI machines are already more intelligent than humans in various respects. In which respects?… Yes, it’s here that we must manoeuvre back to the question of defining the word “intelligence” again. So let’s completely forget about defining “intelligence”…

AIs have direct access to more data than human beings. They often have quicker reasoning skills. They’re often better and quicker at constructing sentences. They can solve mathematical problems quicker than most human beings. Etc.

Indeed, many of these realities hint at the Singularity too.

AI Machines, Emotion and the Singularity

Earlier on, “emotional intelligence” was mentioned. Many would argue that so far artificial intelligence has been “all about logicality”, not emotional or social intelligence. It’s hard to shoehorn in a full discussion of emotional intelligence here. However, even without emotional intelligence, the Singularity may still occur. So it can be asked: if the Singularity occurs without AI machines instantiating emotional intelligence, would that automatically be a bad thing?

The thing is, aren’t human beings emotional in many different — sometimes contradictory — ways? Aren’t there varying degrees of human emotional intelligence? Added to that is the fact that emotion is often fused with belief and values. It rarely comes free. The basic upshot, then, is that when it comes to human beings, emotion isn’t always a good thing. So why would it automatically be a bad thing if AI machines didn’t have emotional intelligence or even emotions pure and simple?

One could even argue that a lack of emotion in AI machines is a positive, in that it’s doubtful that, without emotion, such machines would feel the need to wipe out humankind. It’s certainly the case that all human examples of genocide, mass murder, torture, etc. have been at least largely the result of human emotion. So rather than curtailing such things, emotions often actually “encourage” them. All that said, who’s to say that annihilation can’t be a purely rational choice?

This is a good time to bring up Star Trek’s Spock, who was half human and half Vulcan.

Vulcans are often deemed to be “purely logical” beings, whereas humans are creatures of emotion. Yet it’s clear, according to the writers and “experts”, that Vulcans aren’t purely logical at all. Instead, they’re deemed to control or even “suppress” their emotions.

AI Machines, Vulcans and Mass Murder

In a conversation with a chatbot, Spock’s adherence to the Vulcan quasi-utilitarian precept that “the needs of the many outweigh the needs of the few” was mentioned. I raised the possibility that “purely logical” Vulcans could become mass murderers. The chatbot agreed when it stated the following:

“Unlike humans, who might hesitate due to empathy or moral qualms, a fully logical Vulcan lacks emotional barriers to extreme actions. If mass murder aligns with their calculated optimal outcome (e.g., preventing a war or resource depletion), they could pursue it without guilt.”

There is some agreement with this position on utilitarianism and mass murder. For example, Laurie Calhoun (in her article ‘Killing, Letting Die, and the Alleged Necessity of Military Intervention’) wrote:

“Consistent utilitarians are ready and willing even to kill innocent people, if necessary. [ ] If more people will die if one does nothing than if one goes to war, then, in this view, one is morally obliged to go to war.”
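Calhoun’s “consistent utilitarian” reduces the choice to a bare comparison of expected death tolls. A minimal sketch of that decision rule (the function name and the numbers are hypothetical, purely for illustration):

```python
def consistent_utilitarian_choice(deaths_if_inaction: int,
                                  deaths_if_war: int) -> str:
    """Naive utilitarian rule in Calhoun's formulation: pick whichever
    course of action minimises the expected number of deaths, giving no
    weight to the killing/letting-die distinction."""
    return "go to war" if deaths_if_war < deaths_if_inaction else "do nothing"

# Hypothetical figures: inaction costs 10,000 lives, war costs 2,000.
print(consistent_utilitarian_choice(10_000, 2_000))  # → go to war
```

Note what the rule leaves out: the killing/letting-die distinction, empathy and moral qualms, which are exactly the “emotional barriers” the chatbot said a purely logical agent would lack.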

To state what should be obvious, moral qualms and even empathy haven’t necessarily got in the way of extreme actions when it comes to human beings. Indeed, moral qualms can actually lead to extreme actions. As for human empathy, isn’t it often very selective?

Thus, perhaps the limitations placed on AI machines and Vulcans by logic and rationality are stronger than the limitations placed on human beings by their moral qualms and empathy.

As for Vulcans or AI machines logically concluding that exterminating a hostile species is preferable to prolonged conflict: how many times in human history have (emotional) human beings exterminated hostile forces, cultures and communities on the pretext that the alternatives to doing so were even worse? Although in these cases utilitarianism was never mentioned, there was still a kind of utilitarian logic that at least partially underpinned such exterminations.

Let’s return to the chatbot. It went into scary (as well as rather predictable) self-referential territory when it (as it were) admitted that an AI machine

“might similarly justify extreme actions if its algorithm calculates that killing many saves more (e.g., a hypothetical AI managing resources during a crisis)”.

Again, the obvious point here is that human rationality, logic and emotion have led to mass murder and many other extreme actions too.
