Saturday, 20 December 2025

Spock, Grok, and AI: Logic and Emotion

The following essay discusses Spock’s logical nature, comparing it to that of both AI entities and human persons. Much of this is seen through the words of one specific AI entity: Grok 3. Indeed, the essay includes philosophical debates with Grok 3 on the nature of Spock and AI. In that sense, it’s self-referential. (Or, at least, an AI entity discussing AI is self-referential.)

David Hume debating with Spock. Image by Grok 3, under the instructions (or “prompts”) of the writer.
“‘Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger.”

David Hume, A Treatise of Human Nature (II.III.III).


The following citations from, and debate with, Grok 3 may well raise a few eyebrows. Some critics may even see it as the writer being lazy… or worse. Yet the fact is that I often find debates with Grok 3 more fruitful and enlightening than the philosophical and political debates I have with human persons on social media. Indeed, I find the level of debate superior to that of many professional philosophers. What’s more, Grok 3’s prose style is usually easier to read. (Its main vice is its endless recapping, something which human persons rarely do in such an obvious and annoying way.)

All that said, I’m not sure if I would ever go as far as a fellow writer on Medium. In his article, ‘How I beat ChatGPT… for now’, Roderick Graham (a Professor of Sociology at Old Dominion University) candidly says:

“In my courses, I encourage students to use ChatGPT to complete their assignments. I do not even require them to tell me when or how they use the application. [ ] [I] show students during class how I used ChatGPT to generate some of the course materials we are using!”

[See here for the many educators who agree with Graham.]

Graham then defends (or justifies) his position in this way:

“And yet, I believe I am getting higher quality work out of my students and they are learning more. In this way, I can say I have ‘beaten’ ChatGPT…for now. [ ]
“By building in context and corporeality, I’m asking students to situate knowledge in real circumstances and to engage their bodies and senses in the learning process. These are dimensions of education that remain stubbornly, and beautifully, human.
“ChatGPT doesn’t end higher education. At least not yet. In fact, it clears space for us to move past rote memorization and into what really matters: creativity, contextual thinking, and embodied engagement. Far from undermining originality, AI can make it easier for educators to design assignments that are richer, more demanding, and more human.”

How Logical Was Spock?

Why does logical inquiry matter? Does logic itself show us that logical inquiry matters?

Spock often stressed his own logical nature. Other members of the crew (specifically Doctor McCoy) picked up on his “purely logical” nature too. Yet if Spock had been purely logical, then why would he have been loyal to Captain Kirk, the Enterprise and Starfleet? Why would he have done anything at all?

Grok 3 believes that Spock might have been loyal to Captain Kirk, Starfleet and the Enterprise even if he weren’t “half human”. It has it that Spock’s

“actions would be driven purely by rational calculations of duty, utility, and Starfleet principles”.

Could Spock’s loyalty to Captain Kirk be cashed out as “the logical thing to do” at the time and over time? But what does that mean? That Kirk himself had access to the Logical? That the Enterprise was driven by “the greater good, or optimal survival probabilities for the Enterprise crew”? Again, where does logic fit into all this?

Of course, there may be logical ways to arrive at the greater good. However, caring about the greater good, and the greater good itself, aren’t themselves logical properties or conditions.

When it comes to human persons, most (or even all) are logical about certain issues and certain actions. Specifically, doesn’t almost everyone judge whether or not the “benefits outweigh the costs”? Even the beliefs and actions of utopians and ideological fanatics take benefits and costs into consideration. Of course, questions may need to be asked about how benefits and costs are cashed out.

Why Would Spock Care About Duty?

Grok’s answer:

“Spock would ‘care’ about duty not out of emotional attachment but because duty, as defined by Starfleet, represents a rational framework for achieving optimal outcomes.”

This statement raises a question about the reasons for accepting the aim of achieving optimal outcomes. It may be logical to abide by duty in certain circumstances. But what about the reasons for duty itself, as well as for the outcomes of such duty?

Perhaps duty squares with what Grok calls the “utilitarian calculus” for the “greater good”. But as the Scottish philosopher David Hume put it:

“‘Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger.”

The general point is that without emotion (or feelings), outcomes wouldn’t have mattered to Spock. If he had been programmed, then that would be a different issue. AI entities can be programmed to “care” about all sorts of things. Indeed, they can even be programmed to endorse the Overton window and/or the greater good. [See my ‘Grok 3 Looks Through the Overton Window’.]
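To make that point concrete, here is a minimal sketch (my own invention; the function names and weights are hypothetical, not anything from Grok or any real system) of what “programming an AI to care” amounts to. The “caring” is just an objective function whose weights were chosen by the designers:

```python
# A minimal, purely illustrative sketch: an AI's "caring" is just an
# objective function handed to it by its designers. All names and
# weights here are hypothetical.

def utility(outcome: dict) -> float:
    """Programmed 'caring': the weights are chosen, not logically derived."""
    return 0.7 * outcome["greater_good"] + 0.3 * outcome["crew_survival"]

def choose_action(options: dict) -> str:
    """Select the action whose predicted outcome maximises programmed utility."""
    return max(options, key=lambda action: utility(options[action]))

options = {
    "obey_orders": {"greater_good": 0.9, "crew_survival": 0.6},
    "save_friend": {"greater_good": 0.4, "crew_survival": 0.9},
}

print(choose_action(options))  # 'obey_orders' (0.81 beats 0.55)
```

The logic here is trivial (take a maximum); everything that looks like “caring” lives in the weights, which were decided before any logic ran.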

Grok 3 on Utilitarianism

Grok 3 used the word “utilitarian” a few times without my bringing the term up. It defined the word in this way:

“In this context, ‘utilitarian’ refers to a decision-making approach that prioritizes outcomes based on maximizing overall benefit or minimizing harm, often quantified as the ‘greatest good for the greatest number.’ It’s a rational framework where actions are judged by their consequences, not emotions or intrinsic morality.”

The utilitarian principle hinges on what Grok calls “the greatest benefit for the most relevant parties”. This means that utilitarianism only works on what’s already been philosophically, morally and politically decided upon. (In Spock’s case, that greatest good is that of the crew of the Enterprise, or of Starfleet, etc.) In Grok’s own words, Spock would

“preserve his own life to continue contributing to Starfleet’s mission, not out of self-interest but because his skills (e.g., scientific expertise) maximize group success”.

All this also applies to human beings. And it needn’t have anything to do with philosophical utilitarianism, or, perhaps, any kind of utilitarianism.
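Grok’s phrase “the most relevant parties” can be put in schematic form. In the hypothetical sketch below (again my own; the names and numbers are invented), who counts as a relevant party is an input to the utilitarian calculation, never an output of it:

```python
# Hypothetical sketch: the utilitarian sum is only defined once someone
# has already decided which parties count. Change that set, and the same
# calculus endorses a different action.

def total_benefit(effects: dict, relevant_parties: set) -> float:
    """Sum benefits only over the parties someone chose to count."""
    return sum(benefit for party, benefit in effects.items()
               if party in relevant_parties)

effects = {"crew": 10, "starfleet": 5, "hostile_species": -50}

print(total_benefit(effects, {"crew", "starfleet"}))                     # 15
print(total_benefit(effects, {"crew", "starfleet", "hostile_species"}))  # -35
```

The arithmetic is identical in both calls; only the prior (philosophical, moral, political) decision about who counts has changed.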

Vulcan Utilitarianism?

Utilitarianism isn’t necessarily tied to happiness, well-being or pleasure, though historically it has been. This may mean that Vulcans could indeed be utilitarians.

Again, traditional philosophical utilitarians deemed happiness and pleasure to be the main things to maximise. Thus, in that sense, Vulcans aren’t traditional utilitarians, which is what Grok recognised in the following words:

“Vulcan philosophy, rooted in Surak’s teachings (Star Trek: Enterprise, “Kir’Shara”), doesn’t explicitly aim to maximize happiness or pleasure, as classical utilitarianism does. Instead, it values logical consistency, order, and the preservation of knowledge or society, which may align with utilitarian outcomes but isn’t driven by a happiness metric.”

According to Grok, logical consistency, order, and the preservation of knowledge/society are the primary aims of Vulcans, not happiness or pleasure. Of course, if happiness or pleasure were the primary aims, then the emotional reality of Vulcans would be obvious…

But not so fast!

Now the same type of question (as was asked earlier) can be asked again:

Why do Vulcans aim at (or value) logical consistency, order, and the preservation of knowledge/society?

Why aren’t Vulcans primarily concerned with their families or even with themselves (i.e., as individuals)? This question can (kind of) be inverted by saying that most human persons, on the other hand, are primarily concerned with their families and/or themselves, but often claim to be primarily concerned with, say, justice, peace, the nation, their race/class, etc.

This is where the emotions of Vulcans kick in, despite their image (or self-image) of being “logical beings”.

Vulcans and Mass Murder

Spock’s adherence to the Vulcan precept that “the needs of the many outweigh the needs of the few” has many implications. For example, I raised the possibility that Vulcans could become mass murderers. Grok agreed when it stated the following:

“Unlike humans, who might hesitate due to empathy or moral qualms, a fully logical Vulcan lacks emotional barriers to extreme actions. If mass murder aligns with their calculated optimal outcome (e.g., preventing a war or resource depletion), they could pursue it without guilt. For instance, a Vulcan might logically conclude that exterminating a hostile species threatening the Federation is preferable to prolonged conflict, as long as the math checks out.”

There is some agreement on this point about utilitarianism and mass murder. For example, Laurie Calhoun (in her article ‘KILLING, LETTING DIE, AND THE ALLEGED NECESSITY OF MILITARY INTERVENTION’) wrote:

“Consistent utilitarians are ready and willing even to kill innocent people, if necessary. The alleged permissibility of ‘collateral damage’, and the euphemistic manner in which it is described by the military and the media alike, is perfectly in keeping with the utilitarian outlook. [ ] If more people will die if one does nothing than if one goes to war, then, in this view, one is morally obliged to go to war.
“[ ] Intentions are, strictly speaking, morally irrelevant in utilitarianism. If a military campaign does not lead to an overall improvement in the state of affairs for all members of the moral community, then, according to classical utilitarianism, the executors of war have acted wrongly, even if they had the best of intentions.”

If we return to Grok: it goes into scary (as well as predictable) self-referential territory when it “admits” that an AI

“might similarly justify extreme actions if its algorithm calculates that killing many saves more (e.g., a hypothetical AI managing resources during a crisis)”.
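It’s worth seeing how little machinery that “admission” requires. The deliberately naive sketch below (my own; the options and numbers are invented) ranks actions purely by expected net lives saved, with no deontological constraints at all. It’s the pattern being criticised, not a design proposal:

```python
# A deliberately naive utilitarian crisis-manager: rank actions purely by
# expected net lives saved. Hypothetical names and numbers throughout.

def expected_net_lives(action: dict) -> int:
    return action["lives_saved"] - action["lives_lost"]

actions = [
    {"name": "negotiate",          "lives_saved": 1000, "lives_lost": 200},
    {"name": "pre_emptive_strike", "lives_saved": 5000, "lives_lost": 3000},
]

best = max(actions, key=expected_net_lives)
print(best["name"])  # 'pre_emptive_strike': 2000 > 800, so "the math checks out"
```

Nothing in the code hesitates at the 3,000 lives lost. As Grok put it, the extreme action is pursued “as long as the math checks out”.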

I then made the obvious point that human rationality has led to mass murder too. (Grok itself mentions “the atomic bombings of Hiroshima and Nagasaki”.) Yet perhaps there’s a conflation here of rationality with logicality. As an instance of this, philosophers like Theodor Adorno and Max Horkheimer focussed their attention on the evils of “instrumental rationality”, not instrumental logicality. Of course, both are tied together. (Can you even have rationality without logicality?) Thus, Adorno and Horkheimer wrote (in their Dialectic of Enlightenment):

“Reason serves as a universal tool for the fabrication of all other tools, rigidly purpose-directed and as calamitous as the precisely calculated operations of material production, the results of which for human beings escape all calculation.”

In this sense, then, it can be said that logicality is a tool of rationality.

Are Even Vulcans Purely Logical?

In the early Star Trek shows, “full Vulcans” aren’t really convincing as purely logical beings, for many reasons. That said, the (seemingly?) fully logical nature of Vulcans is displayed by the case of Sarek in Star Trek III: The Search for Spock. As Grok told me, Vulcans

“pursue outcomes like Spock’s resurrection not out of love but because [Sarek’s] son’s knowledge and potential serve Vulcan and Federation goals”.

According to Grok, Vulcans “suppress emotions through discipline, not absence”. In more detail:

“Vulcans’ ability to suppress emotions relies on hormonal and neurological discipline, not an absence of feeling. This biological basis for control requires constant effort, unlike AI, which operates without such internal struggles.”

At another point Grok argued that

“[e]ven without emotion, a Vulcan might deduce that existence, coherence, or knowledge is preferable to nothingness or disorder”.

Yet elsewhere Grok also argued that Spock sacrificed himself for logical reasons. Can’t sacrificing oneself be seen as the least rational thing anyone could do?

Readers can play the same game here and state that Vulcans suppress their emotions for… emotional reasons. After all, why suppress emotions at all? It isn’t always the case that emotions get in the way of logical outcomes. And, as before, it may be logical to kill every possible enemy in the galaxy.

We can conclude that even though Grok ended with the words “unlike AI, which operates without such internal struggles”, emotions — even suppressed emotions — don’t really save the day here. Thus, AI entities may be in a better (ethical) position than Vulcans and human beings when it comes to the idea (or the logic) of killing every possible enemy.

Programming AI, Vulcans… and Humans

In my debate with Grok there was sometimes a problem with distinguishing Vulcans from AI entities. For example, Grok argued the following:

“While not machines, Vulcans are conditioned from childhood through rigorous training (e.g., the Kolinahr) to internalize logical principles as their core directive. This conditioning isn’t emotion but a deeply ingrained rational commitment to outcomes that align with Vulcan values.”

Does this mean that Vulcans are literally programmed by their educators? Aren’t human persons at least partially programmed by their educators too? Indeed, the Nazis and Soviet Communists were programmed, like Spock, to “prioritize collective welfare”.

Grok doesn’t think so. Firstly:

“An AI’s outputs are deterministic or probabilistically predictable based on its programming and inputs.”
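Grok’s distinction can be illustrated with a toy sketch (once more my own; no real AI works on canned templates like this). Given the same program and the same inputs, even the “probabilistic” output is exactly reproducible once the random seed is fixed:

```python
import random

# Toy illustration: an AI's outputs are either fixed functions of the
# input (deterministic) or draws from a fixed distribution (probabilistic).
# Fix the seed and the probabilistic branch repeats itself exactly.

def deterministic_reply(prompt: str) -> str:
    return f"Logical response to: {prompt}"

def probabilistic_reply(prompt: str, seed: int) -> str:
    rng = random.Random(seed)
    templates = ["Fascinating: {}", "Most illogical: {}", "Indeed: {}"]
    return rng.choice(templates).format(prompt)

print(deterministic_reply("Is loyalty logical?"))
print(probabilistic_reply("Is loyalty logical?", seed=42))
print(probabilistic_reply("Is loyalty logical?", seed=42))  # identical to the line above
```

Nothing analogous to introspection or to questioning one’s own axioms appears anywhere in such a system; its behaviour is fixed by program plus input.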

Yet, when it comes to Vulcans, as well as humans, Grok also stated the following:

“While their education shapes them, Vulcans can reflect on and adapt their logical frameworks. They’re capable of philosophical introspection, questioning their axioms or adjusting priorities based on new reasoning.”

Is Grok 3 assuming indeterminism when it comes to human beings and Vulcans? In layman’s terms, Grok 3 is implicitly stressing the importance of human free will. Yet it also acknowledges that free will and emotion (or at least strong feelings) have often led to mass murder. So is AI really more of a threat to humanity than humanity itself?
