Saturday, 20 December 2025

Spock, Grok, and AI: Logic and Emotion

The following essay discusses Spock’s logical nature, as it can be compared to AI entities and human persons. Much of this is seen through the words of one specific AI entity — Grok 3. Indeed, this essay involves philosophical debates with Grok 3 on the nature of Spock and AI. In that sense, it’s self-referential. (Or at least an AI entity discussing AI is self-referential.)

David Hume debating with Spock. Image by Grok 3, under the instructions (or “prompts”) of the writer.
“‘Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger.”

David Hume, Treatise of Human Nature (II.III.III).


The following citations from, and debate with, Grok 3 may well raise a few eyebrows. Some critics may even see it as the writer being lazy… or worse. Yet the fact is that I often find debates with Grok 3 to be more fruitful and enlightening than the philosophical and political debates I have with human persons on social media. Indeed, I find the level of debate superior to that of many professional philosophers. What’s more, Grok 3’s prose style is usually easier to read. (Its main vice is its endless recapping, something which human persons rarely do in such an obvious and annoying way.)

All that said, I’m not sure if I would ever go as far as a fellow writer on Medium. In his article, ‘How I beat ChatGPT… for now’, Roderick Graham (a Professor of Sociology at Old Dominion University) candidly says:

“In my courses, I encourage students to use ChatGPT to complete their assignments. I do not even require them to tell me when or how they use the application. [ ] [I] show students during class how I used ChatGPT to generate some of the course materials we are using!”

[See here for the many educators who agree with Graham.]

Graham then defends (or justifies) his position in this way:

“And yet, I believe I am getting higher quality work out of my students and they are learning more. In this way, I can say I have ‘beaten’ ChatGPT…for now. [ ]
“By building in context and corporeality, I’m asking students to situate knowledge in real circumstances and to engage their bodies and senses in the learning process. These are dimensions of education that remain stubbornly, and beautifully, human.
“ChatGPT doesn’t end higher education. At least not yet. In fact, it clears space for us to move past rote memorization and into what really matters: creativity, contextual thinking, and embodied engagement. Far from undermining originality, AI can make it easier for educators to design assignments that are richer, more demanding, and more human.”

How Logical Was Spock?

Why does logical inquiry matter? Does logic itself show us that logical inquiry matters?

Spock often stressed his own logical nature. Other members of the crew (specifically Doctor McCoy) picked up on his “purely logical” nature too. Yet if Spock had been purely logical, then why would he have been loyal to Captain Kirk, the Enterprise and Starfleet? Why would he have done anything at all?

Grok 3 believes that Spock might have been loyal to Captain Kirk, Starfleet and the Enterprise even if he weren’t “half human”. It has it that Spock’s

“actions would be driven purely by rational calculations of duty, utility, and Starfleet principles”.

Could Spock’s loyalty to Captain Kirk be cashed out as “the logical thing to do” at the time and over time? But what does that mean? That Kirk himself had access to the Logical? That the Enterprise was driven by “the greater good, or optimal survival probabilities for the Enterprise crew”? Again, where does logic fit into all this?

Of course, there may be logical ways to arrive at the greater good. However, caring about the greater good, and the greater good itself, aren’t themselves logical properties or conditions.

When it comes to human persons, most — or even all — are logical when it comes to certain issues and certain actions. Specifically, doesn’t almost everyone judge whether or not the “benefits outweigh the costs”? Even the beliefs and actions of utopians or ideological fanatics take into consideration benefits and costs. Of course, questions may need to be asked about how benefits and costs are cashed out.

Why Would Spock Care About Duty?

Grok’s answer:

“Spock would ‘care’ about duty not out of emotional attachment but because duty, as defined by Starfleet, represents a rational framework for achieving optimal outcomes.”

This statement raises a question about the reasons for accepting the aim of achieving optimal outcomes. It may be logical to abide by duty in certain circumstances. But what about the reasons for duty itself, as well as for the outcomes of such duty?

Perhaps duty squares with what Grok calls the “utilitarian calculus” for the “greater good”. But as the Scottish philosopher David Hume put it:

“‘Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger.”

The general point is that without emotion (or feelings), outcomes wouldn’t have mattered to Spock. If he had been programmed, then that would be a different issue. AI entities can be programmed to “care” about all sorts of things. Indeed, they can even be programmed to endorse the Overton window and/or the greater good. [See my ‘Grok 3 Looks Through the Overton Window’.]

Grok 3 on Utilitarianism

Grok 3 used the word “utilitarian” a few times without me bringing this word up. It defined the word in this way:

“In this context, ‘utilitarian’ refers to a decision-making approach that prioritizes outcomes based on maximizing overall benefit or minimizing harm, often quantified as the ‘greatest good for the greatest number.’ It’s a rational framework where actions are judged by their consequences, not emotions or intrinsic morality.”
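Grok’s definition can be caricatured as a simple decision rule: score each candidate action by its total benefit minus its total harm, then pick the maximum, with intentions and “intrinsic morality” playing no role at all. A minimal sketch (the action names and numbers are entirely hypothetical, invented for illustration):

```python
def utilitarian_choice(actions):
    """Pick the action with the greatest net benefit (benefit minus harm).
    Intentions and intrinsic morality never enter the calculation -- only
    consequences do, as in Grok's definition of the 'utilitarian calculus'."""
    return max(actions, key=lambda a: a["benefit"] - a["harm"])

# Hypothetical options facing a starship captain:
options = [
    {"name": "negotiate", "benefit": 8, "harm": 2},  # net 6
    {"name": "retreat",   "benefit": 5, "harm": 1},  # net 4
    {"name": "attack",    "benefit": 9, "harm": 6},  # net 3
]

best = utilitarian_choice(options)
print(best["name"])  # -> negotiate
```

Note what the sketch leaves undecided: who counts as a “relevant party”, and how benefit and harm are quantified in the first place. Those are exactly the things that, as argued below, must be settled philosophically before the calculus can run at all.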

The utilitarian principle hinges on what Grok calls “the greatest benefit for the most relevant parties”. This means that utilitarianism only works on what’s already been philosophically, morally and politically decided upon. (In Spock’s case, that greatest good is that of the crew of the Enterprise, or Starfleet, etc.) In Grok’s own detail, Spock would

“preserve his own life to continue contributing to Starfleet’s mission, not out of self-interest but because his skills (e.g., scientific expertise) maximize group success”.

All this also applies to human beings. And it needn’t have anything to do with philosophical utilitarianism, or, perhaps, any kind of utilitarianism.

Vulcan Utilitarianism?

Utilitarianism isn’t necessarily tied to happiness, well-being or pleasure, though historically it has been. This may mean that Vulcans could indeed be utilitarians.

Again, traditional philosophical utilitarians deemed happiness and pleasure to be the main things to maximise. Thus, in that sense, Vulcans aren’t traditional utilitarians, which is what Grok recognised in the following words:

“Vulcan philosophy, rooted in Surak’s teachings (Star Trek: Enterprise, “Kir’Shara”), doesn’t explicitly aim to maximize happiness or pleasure, as classical utilitarianism does. Instead, it values logical consistency, order, and the preservation of knowledge or society, which may align with utilitarian outcomes but isn’t driven by a happiness metric.”

According to Grok, logical consistency, order, and the preservation of knowledge/society are the primary aims of Vulcans, not happiness or pleasure. Of course, if happiness or pleasure were the primary aims, then the emotional reality of Vulcans would be obvious…

But not so quick!

Now the same type of question (as was asked earlier) can be asked again:

Why do Vulcans aim at (or value) logical consistency, order, and the preservation of knowledge/society?

Why aren’t Vulcans primarily concerned with their families or even with themselves (i.e., as individuals)? This question can (kind of) be inverted by saying that most human persons, on the other hand, are primarily concerned with their families and/or themselves, but often claim to be primarily concerned with, say, justice, peace, the nation, their race/class, etc.

This is where the emotions of Vulcans kick in, despite their image (or self-image) of being “logical beings”.

Vulcans and Mass Murder

Spock’s adherence to the Vulcan precept that “the needs of the many outweigh the needs of the few” has many implications. For example, I raised the possibility that Vulcans could become mass murderers. Grok agreed when it stated the following:

“Unlike humans, who might hesitate due to empathy or moral qualms, a fully logical Vulcan lacks emotional barriers to extreme actions. If mass murder aligns with their calculated optimal outcome (e.g., preventing a war or resource depletion), they could pursue it without guilt. For instance, a Vulcan might logically conclude that exterminating a hostile species threatening the Federation is preferable to prolonged conflict, as long as the math checks out.”

There is some agreement on this point about utilitarianism and mass murder. For example, Laurie Calhoun (in her article ‘KILLING, LETTING DIE, AND THE ALLEGED NECESSITY OF MILITARY INTERVENTION’) wrote:

“Consistent utilitarians are ready and willing even to kill innocent people, if necessary. The alleged permissibility of ‘collateral damage’, and the euphemistic manner in which it is described by the military and the media alike, is perfectly in keeping with the utilitarian outlook. [ ] If more people will die if one does nothing than if one goes to war, then, in this view, one is morally obliged to go to war.
“[ ] Intentions are, strictly speaking, morally irrelevant in utilitarianism. If a military campaign does not lead to an overall improvement in the state of affairs for all members of the moral community, then, according to classical utilitarianism, the executors of war have acted wrongly, even if they had the best of intentions.”

If we return to Grok, it goes into scary (as well as predictable) self-referential territory when it “admits” that an AI

“might similarly justify extreme actions if its algorithm calculates that killing many saves more (e.g., a hypothetical AI managing resources during a crisis)”.

I then made the obvious point that human rationality has led to mass murder too. (Grok itself mentions “the atomic bombings of Hiroshima and Nagasaki”.) Yet perhaps there’s a conflation here of rationality with logicality. As an instance of this, philosophers like Theodor Adorno and Max Horkheimer focussed their attention on the evils of “instrumental rationality”, not instrumental logicality. Of course, both are tied together. (Can you even have rationality without logicality?) Thus, Adorno and Horkheimer wrote (in their Dialectic of Enlightenment):

“Reason serves as a universal tool for the fabrication of all other tools, rigidly purpose-directed and as calamitous as the precisely calculated operations of material production, the results of which for human beings escape all calculation.”

In this sense, then, it can be said that logicality is a tool of rationality.

Are Even Vulcans Purely Logical?

In the early shows of Star Trek, “full Vulcans” aren’t really convincing as being purely logical beings, and for many reasons. That said, the (seemingly?) fully logical nature of Vulcans is displayed by the case of Sarek in Star Trek III: The Search for Spock. As Grok told me, Vulcans

“pursue outcomes like Spock’s resurrection not out of love but because [Sarek’s] son’s knowledge and potential serve Vulcan and Federation goals”.

According to Grok, Vulcans “suppress emotions through discipline, not absence”. In more detail:

“Vulcans’ ability to suppress emotions relies on hormonal and neurological discipline, not an absence of feeling. This biological basis for control requires constant effort, unlike AI, which operates without such internal struggles.”

At another point Grok argued that

“[e]ven without emotion, a Vulcan might deduce that existence, coherence, or knowledge is preferable to nothingness or disorder”.

Yet elsewhere Grok also argued that Spock sacrificed himself for logical reasons. Can’t sacrificing oneself be seen as the least rational thing anyone could do?

Readers can play the same game here and state that Vulcans suppress their emotions for… emotional reasons. After all, why suppress emotions at all? It isn’t always the case that emotions get in the way of logical outcomes. And, as before, it may be logical to kill every possible enemy in the galaxy.

We can conclude that even though Grok ended with the words “unlike AI, which operates without such internal struggles”, emotions — even suppressed emotions — don’t really save the day here. Thus, AI entities may be in a better (ethical) position than Vulcans and human beings when it comes to the idea (or the logic) of killing every possible enemy.

Programming AI, Vulcans… and Humans

In my debate with Grok there was sometimes a problem with distinguishing Vulcans from AI entities. For example, Grok argued the following:

“While not machines, Vulcans are conditioned from childhood through rigorous training (e.g., the Kolinahr) to internalize logical principles as their core directive. This conditioning isn’t emotion but a deeply ingrained rational commitment to outcomes that align with Vulcan values.”

Does this mean that Vulcans are literally programmed by their educators? Aren’t human persons at least partially programmed by their educators too? Indeed, the Nazis and Soviet Communists were programmed, like Spock, to “prioritize collective welfare”.

Grok doesn’t think so. Firstly:

“An AI’s outputs are deterministic or probabilistically predictable based on its programming and inputs.”

Yet, when it comes to Vulcans, as well as humans, Grok also stated the following:

“While their education shapes them, Vulcans can reflect on and adapt their logical frameworks. They’re capable of philosophical introspection, questioning their axioms or adjusting priorities based on new reasoning.”

Is Grok 3 assuming indeterminism when it comes to human beings and Vulcans? Thus, Grok 3, in layman’s terms, is implicitly stressing the importance of human free will. Yet it also acknowledges that free will and emotion (or at least strong feelings) have often led to mass murder. So is AI really more of a threat to humanity than humanity itself?

Karl Popper on Scientific Myths, Pseudo-Science, and Falsifiability

Wiki Commons. Source here.

What Karl Popper argued in a 1953 seminar (from which most of his quoted words in this essay are taken) influenced various “radical” philosophers of science (such as Thomas Kuhn and Paul Feyerabend). For example, Popper wrote that “science must begin with myths”. However, he immediately qualified that statement by saying “and with the criticism of myths”. In other words, Popper believed that science doesn’t begin with observations or experiments. Instead, it begins with conjectures or even stabs in the dark. Such things work as myths. However, those myths are then criticised or qualified. Indeed, they become scientific precisely because they’re criticised. Pseudo-scientific theories, on the other hand, aren’t criticised by their upholders. They’re merely added to, enlarged, and, perhaps, refined.

Popper on Pseudo-Science

One way of putting Popper’s position is to argue that it doesn’t matter if scientific theories (or the statements within them) are shown to be false or inaccurate because they remain scientific nonetheless. In other words, they aren’t rendered unscientific or (non-scientific) by being shown to be false. In a strong sense, then, their truth status is irrelevant. What matters is whether the theory is scientific or pseudo-scientific.

This raises the obvious question as to what makes a theory genuinely scientific. According to Popper, the traditional answer to this question is that it “appeals to observation and experiment”. Popper wasn’t happy with that answer. Why? It’s because this is a “method” that is also

“exemplified by astrology, with its stupendous mass of empirical evidence based on observation — on horoscopes and on biographies”.

To many sceptics, using the words “stupendous mass of empirical evidence based on observation” may seem odd when applied to astrology. Popper believed that it’s not odd at all. After all, the stars are observed by astrologers. (Therefore astrology is, in some sense, empirical.) Personal lives and the events within them (or biographies) are observed by astrologers. (Therefore astrology is empirical.) These two things provide the empirical and observational basis of astrology. Not only that: astrological theories (or predictions) are often (seemingly?) “verified”.

Yet, according to Popper, astrology is a pseudo-science. Now Popper had a lot to explain after arguing all that.

In very basic terms, astrology isn’t a science because its theories and predictions can’t be falsified. As is now commonly known, Popper placed a lot of stress on falsifiability.

Popper on Confirmation and Prediction

Retrospectively, one can see Popper’s point about it being easy for almost any theory to be “confirmed” in some way or another. At least (to use Popper’s own examples) astrologers, Freudians, Marxists, etc. often tell us that their theories have been confirmed. (In the case of Marx, you often read the phrase: “If anything, Marx’s theories are more relevant today than ever before.”) All this, obviously, raises the question as to what confirmation is, and how particular theories are confirmed. If we don’t have any answers to those questions, then it’s not a surprise that Popper claimed that

“[i]t is easy to obtain confirmations, or verifications, for nearly every theory — if we look for confirmations”.

Popper helps us here with that last clause — “if we look for confirmations”. Thus, if we only look for confirmations, then we aren’t looking for disconfirmations… or falsifications. On the surface, however, even if we look for confirmations, then that doesn’t mean that the confirmations aren’t… well, confirmations. All this means that confirmations themselves are suspect from a Popperian perspective.

This leads to the fact that Popper himself had a very particular take on confirmations. He tied them to what he called “risky predictions”. This is odd. Or at least it’s a very physics-based notion of confirmation. In other words, why do confirmations need to be tied to predictions at all?

If my theory is that all swans are white, and I confirm that by taking a photo of a white swan, then where is the prediction here? Of course, it might have been the case that I predicted that all future swans will be white. But confirmation still doesn’t seem to be necessarily linked to prediction. Then again, how else could my theory that all swans are white be cashed out? Surely it can only be cashed out on the unspoken assumption that all future swans will be white.

Popper himself talked in terms of an “event” being “incompatible with the theory”. Is observing a black swan an event? Well, the singular observation of a black swan can be interpreted as being an event. After all, it involves some kind of sequence of causally-related occurrences. That said, Popper probably had experimental tests in mind. In other words, an experimental test is an event. Thus, the mere observation of a black swan isn’t an experimental test. Therefore, it’s not part of science at all.
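The asymmetry Popper is pointing at can be put very crudely in code: no finite run of white swans can verify the universal claim “all swans are white”, but a single black swan falsifies it. A toy sketch (the swan data is hypothetical, and this deliberately ignores Popper’s point that a mere observation may not count as an experimental test):

```python
def falsified(universal_claim, observations):
    """A universal claim ('all X are Y') is falsified by a single
    counterexample; no finite run of confirming cases can verify it."""
    return any(not universal_claim(obs) for obs in observations)

all_swans_white = lambda swan: swan == "white"

# A thousand confirmations leave the claim standing but unproven...
print(falsified(all_swans_white, ["white"] * 1000))            # -> False
# ...while one black swan refutes it outright.
print(falsified(all_swans_white, ["white"] * 1000 + ["black"]))  # -> True
```

The design point mirrors Popper’s: the function can return `True` (refuted) on one counterexample, but a `False` result never establishes the claim — it only reports that no refuting event has yet been observed.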

Falsifiability Isn’t Enough

Popper stated that

“the criterion of the scientific status of a theory is its falsifiability, or refutability, or testability”.

It may seem odd that the scientific status of a theory can be established by a single criterion, even if that criterion is important. Surely this was Popper attempting to be original and conclusive by offering something that would sometimes gain the status of a philosophical soundbite. Indeed, isn’t it ridiculous to imagine that the status of a theory can be summed up in such a way? This isn’t even a question of the likelihood of many counter-examples being discovered.

So, as it stands, falsifiability seems too categorical and simplistic.

Even a critic of astrology may believe that (mere) falsifiability can’t be that important when it comes to questioning astrology. Yes, if a prediction is falsified, then that’s a good (or a bad) thing. But there must be more to it than that.

This brings up another problem with Popper’s stress on falsifiability: he attempted to make it the be all and end all of not only (genuine) science, but of his entire philosophy of science. Thus, because of this absolute centrality of falsifiability, counter-examples to Popper’s idea were easy to find… if not always straight away.

Let’s come at the falsification principle from another angle.

The philosopher John Cottingham (expressing Popper’s position) said that

“[t]here is no standard procedure, no set of logical rules, for arriving at theories”.

There may be no such things for arriving at theories. However, Popper believed that falsifiability alone established the scientific status of a theory once it had been arrived at. Thus, Popper acknowledged the complexity of arriving at scientific theories. However, he didn’t do the same thing when it came to establishing a theory’s scientific status. Does this mean, then, that “trial and error” doesn’t apply to Popper’s own principle of falsification? (Is claiming that it has a “meta” status a good enough response?)

Tying into falsification, it’s hard to accept that all — or even most — scientists are always “trying [their] best to show that [their theories] are erroneous”. Yet perhaps that doesn’t matter (i.e., in the long run) because science as a whole is always moving forward. Alternatively, it doesn’t matter because other scientists will eventually show the scientific community that the original theories are erroneous.