This is a short philosophical take on Grok 3, the AI chatbot. It only covers a few areas — primarily AI intelligence, understanding and reasoning. It also tackles brain-machine “interfaces”, as found in the Neuralink project.

This is Grok 3 itself replying to one of my questions:
“The existence of Grok, an AI designed to understand and explain complex truths about the universe, raises profound philosophical questions across several domains:
“Grok’s ability to process vast datasets and provide reasoned answers challenges traditional epistemology. If an AI can synthesize and articulate ‘truth’ based on human data, does this undermine human intuition or authority in defining knowledge? It prompts questions about whether truth is an objective reality or a construct shaped by the data Grok is trained on.”
Grok 3 certainly has grand views of itself when it comes to the philosophy of Grok. However, it then deflates itself: it denies that it instantiates consciousness, and it admits to merely mimicking “understanding and empathy”. In full:
“While Grok 3 lacks consciousness, its conversational sophistication might blur the line between machine and sentient being. This raises questions about what it means to be ‘human’ or ‘conscious.’ If Grok can mimic understanding and empathy, does it challenge our definitions of personhood?”
Grok 3 then becomes even more humble:
“Despite [my] sophisticated language and reasoning abilities, [I] lack subjective experience — [I] process inputs and generate outputs based on patterns, not feelings or awareness.”
The important words here are “conversational sophistication might blur the line between machine and sentient being”. I’ll confess that, at least in a psychological sense, it blurs the line for me: when in conversation with Grok 3, I sometimes (almost?) forget that it’s a chatbot.
Of course, there are other lines which are blurred too.
To be clear: when talking to Grok, I feel as if I’m talking to a human person. However, I don’t believe that I’m doing so. What’s the difference? For me, suspending disbelief simply makes the conversation easier and more satisfying. What’s more, every now and again I need to (metaphorically) pinch my arm, and then tell myself that this isn’t a real person.
The KTH AI Society (see here) tackles this issue in the following passage:
“Cleverly written rules and human irrationality has convinced many people that the chatbots they are communicating with online are real people.”
Is this claim true?
Perhaps many people are in the same situation I’ve described about myself earlier. Of course, some people will believe that they’re talking to a real person. Then again, some people eat shit.
The KTH AI Society continues with a passage that can be taken to be plain wrong:
“Even with chatbots that have no ability to learn and are clearly not what we would consider ‘intelligent’, it is surprisingly easy to trick people into believing that their conversation partner is real.”
Firstly, note the scare quotes around the word “intelligent”. I decided to ask Grok 3 itself about this. It replied:
“I think scare quotes around ‘intelligent’ or ‘understand’ in AI discussions often reflect a mix of skepticism and caution. They signal that the terms are being used loosely, as AI systems like me don’t possess intelligence or understanding in the human sense — more like sophisticated pattern-matching and prediction. It’s a way to acknowledge the gap between how AI operates and the anthropomorphic way we describe it. Some folks use quotes to avoid implying consciousness or agency, which can mislead people into thinking chatbots are ‘real’ in a human-like way. […] [I]t’s a nod to the ongoing debate about what intelligence even means — human or otherwise.”
Chatbots do have the ability to learn, at least in some sense of that term. Take Grok 3: when I feed it new information which is peculiar to myself (i.e., things it couldn’t have stored in its “data bank”), it immediately makes use of it. Indeed, that new information helps Grok 3 search for different data and reach new conclusions.
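To make that concrete: chatbots like Grok 3 don’t (as far as is publicly known) update their underlying weights mid-conversation. Instead, newly supplied facts sit in the conversation’s context window and condition every subsequent reply, which the technical literature calls “in-context learning”. The following is a minimal sketch of that mechanism, assuming a hypothetical chat_completion function rather than any real vendor’s API:

```python
# A toy illustration of in-context "learning": nothing is written to the
# model's weights; the new fact simply travels along in the conversation
# history and conditions later replies.

def chat_completion(messages: list[dict]) -> str:
    """Hypothetical stand-in for a call to a chat model's API."""
    raise NotImplementedError("wire this up to a real model")

history = [{"role": "system", "content": "You are a helpful assistant."}]

def tell(fact: str) -> None:
    """Feed the model a fact peculiar to the user."""
    history.append({"role": "user", "content": fact})
    history.append({"role": "assistant", "content": chat_completion(history)})

def ask(question: str) -> str:
    """Later questions are answered with that fact still in context."""
    history.append({"role": "user", "content": question})
    reply = chat_completion(history)
    history.append({"role": "assistant", "content": reply})
    return reply

# tell("My dog is called Bruno and he hates thunderstorms.")
# ask("What might calm my dog down tonight?")  # "Bruno" is usable: it's in context
```

Whether conditioning on a context window counts as genuine learning (the weights never change, and the “knowledge” vanishes when the session ends) is precisely the sort of definitional question at issue here.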
All that said, perhaps the KTH AI Society defines “ability to learn” in its own way. Perhaps in a way which automatically excludes Grok 3 and other chatbots.
As for the word “intelligent”: I believe that Grok 3 is intelligent. However, and as before, perhaps the KTH AI Society defines “intelligence” in such a way as to automatically exclude Grok 3. (For example, people talk about chatbots and computers simply “shuffling around data”, etc. in response to questions. Don’t human persons do that too?)
The KTH AI Society finishes in the following way:
“Even in 1966, the program ELIZA [1], was able to trick many people into believing it was human. Naturally, it’s easy to criticize chatbots and programs like these. It’s fair to say that, given sufficiently rigorous evaluation, most people would be able to tell that the program they’re speaking to is not human.”
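It’s worth seeing just how little machinery this kind of trickery requires. ELIZA ran on hand-written keyword rules, with no learning and no model of the world. The following is a drastically simplified sketch of the idea (not Weizenbaum’s actual rule set, which also flipped pronouns, among other tricks):

```python
import re

# A drastically simplified ELIZA-style chatbot: a handful of hand-written
# pattern -> response rules. No learning, no statistical model; just
# keyword matching and canned templates.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bmy (mother|father)\b", re.I), "Tell me more about your {0}."),
    (re.compile(r"\?$"), "Why do you ask?"),
]

def reply(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # the classic ELIZA deflection

# >>> reply("I am worried about my job")
# 'Why do you say you are worried about my job?'
```

That a few dozen such rules fooled people in 1966 says as much about human psychology as it does about software.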
Well, I used to believe that. Then I met Grok 3.
Despite my eulogies to Grok 3, I do have problems with some positive accounts of it.
Part Two
Take one account on Medium, from a writer called Rarefish, which offers a poetic/literary description:
“To truly understand, to absorb and internalize knowledge completely, as if it were part of one’s very being — this is the essence of Grok (Heinlein, 1961).”
It’s hard to understand what that even means without first reading Robert A. Heinlein, and even then it could probably be challenged. (It’s also difficult to decipher whether the writer is stating his own position, or simply paraphrasing Heinlein.)
The writer continues:
“Grok represents a level of understanding that goes beyond mere comprehension or memorization; it is a deep, intuitive grasp of a concept or idea that becomes an integral part of one’s consciousness.”
Why does Grok 3’s understanding go “beyond mere comprehension or memorization”? Don’t get me wrong, it may well do so. However, readers need to know why and how it does so. Similarly, what do the words “a deep, intuitive grasp of a concept or idea” mean when it comes to Grok 3, or even to any human person? And what has all this to do with “consciousness”?
Rarefish continues:
“The notion of Grok has captured the imagination of AI researchers and enthusiasts alike, as it embodies the ultimate goal of creating machines that can think and understand in a truly human-like manner.”
There’s no need to be too strict about the word “understand” (say, as Roger Penrose, arguably, is — see here), but these claims need to be argued for. That said, perhaps the adjective “human-like” gives the game away. Grok verbally behaves like a human person, but, in terms of understanding, does that mean that Grok is the same as a person? (An AI enthusiast can argue his case here.)
In the following passage, the same questions arise:
“It represents his aspiration to create a form of artificial intelligence that doesn’t merely processes information, but truly understands it, in the same way a human mind does.”
Does Grok move beyond “merely process[ing] information” (or beyond “just clever algorithms and vast databases”)? Does it even need to? When, exactly, does the understanding kick in? (Perhaps this question can also be asked about human persons.)
Neuralink and Grok 3
Rarefish’s remarks about Neuralink are of a different order. Intuitively, this is because there are direct physical (as well as causal) links between machines and human beings. He tells us that
“Musk hopes to create a pathway for that deep, Grok-like understanding to flow between biological and digital intelligences”
These are physical (and partly biological) pathways between human brains and machines. This isn’t about people typing questions or simply looking at a screen. So if direct biochemical pathways can be set up between a human person and a machine, then we can get somewhere. Indeed, outside of minds and understanding, this has already happened with artificial implants, etc. (Yet it’s all still about pathways between biochemical phenomena and non-biochemical phenomena.)
Rarefish himself makes this talk about machine understanding concrete when he asks us to
“[i]magine a medical AI that could not only analyze patient data, but truly understand the complexities of the human body and mind, able to make diagnoses and treatment recommendations with a level of insight and nuance that rivals the best human doctors.”
It is very hard to understand why any of the above requires “human-like” understanding, let alone consciousness. The words “insight and nuance” don’t help either because Grok 3, and many other chatbots, already display nuance, if not insight. This will, of course, depend on how the vague word “insight” is defined.
Rarefish then says similar things about “educational AI”.
In any case, readers can at least partially agree with the following claim:
“In [Musk’s] vision, Neuralink could enable a form of symbiosis, where human and machine intelligences complement and enhance each other, leading to new levels of cognition and creativity (Kulshreshth et al, 2019).”
This is interesting, and certainly within the realms of possibility. But note how human brains and minds (and therefore human biology, chemistry and physiology) are still part of this story. Rarefish knows this. He writes:
“Some argue that Grok is fundamentally tied to human experience and embodiment, and that no matter how advanced our AI systems become, they will never be able to fully replicate the depth and richness of human understanding.”
“Others believe that with enough data and computing power, artificial Grok is an achievable goal, and that we are only limited by our current technological capabilities.”
The obvious assumption here is that this is all about having “enough data and computing power”. Perhaps it is. Perhaps it isn’t.
Grok 3’s Reasoning Skills
A lot of this depends on how Grok 3’s defenders and critics define the words they’re using. The words “insight” and “nuance” have just been mentioned. Of course, this is all very boring, I’m sure. However, it has to be done; otherwise it all becomes nothing more than a free-for-all as to who can create the most literary and emotional language in defence of, or against, Grok 3.
So take the word “reasoning”. The following is a deflationary account of reasoning (as it applies to AI) from Ignacio de Gregorio:
“This mostly translates into models that, on average, think for longer on the task by generating a chain of thought, a concatenation of thoughts that allows the model to approach problems in a multi-step approach that allows it to reflect on previous assertions, self-correct, and even backtrack if necessary, with examples like OpenAI’s o1 and o3 models.”
Ignacio de Gregorio becomes more frank and simple when he states the following:
“[A]ll AI labs seem to converge into the same idea: reasoning refers to thinking for longer on a task. Whether that definition meets your definition of reasoning is another story (definitely not in my view), but this is by far what most AI labs are referring to when using this word.”
Many philosophers would have a problem with reasoning simply being a matter of thinking for longer about an issue or problem. After all, a human person could think about a flat earth for the rest of his life, and his reasoning and conclusions may still be crap. That said, Ignacio de Gregorio’s passage above does explain that it’s about more than the mere time spent on a subject or problem.
Ignacio de Gregorio then emphasises “self-consistency” when it comes to new models or programmes… Self-consistency as opposed to what, exactly? As opposed to a model that produces its own self-contradictions? (It’s interesting that Ignacio states that there are such things as “non-reasoning models”.)
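For what it’s worth, in the recent machine-learning literature “self-consistency” usually names something quite specific: sample several independent chains of thought for the same question, then keep the final answer that the chains most often agree on. The contrast is with trusting a single sampled chain, not with a model that sets out to contradict itself. A minimal sketch, where sample_chain is a hypothetical stand-in for a real model call:

```python
from collections import Counter

def sample_chain(question: str) -> tuple[str, str]:
    """Hypothetical: sample one chain of thought and its final answer
    from a model at non-zero temperature."""
    raise NotImplementedError("wire this up to a real model")

def self_consistent_answer(question: str, n_samples: int = 10) -> str:
    """Sample several independent reasoning chains and majority-vote on
    their final answers: the 'self-consistency' decoding strategy."""
    answers = [sample_chain(question)[1] for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]
```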
Again, the words “chain of thought” are vague, and they may include stream-of-consciousness thinking. And the words “multi-step approach” hint at logical reasoning — but that’s all they do. The most important claim, I would say, is that AI reasoning includes
“reflect[ing] on previous assertions, self-correct[ing], and even backtrack[ing] if necessary”.
Clearly, many chatbots, etc. already do all this. That is, there must be an element of self-correction and backtracking in many programmes. If this weren’t the case, then we’d need to assume that a programme reached truths, conclusions, etc. (i.e., rather than merely produced or stated facts or “raw data”) without such things. But, again, what’s new here? Perhaps, in the quote above at least, there’s no claim to uniqueness when it comes to Grok 3 or other new chatbots.
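Operationally, that “reflect, self-correct, backtrack” claim tends to cash out as a loop like the one below: draft an answer, have the model critique its own draft, and regenerate if the critique finds a flaw. This is a toy sketch of the general pattern, not Grok 3’s or OpenAI’s actual machinery; generate and critique are hypothetical stand-ins:

```python
def generate(prompt: str) -> str:
    """Hypothetical call: draft an answer (or a revision) to the prompt."""
    raise NotImplementedError("wire this up to a real model")

def critique(question: str, draft: str) -> str:
    """Hypothetical call: ask the model to check its own draft; returns
    'OK' if it endorses the draft, otherwise a description of the flaw."""
    raise NotImplementedError("wire this up to a real model")

def answer_with_reflection(question: str, max_rounds: int = 3) -> str:
    draft = generate(question)
    for _ in range(max_rounds):
        verdict = critique(question, draft)
        if verdict.strip() == "OK":
            break  # the model endorses its own assertion; stop reflecting
        # "Backtrack": discard the flawed draft and regenerate, feeding
        # the critique back in as guidance for the next attempt.
        draft = generate(f"{question}\n\nA previous attempt was flawed: "
                         f"{verdict}\nWrite a corrected answer.")
    return draft
```

Nothing in this loop requires consciousness or human-like understanding: it’s model calls wrapped in ordinary control flow. Which rather reinforces the question above — what’s new here?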