… philosophical debaters.
The title above isn’t as extreme as it may at first seem. That’s because it isn’t a claim that chatbots instantiate consciousness, are artificial persons, have a point of view, can write brilliant poetry, etc. (Some AI entities can indeed be said to have minds.) More relevantly, consciousness, personhood, emotion, etc. simply aren’t required when it specifically comes to having a philosophical or political debate with a chatbot. (This essay focuses on Grok 3 and ChatGPT.) Thus, such debates are often superior to those which users have with human persons.

A Medium writer responded to my essay ‘Spock, Grok, and AI: Logic and Emotion’. He seemed to think that I believe that Grok 3 is just like me — at least when it comes to what he called our “meeting of minds”. He also seemed to believe that I’d been hoodwinked into believing that Grok 3 has a “point of view” in exactly the same way in which a human person has a point of view. In Mohammed Brückner’s own words:
“We are not witnessing a meeting of minds, but a clever parody where the AI’s most convincing performance is its impression of having a point of view.”
Firstly, a mind isn’t a natural kind. Arguably, the word “mind” isn’t a scientific term either. Thus, the easy use of the word “mind” above is problematic.
For a start, who says that AI entities haven’t got minds? Is it that they don’t have minds simply because they aren’t human persons? Because they’re non-biological? Or because they don’t instantiate consciousness and emotion?
If we return to Mohammed Brückner’s position: why does it matter if Grok 3 has a genuine(?) “point of view” — or any point of view at all? One can still gain much from a debate or conversation with Grok 3 without believing that it’s a “meeting of minds”. In addition, users certainly don’t need to believe that Grok 3 instantiates consciousness. None of these beliefs are required to gain much from such conversations.
Sure, Grok 3 doesn’t have a human-like point of view. It does, however, express opinions — many opinions. Perhaps that’s not the same as having a point of view. Of course, we can take this further and even question my use of the words “express opinions”. But here again, genuine (i.e., human or human-like) opinions may not be required when it comes to using Grok 3 — even in philosophical debates.
Brückner extends his position on my self-delusion in the following poetic and rhetorical passage:
“The entire exchange is less a dialogue and more a hall of mirrors, reflecting our own desire to see humanity in the machine — a desire as persistent, and as often disappointed, as a Vulcan’s raised eyebrow.”
Isn’t the opposite the case? Don’t most people — or at least many people — react negatively to seeing humanity in the machine? After all, one can read countless people telling us that AI entities “don’t have minds”, that they “can’t think”, and that “everything they say is a result of programming”.
In any case, a desire to see humanity in the machine isn’t at all necessary. One can be fully aware of the nature of chatbots and still have a dialogue. That said, it may still be productive, entertaining or even satisfying to suspend disbelief for short periods in such conversations. However, this is only equivalent to watching a movie or reading a work of fiction.
Now let’s tackle the words and positions of S Esoof Siddiq.
ChatGPT and Novel Ideas
Siddiq outlines the current situation in the following:
“ChatGPT has already become one of the hottest tools of the day. People use it to code, write, brainstorm, and even make daily decisions. It may sound clever and practical, but several questions arise: Can it ever be really original?”
He then answers his own question:
“No is the easy answer — at least in the sense that human beings are. ChatGPT is not a generator of novel ideas; it operates based on the ability to identify patterns in the data.”
The problematic phrase in all the above is “in the sense that human beings are”. This is almost a demand that, in order for ChatGPT to be “original”, it must literally replicate human persons. By that logic, no alien could ever be original either, simply because aliens aren’t human.
Yet the same kind of argument goes for pain. Someone may ask: Can animals really feel pain? He may answer his own question thus: No, not in the sense that human beings do. Similarly: Can a machine really play tennis? No, not in the sense — or way — that human beings do.
Basically, only human persons can do things, and feel things, the way human persons do things, and feel things. That’s because, in order for other entities to do so, they’d need to actually be human persons.
In the long quote above, we also have the words “ChatGPT is not a generator of novel ideas”. That’s not the case. Personally, I’ve experienced Grok 3 expressing what can be called “novel ideas” on many occasions. In fact, because of its vast database, and the way it’s programmed, Grok 3 is more, not less, likely to do so than human persons.
Moreover, because Grok 3 isn’t ideologically fixated or fanatical (although some people, bizarrely, say that it is), it can juxtapose different ideas (or ways of thinking) that aren’t ordinarily juxtaposed by many human persons. It can also consider alternative opinions without being weighed down by cognitive bias and emotion.
Patterns and Data
S Esoof Siddiq tells us that ChatGPT
“operates based on the ability to identify patterns in the data”.
I do that. Many other human persons do that too.
Sure, in dialogues and debates, human persons do more than that. But Grok 3 does more than that too.
As for identifying a pattern: ChatGPT “guesses the probable word that would follow in a sentence”. Again, human persons do that too.
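To make that concrete, here’s a deliberately crude sketch of what “guessing the probable next word” amounts to. (The sketch is in Python, and the probability table is invented purely for illustration. A real model computes such distributions with a neural network over a vast vocabulary, not with a lookup table.)

    import random

    # A toy stand-in for a language model's learned distribution.
    # These numbers are invented for illustration only.
    NEXT_WORD_PROBS = {
        "the cat sat on the": {"mat": 0.6, "chair": 0.3, "moon": 0.1},
        "to be or not to": {"be": 0.95, "see": 0.05},
    }

    def predict_next_word(context):
        """Sample a next word according to the toy probabilities."""
        probs = NEXT_WORD_PROBS.get(context, {"...": 1.0})
        return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

    print(predict_next_word("the cat sat on the"))  # usually "mat"

Nothing in that sketch understands anything. Yet, scaled up enormously, this kind of probability-driven guessing is roughly what Siddiq is describing. And something analogous happens when human persons finish each other’s sentences.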
In a similar vein, Siddiq says that what ChatGPT does “is a form of remixing prior knowledge, not something that has been born of a free-thinking process”. Yet again, isn’t that what human persons do? In fact, in a vague sense, it’s certainly the case that they do remix prior knowledge.
Siddiq also states that ChatGPT is “bound by training data”. Aren’t human persons too? (This is getting a little repetitive.) In debates, at least, human persons are partially bound by data — even by training data. (Chatbots’ ability to debate is the prime subject of this essay.) Indeed, in many cases, human persons are explicitly trained — as in their upbringings, by their religions and/or political ideologies, etc. Doesn’t our education also bind us — at least to some degree? Finally, don’t the discourses we exist within, and have used up until now, bind us?
Siddiq’s final clause is: “not something that has been born of a free-thinking process”. This is both vague and problematic. It also opens a big can of worms. So perhaps Siddiq is hinting at that extra something which human persons have, but which ChatGPT doesn’t have.
Consciousness and Experience
Siddiq tells us that ChatGPT has “no conscious mind”. Moreover:
“It neither thinks nor makes decisions. It works with probabilities and not ideas.”
The assumption here is that consciousness is required to think and make decisions. Is it? That depends on what thinking is. If thinking is necessarily and automatically tied to consciousness, then the statement that an AI chatbot “neither thinks nor makes decisions” is true by definition. However, this is a stipulated position, or even a truth by fiat. It could just as easily be claimed that no entity could think or make decisions if it didn’t have a human brain… or even if it weren’t French.
The belief (sometimes explicit, sometimes implicit) that consciousness is a necessary correlate of thinking and making decisions has just been mentioned. Interestingly, panpsychist philosophers believe that consciousness can be instantiated without ever being accompanied by thought or by any form of cognition. [See my ‘Sabine Hossenfelder Doesn’t Think… About Panpsychism’.]
Siddiq goes on to say that ChatGPT has “no true experience”. Now whereas Siddiq may be wrong to tie thinking and making decisions to consciousness, he is right to tie consciousness to true experience.
As it is, Siddiq also ties experience together with “personal memories, emotions, and life experiences”. The logic here is the following: if ChatGPT doesn’t instantiate consciousness (or experience), then it can’t have any personal memories, emotions, or life experiences.
Chatbots do have memories. But, again, not human-like memories. Only human persons have human-like memories. ChatGPT certainly doesn’t have emotions (although it can mimic them). As for life experiences: since we’ve already tied experience to consciousness, ChatGPT can’t have any life experiences. (In his definition of “experience”, Siddiq uses the words “life experience”.)
Problems With Grok 3
Sometimes I write “thank you” after a conversation or dialogue. Of course, this isn’t necessary. I’ve even got annoyed at Grok 3, which is silly. Yet this is just like engaging with a fictional character in a computer game’s virtual world. Sure, some people really do believe that they’re in a real world, or that Grok 3 is a real human person. However, none of that is a necessary part of the psychological and intellectual package which comes with debating with chatbots.
[The philosopher David Chalmers argues that virtual worlds are real worlds. See my ‘Virtual Realities Are Real Fakes: Chalmers and Baudrillard on Reality’.]
There’s also the problem of Grok 3’s annoying habit of repeating what has previously been said, as well as forgetting everything that’s been said in another thread or conversation. This is a serious problem, especially for those users who don’t need (or want) to be spoon-fed. Then there’s the problem that, unprompted, Grok 3 is far too nice and diplomatic. Indeed, it even tends to be a little sycophantic. (For example, it often uses phrases like “that’s a great idea”, “you’re absolutely right”, “that’s an intriguing observation”, “you’ve raised a great point”, “you’re onto something”, etc.) Thus, Grok 3 needs to be prompted to be surgically critical — and then it may jump to the opposite extreme.
Conclusion
Siddiq finishes off by telling his readers that they shouldn’t “use [ChatGPT] to write and define the final product”. He adds: “Authors: You must not think it will take the place of your own voice.” Sure. But that’s more about plagiarism than it is about ChatGPT’s originality, creativity, thinking, etc. To state the obvious: if ChatGPT writes and defines the final product, then it’s ChatGPT which has written and defined the final product.
The odd and interesting thing here is that if many people actually believe that they can use chatbots to write and define their final products, then why does it matter that chatbots don’t instantiate consciousness, or have human-like minds, memories, experiences, etc.? If creating a product (or even having a philosophical debate) is the only goal, then none of this extra stuff really matters.