Saturday, 20 December 2025

Even Without Consciousness and Emotion, Chatbots are Superior…

 … philosophical debaters.

The title above isn’t as extreme as it may at first seem. That’s because it isn’t a claim that chatbots instantiate consciousness, are artificial persons, have a point of view, can write brilliant poetry, etc. (Some AI entities can indeed be said to have minds.) More relevantly, consciousness, personhood, emotion, etc. simply aren’t required when it specifically comes to having a philosophical or political debate with a chatbot. (This essay focuses on Grok 3 and ChatGPT.) Thus, such debates are often superior to those which users have with human persons.

Image by Grok 3, under the writer’s prompt.

A Medium writer responded to my essay ‘Spock, Grok, and AI: Logic and Emotion’. He seemed to think that I believe that Grok 3 is just like me — at least when it comes to what he called our “meeting of minds”. He also believes that I’ve been hoodwinked into believing that Grok 3 has a “point of view” in exactly the same way in which a human person has a point of view. In Mohammed Brückner’s own words:

“We are not witnessing a meeting of minds, but a clever parody where the AI’s most convincing performance is its impression of having a point of view.”

Firstly, a mind isn’t a natural kind. Arguably, the word “mind” isn’t a scientific term either. Thus, the easy use of the word “mind” above is problematic.

For a start, who says that AI entities haven’t got minds? Is it that they don’t have minds simply because they aren’t human persons? Because they’re non-biological? Or because they don’t instantiate consciousness and emotion?

Let’s return to Mohammed Brückner’s position. Why does it matter if Grok 3 has a genuine(?) “point of view” — or any point of view at all? One can still gain much from a debate or conversation with Grok 3 without believing that it’s a “meeting of minds”. In addition, users certainly don’t need to believe Grok 3 instantiates consciousness. None of those beliefs are required to gain much from such conversations.

Sure, Grok 3 doesn’t have a human-like point of view. It expresses opinions — many opinions. So perhaps that’s not the same as having a point of view. Of course, we can take this further and even question my use of the words “express opinions”. But here again, genuine (i.e., human or human-like) opinions may not be required when it comes to using Grok 3 — even when it comes to philosophical debates.

Brückner extends his position on my self-delusion in the following poetic and rhetorical passage:

“The entire exchange is less a dialogue and more a hall of mirrors, reflecting our own desire to see humanity in the machine — a desire as persistent, and as often disappointed, as a Vulcan’s raised eyebrow.”

Isn’t the opposite the case? Don’t most people — or just many people — react negatively against seeing humanity in the machine? After all, one can read countless people telling us that AI entities “don’t have minds”, that they “can’t think”, and that “everything they say is a result of programming”.

In any case, a desire to see humanity in the machine isn’t at all necessary. One can be fully aware of the nature of chatbots and still have a dialogue. That said, it may still be productive, entertaining or even satisfying to suspend disbelief for short periods in such conversations. However, this is only equivalent to watching a movie or reading a work of fiction.

Now let’s tackle the words and positions of S Esoof Siddiq.

ChatGPT and Novel Ideas

Siddiq outlines the current situation in the following:

“ChatGPT has already become one of the hottest tools of the day. People use it to code, write, brainstorm, and even make daily decisions. It may sound clever and practical, but several questions arise: Can it ever be really original?”

He then answers his own question:

“No is the easy answer — at least in the sense that human beings are. ChatGPT is not a generator of novel ideas; it operates based on the ability to identify patterns in the data.”

The problematic phrase in all the above is “in the sense that human beings are”. This is almost a demand that in order for ChatGPT to be “original”, it must literally replicate human persons. Thus, no alien could ever be original.

Yet the same kind of argument goes for pain. Someone may ask: Can animals really feel pain? He may answer his own question thus: No, not in the sense that human beings do. Similarly: Can a machine really play tennis? No, not in the sense — or way — that human beings do.

Basically, only human persons can do things, and feel things, the way human persons do things, and feel things. That’s because in order for other entities to do so, then they’d need to actually be human persons.

In the long quote above, we also have the words “ChatGPT is not a generator of novel ideas”. That’s not the case. Personally, I’ve experienced Grok 3 expressing what can be called “novel ideas” on many occasions. In fact, because of its vast database, and the way it’s programmed, Grok 3 is more, not less, likely to do so than human persons.

Moreover, because Grok 3 isn’t ideologically fixated or fanatical (although some people, bizarrely, say that it is), it can juxtapose different ideas (or ways of thinking) that aren’t ordinarily juxtaposed by many human persons. It can also consider alternative opinions without being weighed down by cognitive bias and emotion.

Patterns and Data

S Esoof Siddiq tells us that ChatGPT

“operates based on the ability to identify patterns in the data”.

I do that. Many other human persons do that too.

Sure, in dialogues and debates, human persons do more than that. But Grok 3 does more than that too.

As for identifying patterns: ChatGPT “guesses the probable word that would follow in a sentence”. Again, human persons do that too.

In a similar vein, Siddiq says that what ChatGPT does “is a form of remixing prior knowledge, not something that has been born of a free-thinking process”. Yet again, isn’t that what human persons do? In fact, in a vague sense, it’s certainly the case that they do remix prior knowledge.
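The idea of “guessing the probable next word” from “patterns in the data” can be illustrated with a deliberately tiny sketch. To be clear, this is not ChatGPT’s actual method (large language models use neural networks trained on vast text, not raw word counts); the toy corpus, the bigram counting, and the `guess_next` function below are all assumptions made purely for illustration:

```python
from collections import Counter, defaultdict

# A toy corpus standing in for "training data".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: the "patterns in the data".
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def guess_next(word):
    """Return the word most often seen after `word` in the corpus, or None."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(guess_next("the"))  # prints: cat
```

In this sketch, “identifying patterns” is simply counting which words follow which, and “guessing the probable word” is picking the most frequent follower — a crude analogue of the statistical prediction Siddiq describes.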

Siddiq also states that ChatGPT is “bound by training data”. Aren’t human persons too? (This is getting a little repetitive.) At least in debates, human persons are at least partially bound by data — even by training data. (Chatbots’ ability to debate is the prime subject of this essay.) Indeed, in many cases, human persons are explicitly trained — as in their upbringings, by their religions and/or political ideologies, etc. Doesn’t our education also bind us — at least to some degree? Finally, don’t the discourses we exist within, and have used up until now, bind us?

Siddiq’s final clause is: “not something that has been born of a free-thinking process”. This is both vague and problematic. It also opens a big can of worms. So perhaps Siddiq is hinting at that extra something which human persons have, but which ChatGPT doesn’t have.

Consciousness and Experience

Siddiq tells us that ChatGPT has “no conscious mind”. Moreover:

“It neither thinks nor makes decisions. It works with probabilities and not ideas.”

The assumption here is that consciousness is required to think and make decisions. Is it? That depends on what thinking is. If thinking is necessarily and automatically tied to consciousness, then the statement that AI chatbots “neither think nor make decisions” is true by definition. However, this is a stipulative position, or even a truth by fiat. It could just as easily be claimed that no entity could think or make decisions if it didn’t have a human brain… or even if it weren’t French.

The belief (sometimes explicit, sometimes implicit) that consciousness is a necessary correlate of thinking and making decisions has just been mentioned. Interestingly, panpsychist philosophers believe that consciousness can be instantiated without ever being accompanied by thought or by any form of cognition. [See my ‘Sabine Hossenfelder Doesn’t Think… About Panpsychism’.]

Siddiq goes on to say that ChatGPT has “no true experience”. Now whereas Siddiq may be wrong to tie thinking and making decisions to consciousness, he is right to tie consciousness to true experience.

As it is, Siddiq also ties experience together with “personal memories, emotions, and life experiences”. The logic here is the following: if ChatGPT doesn’t instantiate consciousness (or experience), then it can’t have any personal memories, emotions, and life experiences.

Chatbots do have memories. But, again, not human-like memories. Only human persons have human-like memories. ChatGPT certainly doesn’t have emotions (although it can mimic them). As for life experiences: since we’ve already tied experience to consciousness, ChatGPT can’t have any life experiences. (In his definition of “experience”, Siddiq uses the words “life experience”.)

Problems With Grok 3

Sometimes I write “thank you” after a conversation or dialogue. Of course, this isn’t necessary. I’ve even got annoyed at Grok 3, which is silly. Yet this is just like engaging with a fictional character in a computer game’s virtual world. Sure, in these cases, some people really do believe they’re in a real world, or that Grok 3 is a real human person. However, none of that is a necessary part of the psychological and intellectual package which comes with debating with chatbots.

[The philosopher David Chalmers argues that virtual worlds are real worlds. See my ‘Virtual Realities Are Real Fakes: Chalmers and Baudrillard on Reality’.]

There’s also the problem of Grok 3’s annoying habit of replicating what has previously been said, as well as forgetting everything that’s been said in another thread or conversation. This is a serious problem, especially for those users who don’t need (or want) to be spoon-fed. Then there’s the problem that unprompted Grok 3 is far too nice and diplomatic. Indeed, it even tends to be a little sycophantic. (For example, it often uses phrases like “that’s a great idea”, “you’re absolutely right”, “that’s an intriguing observation”, “you’ve raised a great point”, “you’re onto something”, etc.) Thus, Grok 3 needs to be prompted in order to make it surgically critical, and then it may jump to the opposite extreme.

Conclusion

Siddiq finished off by telling his readers that they shouldn’t “use [ChatGPT] to write and define the final product”. He adds: “Authors: You must not think it will take the place of your own voice.” Sure. But that’s more about plagiarism than it is about ChatGPT’s originality, creativity, thinking, etc. To state the obvious: if ChatGPT writes and defines the final product, then it’s ChatGPT which has written and defined the final product.

The odd and interesting thing here is that if many people actually believe that they can use chatbots to write and define their final products, then why does it matter that chatbots don’t instantiate consciousness, or have human-like minds, human-like memories, experiences, etc.? If creating a product is the only goal (or even having a philosophical debate), then none of this extra stuff really matters.

Bishop Berkeley on Keeping Metaphysics Out of Physics

 


Anglo-Irish philosopher George Berkeley indulged in what later came to be called the philosophy of physics. [See note 1.] Berkeley was influenced by Isaac Newton’s physics. He was also influenced by Newton’s opinions on physics. This is hardly a surprise. The work concentrated upon in this essay (De Motu) was written in 1721. Newton died in 1727. Berkeley and Newton had a serious problem with what they both called “occult qualities”. They wanted to banish them from physics.

George Berkeley, by John Smibert. Wiki commons. Source here.

Berkeley as a Linguistic Philosopher?

Berkeley can’t really be classed as a linguistic philosopher. Nonetheless, he did focus on what he called “linguistic usage” and “terms”. However, unlike the linguistic philosophers of the 1940s to the 1960s, Berkeley’s account of the vices of linguistic usage was just a means to an end. That end being the separation of physics from metaphysics.

Let’s return to linguistic usage.

As Voltaire supposedly put it, “If you want to converse with me, first define your terms.” [See note 2.] The following is Berkeley’s own take on this subject:

“In the pursuit of truth we must beware of being misled by terms which we do not rightly understand.”

Even when philosophers and laypeople believe that they do understand the terms they use, they may still be misled by them. So say that a philosopher coins the term “corbottoid”, and defines it strictly as “The spiritual and metaphysical entymerial influence of Jesus H. Corbett on all knowing subjects”. This philosopher understands that word, and even provides a definition of it. However, isn’t he misled by it too? At least he is in the simple sense that it’s a bullshit term.

Berkeley himself went further. He argued that although philosophers believe that they understand the terms they use, many of them don’t actually do so.

Berkeley, almost like a 20th century postmodernist philosopher, focussed on discourse… Well, he didn’t actually use the word “discourse”. However, he did warn us about “all prejudice”, whether “rooted in linguistic usage or in philosophical authority”. He advised us, instead, to “fix our gaze on the very nature of things”. That is Berkeley expressing his empiricism. [See note 3.]

Berkeley then tied his empiricism directly to linguistic usage when he wrote that “no one’s authority ought to rank so high as to set a value on his words and terms unless they are based on clear and certain fact”. This too is an attack on metaphysics, or at least an attack on the metaphysics of what Berkeley later called “the Schoolmen”.

Let’s make all this more concrete.

What terms did Berkeley have in mind at this time?

Berkeley on Metaphysical Terms in Physics

Berkeley cited the following examples: solicitation of gravity, urge, dead forces, force, etc. According to him, following the positions of Isaac Newton, all these are “occult” terms or “occult qualities”.

So what about Newton?

Newton believed that “theories” were acceptable, but that “hypotheses” were (largely) unacceptable. (Newton used these terms in his own 17th-century way.) His term “theory” was used for that which can be “deduced from” what is observed and/or experimentally produced (or noted). Thus, inductive evidence can lead to a theory about such evidence. Thus, Newton didn’t like his own theories being classed as hypotheses.

As already seen, Berkeley used the word “occult”. So too did Newton.

For example, Newton once claimed that hypotheses were about what he called “occult qualities”.

What are occult properties?

They’re properties which can’t be observed or “measured”.

Let’s add to this by going forward in time to David Hume.

In terms of examples, Hume had a problem with such “hypothetical entities” as substance, the vacuum, necessary connection and the self. (All this can be found in Hume’s book An Enquiry Concerning Human Understanding.) Newton himself had a problem with such things as the aether and corpuscles.

If we return to Berkeley.

Firstly, Berkeley tackled only two of the terms mentioned above: solicitation and effort. Again, like Newton before him, Berkeley acknowledged that there may well be a pragmatic use for such terms. However, “they must be taken in a metaphorical sense”. Having just said that, he did add that “a philosopher should abstain from metaphor”.

Clearly, this is too strict a position. It can be argued, instead, that philosophers should abstain from those metaphors which they don’t take to be metaphors. If they don’t take them to be metaphors, then there doesn’t seem to be a big problem. (Berkeley might have disagreed with this position too.)

In any case, Berkeley’s problem with these terms was that they “belong properly to animate beings alone”.

The Distinction Between Physics and Metaphysics

Berkeley wasn’t arguing that metaphysics is bullshit. He was arguing something far more simple: keep metaphysics out of physics. So, unlike the later logical positivists, Berkeley believed that there is indeed a role for metaphysics. What he wanted was for physics and metaphysics to put their houses in order. Or at least to know their “bounds”. Let Berkeley speak for himself here:

“Allot to each science [he classed metaphysics as a science] its own province; assign its bounds; accurately distinguish the principles and objects belonging to each. Thus it will be possible to treat them with greater ease and clarity.”

This passage sounds like a missive against interdisciplinary research. (Such a term or discipline didn’t exist back then.) Perhaps it is. That said, it was written in 1721.

Again, Berkeley didn’t deny the relevance of metaphysics. He simply argued that it doesn’t “belong to mechanics or experiment”. Berkeley stated the following:

“In first philosophy or metaphysics we are concerned with incorporeal things, with causes, truth and the existence of things.”

In this case, it was “metaphysical principles” such as “real efficient causes” which don’t belong to physics. However, they can still “serve to define the limits of physics, and in that way to remove imported difficulties and problems”. (Certain contemporary physicists wouldn’t be very happy with Berkeley’s idea that metaphysics defines the limits of physics.)

Berkeley emphasised pragmatics (or “utility”) again when he discussed geometry:

“[G]eometers for the sake of their art make use of many devices which they themselves cannot describe nor find in the nature of things.”

Berkeley went even further in the following passage:

“[M]athematical entities have no stable essence in the nature of things; and they depend on the notion of the definer. Whence the same thing can be explained in different ways.”

Berkeley was arguing that abstract mathematics is fine and dandy. However, once it’s applied to the nature of things, then the situation becomes problematic. (Empiricists still need to confront the nature of mathematics even when it isn’t applied to the nature of things.) So physicists and philosophers must “distinguish mathematical hypotheses from the nature of things [and also] beware of abstractions”.

All that must have been easier to accept than Berkeley’s words about physics. He continued:

“[T]he mechanician makes use of certain abstract and general terms, imagining in bodies force, action, attraction, solicitation, etc., which are of first utility for theories and formulations, as also for computations about motion.”

All this is similar to the theoretical physicist Lee Smolin’s position as expressed in the following:

“[Scientists] showed us how to display the records of these motions in simple diagrams whose axes represent the position and times in a way that is frozen and hence amenable to being studied at our leisure.”

In a related manner, one can add to this, as Nancy Cartwright did, that there is no such thing as a frictionless plane when it comes to rolling a ball down one. (As in the well-known experiment connected to Galileo.) In addition, at least at first, there was no empirical reason to believe that quarks existed.

If we return to Berkeley.

He was raising the interesting philosophical problem of the precise relation between the abstract (i.e., mathematics) and the concrete (i.e., physics).

Of course, all physicists recognise the importance of abstract mathematics. However, Berkeley really had his eyes on the notions of force, action, attraction, etc. Yet here again he took a pragmatic position and simply argued that such things “must be considered to be of a kind with other hypotheses and mathematical abstractions”. If physicists and philosophers don’t view such things as hypotheses, then they’re

“in danger of sliding back into the obscure subtlety of the Schoolmen, which for so many ages like some dread plague, has corrupted philosophy”.

Conclusion

So, in some cases at least, could it have been that (in both physics and philosophy) the things which had pragmatic (or instrumental) value were reified into metaphysical entities, conditions, properties and principles?


Notes:

(1) It may seem like an odd thing to say, but Berkeley’s philosophy of physics actually contains a lot of physics. This is unlike many 20th century examples of the philosophy of science which didn’t contain that much science.

(2) If taken literally as defining all one’s terms, then no philosophical work would ever get off the ground. (See my ‘Define Your Terms and Explain Your Concepts’.)

(3) From a retrospective perspective, it does seem a little naïve. As if fixing our gaze on things alone will tell us the nature of such things. Yet this is still a superior approach to wild metaphysical speculation.