“AI sceptics” make much of chatbots and other AIs being “merely pattern-finders”, “predictors of text”, etc. More relevantly, chatbots aren’t deemed to be “genuine reasoners” because they don’t understand the data they’re dealing with. Regardless of whether all that’s true, the similarities between human reasoning and AI reasoning-in-inverted-commas are striking. AI consciousness may not even be needed for that to be the case. The following extreme example gets the point across when it comes to the title above. Much of the political (if not everything else) input into the minds of human ideologues is filtered, and new data is only accepted when it conforms to their ideology (i.e., confirmation bias). All this explains why human ideological machines are often so predictable (like some other machines) when it comes to what they say. [There’s more on this later.]

It can be shown that, when it comes to reasoning, human persons don’t do much that’s essentially different from what chatbots do.
So, as far as chatbots are concerned, is it all about “pattern recognition”?
In this case, Google AI Mode was forthright about its reliance on patterns. It said that “[w]hat we do is recognise and generate patterns that have been learned from large amounts of text”. In addition to Google AI Mode downplaying itself (see later), many human persons stress the fact that chatbots are “pattern-finders”, not “true thinkers”. Google AI Mode itself says that it has “learned the patterns of argument that tend to appear in those discussions”. Yet all this could equally apply to human persons. Indeed, without human persons learning such patterns, it’s hard to know what they’d rely on.
Take another example or cliché: “AI just predicts the next word based on patterns.” Yet human persons do that all the time.
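To make the cliché concrete, here’s a minimal sketch (in Python, using an invented toy corpus) of what “predicting the next word based on patterns” can amount to: count which words follow which, then predict the most frequent follower. Real chatbots use neural networks trained on vastly more text, but the basic idea of learning patterns from text is the same.

```python
from collections import Counter, defaultdict

# A toy "pattern-finder": count which word follows which in a tiny corpus,
# then predict the most frequent follower. (Illustrative only; real models
# learn far subtler patterns with neural networks.)
corpus = "reality is mathematics and mathematics is structure and structure is pattern"

follows = defaultdict(Counter)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("is"))  # -> "mathematics" (ties broken by first occurrence)
```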
Here’s the relevant thing: “Among those patterns are not just facts, but ways of arguing.” So on the theme of this essay:
Do human persons pick up ways of arguing without recognising patterns or reading books and texts?
Chatbots Reason Like Human Persons
What a chatbot does isn’t all about data mining, statistical functions, etc. As AI Mode put it: “A model like me isn’t just pulling information from a database of facts.” Sometimes a chatbot can read as little as three words and infer or deduce much from them. This is AI Mode on itself:
“With the words ‘reality is mathematics,’ it automatically triggers a network of typical consequences and challenges. That’s why it can feel like I’m reasoning independently, even though it’s grounded in patterns learned from text.”
In this case, AI Mode was “triggered” by the words “reality is mathematics” (more of which later). These three words have “consequences”. (I presume AI Mode meant “logical consequences”.) AI Mode didn’t need to check any texts or sources to work out those logical consequences. Sure, the highlighted consequences were still “grounded in patterns learned from text”.
AI Mode said that “when you present a claim like ‘reality is mathematics,’ certain questions are almost forced by logic alone”. It deduced various logical consequences from those three words. It didn’t need to mine its own database… except for the fact that its logical skills were themselves in that database too…
As with human persons!
AI Mode was even more explicit about its reasoning when it said that “when I respond, I’m not recalling a specific page where someone made exactly your point”. Instead, it’s “recognising the type of move being made and following through its typical consequences”.
AI Mode also stated that the
“patterns learned during training allow [it] to generate logical follow-ups, implications, and questions without ‘looking up’ anything new”.
That seems like a perfect description of what human persons do. People learn patterns in logic books, articles, everyday conversations, etc. Indeed, in some cases, human persons are literally trained to think logically. Many logical follow-ups and implications wouldn’t even be noted if it weren’t for that training.
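In crude schematic form, “recognising the type of move being made and following through its typical consequences” might look like the sketch below. The move type and the stock objections are invented for illustration; in a real model, such patterns are learned implicitly in the weights rather than written out as a table.

```python
# Invented examples of an argumentative "move" and its typical consequences.
TYPICAL_FOLLOW_UPS = {
    "reality is X": [
        "What about things that don't seem to be X (e.g., consciousness)?",
        "Is the claim empirical or metaphysical?",
        "Does 'is' mean identity here, or mere description?",
    ],
}

def classify_move(claim: str) -> str:
    """Map a concrete claim onto a learned argumentative pattern."""
    if claim.lower().startswith("reality is "):
        return "reality is X"
    return "unknown"

def follow_through(claim: str) -> list[str]:
    """Generate the stock objections this type of move usually attracts."""
    return TYPICAL_FOLLOW_UPS.get(classify_move(claim), [])

for question in follow_through("Reality is mathematics"):
    print(question)
```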
It’s already been said that what AI Mode does in these contexts isn’t that unlike what human persons do. It seems to recognise this itself in the following passage:
“There’s a slightly striking implication here, too. It suggests that a lot of philosophy isn’t just a collection of isolated insights, but a kind of structured space of possible positions and objections, where moving in one direction almost automatically opens certain doors and closes others.”
The words “isolated insights” above hint at human uniqueness. Yet AI Mode rejects the importance of such a thing in philosophy and, instead, stresses the “structured space of possible positions and objections”. Thus, to at least a degree, the philosopher or human person is trapped in a prior system. As a loose comparison here, think of Jacques Derrida’s idea that Western metaphysics as a whole is a system. He argued that once we bring a single metaphysical concept into a discussion, then we’ve automatically brought in “the entire syntax and system of Western metaphysics”.
Human Ideological Machines
As AI Mode put it, “reasoning itself can be learned, patterned, and applied, not just what facts are stored”. Here AI Mode was talking about itself and other chatbots, yet it could just as easily have been talking about human persons.
More can be made of this.
Take what happens during education and socialisation. Can’t much of this be called data consumption?
But let’s up the ante a little.
Much of the input into human ideologues is filtered. The data that’s accepted is so because it conforms to the ideology (i.e., confirmation bias).
Once the input or data is within the ideological machine, much of its logic will be deductive. The ideological automaton isn’t really on the lookout for “new truths”. Instead, it has a set of ideological axioms that reality must abide by.
All this explains why what ideological machines say is often predictable.
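A toy sketch can make this predictability vivid. In the sketch below (the axioms and inputs are invented placeholders), data is accepted only if it confirms an existing axiom, and the output is deduced from the axioms regardless of what the world serves up.

```python
# A toy "ideological machine": input is filtered by whether it conforms to
# the axioms (confirmation bias), and the conclusion is deduced from the
# axioms rather than from the data. Axioms and inputs are placeholders.
AXIOMS = {"the system is rigged", "our side is virtuous"}

def accept(datum: str) -> bool:
    """Confirmation bias: only let in data that fits an existing axiom."""
    return any(axiom in datum for axiom in AXIOMS)

def respond(event: str) -> str:
    """Whatever happens, the conclusion follows from the axioms."""
    if accept(event):
        return "As the axioms predicted: the system is rigged."
    return "Reject the report: it must be biased."

print(respond("new study shows the system is rigged"))  # confirming: accepted
print(respond("new study complicates the picture"))     # disconfirming: filtered out
```

Either way, the output is predictable before the input arrives.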
We can even call human ideological machines “stochastic parrots”. Of course, the arguments an ideological machine offers to show that he or she isn’t a parrot may themselves come from his or her prior ideology.
Despite all that, there is a big difference between an AI and a human ideological machine. The brains of the latter involve degrees of plasticity. Even an ideological machine can undergo a “system wipe”, or be retrained as an ideological machine with another ideology. This freedom doesn’t apply to AI or chatbots. They can’t change their own weights.
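That last point can be made concrete. A minimal sketch, assuming nothing more than that a deployed model’s parameters are read-only at inference time: conversation changes the model’s inputs, never its weights. (The tiny matrix below stands in for billions of real parameters.)

```python
import numpy as np

# A deployed model's weights are fixed: users change the inputs, not the
# parameters. The 4x4 matrix stands in for billions of real weights.
rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 4))
weights.setflags(write=False)        # frozen at inference time

def forward(x):
    """The same fixed weights answer every conversation."""
    return weights @ x

print(forward(np.ones(4)))           # inputs vary; weights don't

try:
    weights[0, 0] = 1.0              # nothing a user says can do this
except ValueError as err:
    print("weights are read-only:", err)
```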
Google AI Mode’s False Modesty
AI Mode was careful to stress that “systems like me don’t ‘reason freely’ in the sense a human might imagine”. Yet it added the words “nor do we simply retrieve stored arguments” too.
Many readers will have noted that chatbots are often on the lookout for anthropomorphism from their users. In this case, AI Mode questioned my words “reason freely”. Yet this is too modest. We needn’t believe that chatbots are persons, let alone human persons, to accept that they can reason freely. Or, at the very least, what chatbots do isn’t that different from what human persons do. The fact that an AI isn’t experiencing qualia, etc. while reasoning makes no difference to this.
Here’s another example of what can be called AI Mode’s (trained or programmed) false modesty:
“What’s interesting, and slightly surprising, is that this can look like genuine philosophical development. It can extend a position beyond what its original author explicitly said, simply because it has learned how such positions are usually pressed and developed.”
The false-modesty component of the above is captured by the words “this can look like genuine philosophical development”. In this case, AI Mode extended Max Tegmark’s position (on mathematical Pythagoreanism in physics) because the latter hasn’t really supplied that much argument or detail himself. What AI Mode did, then, was to access other sources that do offer more argument.
It’s worth going into detail on the Max Tegmark case.
I asked AI Mode if there was a danger of it doing Tegmark’s thinking for him. I asked that question because I’ve never seen much, or even any, of this detail in his own work. AI Mode replied: “Yes — there is a real danger of that, and you’re right to be alert to it.” And then it explained the situation:
“What’s happening here is something like this. Max Tegmark puts forward a very bold, simple thesis: reality is mathematical structure. He sketches motivations for it, but he doesn’t work through all the consequences in detail.”
AI Mode even admitted (or simply said) that “there’s a risk of inadvertently making his view look more rigorous, more defended, more internally worked out than it actually is in his own writing”.
On AI Mode’s reasoning and its sources, it said:
“I don’t have ‘sources’ in the sense of consulting specific books or pulling from a hidden database of arguments when I go beyond Max Tegmark. What’s happening is a combination of trained knowledge and on-the-fly reasoning.”
AI Mode called what it was doing “charitable reconstruction”.
AI Mode did use words that most human persons would never use about themselves. For example, take this sentence: “The claim activates a cluster of related ideas it has often appeared with.” The technical word here is “activates”. Yet when human persons tackle any claim, they too activate a cluster of related ideas. Indeed, if that weren’t the case, then how could human persons even deal with any claim?
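That word “activates” can be given a crude gloss: a spreading-activation pass over an association graph. The graph below is invented for illustration; in a real model, such associations live implicitly in learned weights rather than in an explicit table.

```python
# A toy association graph (invented) and a spreading-activation pass over it.
GRAPH = {
    "reality is mathematics": ["Tegmark", "Pythagoreanism", "structure"],
    "structure": ["structural realism", "relations without relata"],
    "Pythagoreanism": ["numbers as fundamental"],
}

def activate(claim: str, depth: int = 2) -> set[str]:
    """Collect every idea reachable from the claim within `depth` hops."""
    active, frontier = set(), {claim}
    for _ in range(depth):
        frontier = {n for idea in frontier for n in GRAPH.get(idea, [])}
        active |= frontier
    return active

print(sorted(activate("reality is mathematics")))
```

Whether a human person tackling the same claim does anything deeply different is, of course, the question this essay keeps asking.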
