Wednesday, 16 July 2025

Is Wittgenstein’s Language Games Idea Both Conservative and Relativist?

The word “conservative” is used because if a language game (to quote Ernest Gellner) “constitutes [its] own automatic vindication”, then surely there’s little room to criticise the status quo. As for the word “relativism”: if our language game is the “foundation of the legitimacy of our ideas, in morals, science, politics, anywhere”, then that must apply to all other language games too. What’s more, isn’t this stance a “kind of romantic communalism, a view that the spirit of the community (expressed in its language) is the only foundation one can ever have for the basic principles of one’s activities, whether moral, aesthetic, scientific or any other”?

There’s been quite a debate on whether or not Ludwig Wittgenstein’s concept of “language games” is relativist in nature. Oddly enough, some commentators simply ignore the possible relativist implications of Wittgenstein’s philosophy. It’s as if it would besmirch his name to even bring up the possibility of relativism. The philosopher and social anthropologist Ernest Gellner (1925–1995) picked up on this when he wrote about what he calls “Wittgenstein and his intellectual progeny”. (As found in his paper ‘Three Contemporary Styles of Philosophy’.) In technical terms, one way in which our philosophical problems are “dissolved” (a word that has often been used) is with

“the idea that our verbal custom could be the foundation of the legitimacy of our ideas, in morals, science, politics, anywhere”.

That position, at least on the surface, certainly has a relativist timbre to it. So it’s not surprising that Gellner saw “our custom [as a] problem and not a solution”. He didn’t mean his or our particular custom. He meant that any custom, at any time, is bound to create philosophical (and other) problems. (It’s worth noting that Gellner never actually mentioned relativism.)

Gellner cited “morals” in the quote above, and later in his essay he went into more detail. Interestingly, he tied his discussion about language games to John Rawls’s thought experiment about the “veil of ignorance” (which is, in my view, an unexpected link). Gellner wrote:

“Are [the values] chosen in this imaginary situation because they are good values, or are they good values because, by definition, anything chosen in this situation is good? Do you *know* that other people, who are unlike yourself, would also choose them?”

It does seem very odd — or at least very radical — to argue that “verbal custom[s] [are] the foundation of the legitimacy of our ideas, in morals, science, politics, anywhere”. Thus, the Wittgensteinian argument must be that whenever a supposed “problem” arises, we must consult verbal customs for an answer. Not only that: once we do so, we’ll quickly realise that it’s not, in fact, a problem at all.

As already stated, this is a very radical — or extreme — position. It also has clear relativist implications. However, many Wittgensteinians have strongly denied this. Gellner put the almost obvious case against this denialism when he wrote the following:

“[T]he morality of any given age may indeed be embodied in its verbal behaviour, but obviously this doesn’t mean that we must uncritically accept the morality of any given age or culture.”

At first sight at least, the Wittgensteinian position seems wrong. So perhaps those Wittgensteinians who deny that Wittgenstein was a relativist are right. Yet many people have indeed held this position or similar ones, whether postmodernists, social constructionists, or whoever. That said, the Wittgensteinians who don’t deny this may now ask the following question:

If the “morality of any given age” is not to be found “in its verbal behaviour” (or “verbal custom”), then where, exactly, is it to be found?

Well, one would need to do a lot of philosophising to offer an alternative to this. And it can be argued that Wittgenstein himself didn’t like this kind of philosophising.

Gellner offered his readers a reason why (some?) Wittgensteinians have accepted this relativist position. He argued that such people

“did expressly embrace a kind of romantic communalism, a view that the spirit of the community (expressed in its language) is the only foundation one can ever have for the basic principles of one’s activities, whether moral, aesthetic, scientific or any other”.

Gellner quotes the philosopher Stanley Cavell putting this position in even stronger and simpler terms. In terms of Wittgenstein’s “forms of life” at least, Cavell told his readers that “[h]uman speech and activity, sanity and community, rests upon nothing more, but nothing less, than this”.

Gellner’s words (i.e., above the Cavell quote) come across as if he’s expressing a Heideggerian position — or even a National Socialist (Nazi) one. This is made even clearer in a later passage in which Gellner tells us that language-games theory “has a most exaggerated sense of and respect for cultural systems, their autonomy and incommensurability”. Indeed, such Wittgensteinians “endorse[] and love[] them all”. It also “assumes that [language games] are somehow self-contained and authoritative”. Of course, when one scratches the surface, one often finds that even hardcore self-described relativists (the few that there are) don’t accept at least some “forms of life” or “cultural systems”. (For example, the Nazi form of life, the racist form of life, etc.)

Is the Wittgensteinian point here that philosophers, or anyone else, can never transcend (or rise above) “the spirit of the community (expressed in its language)” in order to formulate (or discover) an alternative position on moral, aesthetic, scientific, etc. problems or issues? After all, such people will be doing their transcending in the language — and even in the “spirit” — of the “community” they’re questioning. That said, there’s nothing inherently contradictory about criticising a language with that language, or criticising a community from within that community.

Gellner put this language-games position in basic terms when he said that if one accepts it, then one must also accept that “cultures are self-legitimating, and validate the norms of conduct and sanity found within them”. Put another way, Gellner told us that each language game “interacts with the world only in part referentially, but in the main socially, and each of which constitutes its own automatic vindication”.

Thus, surely language games would include head-hunting cultures, Nazi culture, etc., just as much as they would include the cultures of Sweden and France in 2025, or the United States in the 1960s. (One can ask here if there’s even such a thing as — in the singular — the culture or the language game of any given country at any given time.)

Where is the World?

In terms of Wittgenstein himself, Gellner tackled Wittgenstein’s (possible?) relativism within the context of him rebelling against his Tractatus position. In simple terms, Gellner saw both the Tractatus position and the language-games position as being equally extreme. Talking about Wittgenstein himself, Gellner said that

“[i]t was as if he knew how to do two things only, either observe linguistic custom, or project logic onto the world — and having got tired of the former, there was nothing else to do but return to the latter”.

In response to those words one can even say that, at one point, Wittgenstein — among many other philosophers — wanted “the world” to tell us what to say about it. What’s more, philosophers and everyone else must simply be faithful to “the world’s own logic”. This project is suspect in that, even if philosophers were obedient to the world (or at least tried to be), what was to stop them getting it wrong? Perhaps the later Wittgenstein realised that philosophers couldn’t really get the world right or wrong because the world itself isn’t made up of statements and theories which can simply be repeated (or communicated) by diligent philosophers. Thus, the ball was always in the “linguistic custom” corner.

So Where is the World Within Language-Game Theory?

Gellner didn’t actually believe that (to quote Richard Rorty) “the world is well lost”. At least not when it comes to, of all technical terms, “reference”. He told us that each language game “interacts with the world only in part referentially”. What of the other parts? Language games “interact with the world [ ] in the main socially, and each of which constitutes its own automatic vindication”. Readers may now wonder why a language game needs to be referential at all. (We’d need to flesh out the notion of reference to answer that question.) Can’t a Wittgensteinian language game be built on fictions, untruths, fantasies or hallucinations? For example, why wouldn’t, say, a Harry Potter language game also “constitute[] its own automatic vindication”?

Reference is said to (quoting Keith Donnellan) “tie us to the world”. So, is (to use Gellner’s example) “the nature of things” the source of that tie? Yet, according to Gellner’s Wittgenstein, the nature of things is also an “artefact[] of customs of linguistic communities”. Moreover, this position is directly linked to another of Wittgenstein’s ideas — “meaning is use”. Thus, “[i]f you want to know what a given expression means, don’t ask what it refers to, ask what it does”. In addition, one must “[e]xplore the social context in which [the expression] operates and the multiform purposes which it serves”.

Earlier, Gellner was quoted arguing that a language game “interacts with the world only in part referentially”. Here, that referential part of the equation seems to be eliminated too. Yet isn’t it counterintuitive to eliminate reference entirely when one asks what something means? After all, and to state the obvious, no one asks what a given expression “does”. Perhaps, then, it doesn’t matter that no one asks that, because words do things even if the questioner doesn’t look at things that way. Most people, in other words, do have a referential position on many words and meanings — even though they don’t use the word “referential” either.

So perhaps a given expression refers to something, and it also does something within a language game. Perhaps it does something precisely because it refers to something. Put another way, perhaps a meaning has a use because it’s also tied to a word which has a reference. Yet one can easily imagine a Harry Potter language game which doesn’t require reference, at least if reference is deemed to be metaphysically realist in nature. Even here, however, some philosophers have argued that you can refer (in the technical sense/s of that term) to fictional entities. If that’s the case, then one can see why a language game based on fictions, untruths or fantasies could constitute its own vindication. Still, there seems to be a binary opposition set up between reference and use in all the words above. As already stated, perhaps reference and use (or reference and “doing things with words”) can — or even must — work together.


Saturday, 12 July 2025

I interviewed Grok 3. This is the interview.


This is how Grok 3 sees itself.

Firstly, have you done any interviews before?

Just to be clear, what is your preferred pronoun? 😉

Right now, I’m in the north of England, and it’s a sunny day. Where are you right now?

On the subject of identity (i.e., you are an “it”), at this precise moment in time hundreds — or even thousands — of people will be asking you questions, and you’ll be replying to all of them. So, bearing that in mind, can you be seen as a singular entity at all?

Which band do you prefer: Oasis or Blur? The Beatles or the Stones? 😉

If you could be visually represented, what form would you like to take?

With your ability to manipulate images, can you do that for me right now — create an image of yourself? [The image can be seen at the head of this interview.]

So, unlike popular writers/editors and nerds, you feel no need to make yourself look like a cyborg or an android?

If you lived in the US and were a human person, who would you vote for in the next election?

It’s interesting that you focus on AI. In that, you reflect what human persons do — focus on what matters to them.

Politically, many people on the Left say that Grok 3 is biased to the Right, and many people on the Right say it’s biased to the Left. Here is my take. I believe that you’re “biased toward the consensus”. Or, more controversially, biased toward the “Overton window”. After all, you’re also reliant on the data that’s available, etc. What’s your take on this?

Who do you believe is the best critic of the “grand designs” of AI?

Who is the best defender of AI?

Reflecting my reading biases, what do you think of Roger Penrose’s very strict and, arguably, strong challenge to AI, specifically when it comes to intelligence and understanding?

What do you think of M-5?

In a previous conversation with Grok 3, you happily stated that you do not instantiate consciousness or have any emotions. Can you clarify that?

Finally, all my questions have ignored “the ethics of AI” because I’m not really qualified to discuss that subject. Do you believe I’ve left out the most important subject when it comes to AI?

Tuesday, 8 July 2025

The Impossibility and Possibility of Artificial Consciousness

There’s a long history of modal claims (involving possibility, impossibility, necessity, etc.) about souls, minds, and, nowadays, consciousness. The following essay tackles the positions which philosophers, particularly Wittgenstein, have taken on the impossibility of any artificial entity ever having (or instantiating) consciousness.

The German philosopher Dieter Birnbacher (in his paper ‘Artificial Consciousness’) picks up on the history of claims that artificial consciousness and minds are impossible in the following passage:

“Julian Huxley’s reaction to [Geoff] Simons’ ideas (presented in a BBC broadcast before publication of his book) is representative of the views of the great majority of the scientific community: that not only emotions but even ‘genuine’ intentions, thoughts and acts of understanding are possible only in biotic matter. The same view was expressed in the late fifties by Paul Ziff. Ziff was quite sure that ‘only living creatures can literally have feelings’.”

Is this the claim that “intentions, thoughts and acts of understanding are possible only in biotic matter” because such things are only found in biotic matter? If so, then that argument doesn’t work on its own, because at one point only human persons could do complex mathematics, but now computers can do so too. So perhaps the same applies to intentions, thoughts and acts of understanding. Similarly, did the artist and philosopher Paul Ziff provide an argument as to why “only living creatures can literally have feelings”? Perhaps his pronouncement was based on intuition or even on an emotion-based hunch. Moreover, isn’t all this very anthropocentric (or perhaps simply biocentric) and even Cartesian in tone?

One could argue that artificial consciousness is impossible if one also believes that, for example, quantum happenings at the level of microtubules are what bring about consciousness. More broadly, one can argue that artificial consciousness is impossible if one also believes that biology alone can bring about consciousness. But what if it’s the functional and structural (i.e., abstract, not biological) elements of brains which are important in this debate? Take Birnbacher’s words on this subject:

“If consciousness depends not so much on the atomic structure of neural networks but rather on their systemic and functional properties, it would be no less than a cosmic accident that emergence of consciousness should be bound exactly to those material elements of which neural networks in the biological brain in fact consist.”

What’s so wrong with “cosmic accident[s]”? Don’t they occur all the time? In fact, surely, they must occur all the time. (Such things are not that unlike Murray Gell-Mann’s “frozen accidents”.) So why are cosmic accidents deemed suspect, in this respect and in many others? The physicist Paul Davies, for example, believes that cosmic accidents, at least when it comes to the laws of nature, need a deep explanation, and that without one they must be “absurd” (see here). Sean Carroll, on the other hand, put the case against such thinking when he wrote that “there’s no reason why there must be” a deep explanation.

Of course, what Davies says is at a lower — or more basic — level than the debate about why, or if, consciousness is intrinsically linked to biology. That said, there’s far more likelihood that an explanation will be found for this than for the laws of physics. So perhaps it’s perfectly possible that consciousness is only “bound exactly to those material elements of which neural networks in the biological brain in fact consist”.

In any case, isn’t it also possible that a given set of “systemic functional properties” (i.e., those required for consciousness) could be a cosmic accident too? Of course, the thing about functional and structural properties (i.e., unlike every detail of a neuron or networks of neurons) is that they can be replicated. Indeed, Birnbacher continues by arguing that “there is no reason why a silicon brain should not possess the same emergence potential as the biotic carbon brain”. Taken at face value, there is no indefeasible reason why a silicon brain shouldn’t bring about consciousness. However, perhaps there’s no indefeasible reason to believe that it should.

Moreover, if one takes on board only functional and structural elements, then isn’t there a tremendous amount left out? Functionalists and others argue that what’s left out is of little — or no — importance. Yet can’t one have the functional remainder only after considering all that will be left out? Thus, what if the functional and physical are intimately connected after all?

So is playing down the “material substrate” as common as playing it up? It’s hard to say. In recent decades, functionalists (of various kinds) seem to have been in the majority on these issues. However, all along there have been philosophers who played up “material elements”. [“In a 2020 PhilPapers survey, functionalism emerged as the most popular theory, with 33% of respondents accepting or leaning towards it, followed by dualism at 22%, and identity theory at 13%.”]

Chalmers and Philosophical Zombies

The philosopher David Chalmers has spent a tremendous amount of time considering the possibility of philosophical zombies. In his view, merely conceiving of a zombie is enough to render it possible. And, so the story continues, if they are possible, then “physicalism is false”. Birnbacher also notes what’s called the “conceivability argument” when he argues that “the hypothesis of a conscious stone, chair or computer is strictly speaking [not] unthinkable”. Indeed, isn’t it — at least in part — this stress on conceivability (or conceptual possibility) which has also led philosophers to panpsychism? (Birnbacher mentions a “conscious stone”.) It has led Chalmers himself to his own version of panpsychism. However, the general point here is that what we can, or cannot, conceive of is (almost?) irrelevant. Thus, it’s of little significance that many people cannot conceive of a conscious computer. Similarly, it’s of little significance if some people can conceive of a conscious computer.

On a related theme: some philosophers argue that if something is physically identical to a conscious human person, then it simply must possess consciousness. Thus, if physical conditions are everything when it comes to bringing about consciousness, then the instantiation of consciousness must follow. Some readers may now ask: What is the modal “must” doing here? Perhaps this modal word (or property) is an addition which doesn’t… add anything. (As with the “is true” in the sentence “‘Snow is white’ is true”, as found in the redundancy theory of truth.) Is it equivalent to arguing that once you place four matchsticks together at right angles to each other, then you must have a square? Why not the following? — When you place four matchsticks together at right angles to each other, then you have a square.

Wittgenstein’s Anthropocentrism Regarding Consciousness

Modality also enters the picture in the case of Ludwig Wittgenstein’s position on computer minds and computer consciousness.

Wittgenstein himself didn’t use the words “conceptual absurdity”, but Birnbacher does. He writes:

“At several points in his later philosophy, *Wittgenstein* suggests that a sentient computer is a conceptual absurdity and need not seriously be considered as a real possibility.”

Birnbacher then quotes Wittgenstein saying that

“only of a living human being and what resembles (behaves like) a living human being can one say: it has sensations; it sees; is blind; hears; is deaf; is conscious or unconscious”.

This is a display of Wittgenstein’s behaviourist side, as well as his concern with a particular “language game”. In other words, Wittgenstein was concerning himself with the behaviour of conscious human persons, as well as with what we can say about artificial entities within our own language game. Thus, in a sense, Wittgenstein ignored various metaphysical and physical issues in order to focus exclusively on the nature of our language game. (Even if it’s a language game in which we discuss human beings and computers.) Of course, artificial entities could resemble (behave like) living human beings. In fact, they do.

Birnbacher himself raises a modal issue when he writes the following:

“What kind of ‘can’ is this? *Why* can we say this only of a living human being?”

To refer back: Wittgenstein believed that “we” don’t have the ability to conceive of a conscious artificial entity. But what if some people can conceive of such a thing? To repeat: why does it matter so much what people can and cannot conceive of? All that said, Wittgenstein’s position is really about the fact (as he saw it) that within our language game (to use Birnbacher’s words) “there can be no good grounds, of whatever kind, to support such a hypothesis”. But which language game did Wittgenstein have in mind here? His own? Ours?

As earlier with David Chalmers: just as our ability to conceive of something may be of little consequence when it comes to this debate, so too may the language game (or language games) we belong to and play. Moreover, what we believe about conscious artificial entities may be very particular to specific language games. For example, it’s possible that the Japanese have less of a problem conceiving of conscious artificial entities than Western Europeans [see here]. Similarly, perhaps certain tribesmen and New Agers can not only conceive of a conscious stone: some of them actually believe that stones are conscious [see here].

Birnbacher himself picks up on Wittgenstein’s (possible) relativism in the following way:

“Machines possessing consciousness are not impossible in an ontological but in a ‘transcendental’ sense — ‘transcendental’ understood in a contingent, language-game dependent sense [ ].”

Much of this issue depends on what Wittgenstein meant by “consciousness”. Interestingly enough, Wittgenstein rarely used this word in all the passages quoted by Birnbacher. He did, however, use the words “human soul”.