Tuesday, 8 July 2025

The Impossibility and Possibility of Artificial Consciousness

There’s a long history of modal claims (involving possibility, impossibility, necessity, etc.) about souls, minds, and, nowadays, consciousness. The following essay tackles the positions which philosophers, particularly Wittgenstein, have taken on the impossibility of any artificial entity ever having (or instantiating) consciousness.

The German philosopher Dieter Birnbacher (in his paper ‘Artificial Consciousness’) picks up on the history of claims that artificial consciousness and minds are impossible in the following passage:

“Julian Huxley’s reaction to [Geoff] Simons’ ideas (presented in a BBC broadcast before publication of his book) is representative of the views of the great majority of the scientific community: that not only emotions but even ‘genuine’ intentions, thoughts and acts of understanding are possible only in biotic matter. The same view was expressed in the late fifties by Paul Ziff. Ziff was quite sure that ‘only living creatures can literally have feelings’.”

Is the claim that “intentions, thoughts and acts of understanding are possible only in biotic matter” made simply because such things have so far been found only in biotic matter? If so, then that argument doesn’t work on its own, because at one point only human persons could do complex mathematics, yet now computers can do so too. So perhaps the same applies to intentions, thoughts and acts of understanding. Similarly, did the artist and philosopher Paul Ziff provide an argument as to why “only living creatures can literally have feelings”? Perhaps his pronouncement was based on intuition, or even on an emotion-based hunch. Moreover, isn’t all this very anthropocentric (or perhaps simply biocentric), and even Cartesian in tone?

One could argue that artificial consciousness is impossible if one also believes that, for example, quantum happenings at the level of microtubules are what bring about consciousness. More broadly, one can argue that artificial consciousness is impossible if one also believes that biology alone can bring about consciousness. But what if it’s the functional and structural (i.e., abstract, not biological) elements of brains which are important in this debate? Take Birnbacher’s words on this subject:

“If consciousness depends not so much on the atomic structure of neural networks but rather on their systemic and functional properties, it would be no less than a cosmic accident that emergence of consciousness should be bound exactly to those material elements of which neural networks in the biological brain in fact consist.”

What’s so wrong with “cosmic accident[s]”? Don’t they occur all the time? In fact, surely, they must occur all the time. (Such things are not that unlike Murray Gell-Mann’s “frozen accidents”.) So why are cosmic accidents deemed suspect, in this respect and in many others? The physicist Paul Davies, for example, believes that cosmic accidents, at least when it comes to the laws of nature, need a deep explanation, and that without one they must be “absurd” (see here). Sean Carroll, on the other hand, put the case against such thinking when he wrote that “there’s no reason why there must be” a deep explanation.

Of course, what Davies says is at a lower (or more basic) level than the debate about why, or if, consciousness is intrinsically linked to biology. That said, there’s far more likelihood that an explanation will be found for the link between consciousness and biology than for the laws of physics themselves. So perhaps it’s perfectly possible that consciousness is only “bound exactly to those material elements of which neural networks in the biological brain in fact consist”.

In any case, isn’t it also possible that a given set of “systemic and functional properties” (i.e., those required for consciousness) could be a cosmic accident too? Of course, the thing about functional and structural properties (unlike every physical detail of a neuron or of networks of neurons) is that they can be replicated. Indeed, Birnbacher continues by arguing that “there is no reason why a silicon brain should not possess the same emergence potential as the biotic carbon brain”. Taken at face value, there is no indefeasible reason why a silicon brain shouldn’t bring about consciousness. However, perhaps there’s also no indefeasible reason to believe that it should.

Moreover, if one takes on board only functional and structural elements, then isn’t there a tremendous amount left out? Functionalists and others argue that what’s left out is of little or no importance. Yet doesn’t one arrive at the functional remainder only after considering all that will be left out? Thus, what if the functional and the physical are intimately connected after all?

So is playing down the “material substrate” as common as playing it up? It’s hard to say. In recent decades, functionalists (of various kinds) seem to have been in the majority on these issues. However, all along there have been philosophers who played up “material elements”. [“In a 2020 PhilPapers [survey], functionalism emerged as the most popular theory, with 33% of respondents accepting or leaning towards it, followed by dualism at 22%, and identity theory at 13%.”]

Chalmers and Philosophical Zombies

The philosopher David Chalmers has spent a tremendous amount of time considering the possibility of philosophical zombies. In his view, merely conceiving of a zombie is enough to render it possible. And, so the story continues, if zombies are possible, then “physicalism is false”. Birnbacher also notes what’s called the “conceivability argument” when he argues that “the hypothesis of a conscious stone, chair or computer is strictly speaking [not] unthinkable”. Indeed, isn’t it — at least in part — this stress on conceivability (or conceptual possibility) which has also led philosophers to panpsychism? It has led Chalmers himself to his own version of panpsychism. However, the general point here is that what we can, or cannot, conceive of is (almost?) irrelevant. Thus, it’s of little significance that many people cannot conceive of a conscious computer. Similarly, it’s of little significance if some people can conceive of a conscious computer.

On a related theme, some philosophers argue that if something is physically identical to a conscious human person, then it simply must possess consciousness. Thus, if physical conditions are everything when it comes to bringing about consciousness, then a physical duplicate of a conscious person must instantiate consciousness too. Some readers may now ask: What is the modal “must” doing here? Perhaps this modal word (or property) is an addition which doesn’t… add anything. (As with the “is true” in the sentence “‘Snow is white’ is true”, as found in the redundancy theory of truth.) Is it equivalent to arguing that once you place four matchsticks together at right angles to each other, then you must have a square? Why not simply the following: when you place four matchsticks together at right angles to each other, then you have a square.

Wittgenstein’s Anthropocentrism Regarding Consciousness

Modality also enters the picture in the case of Ludwig Wittgenstein’s position on computer minds and computer consciousness.

Wittgenstein himself didn’t use the words “conceptual absurdity”, but Birnbacher does. He writes:

“At several points in his later philosophy, Wittgenstein suggests that a sentient computer is a conceptual absurdity and need not seriously be considered as a real possibility.”

Birnbacher then quotes Wittgenstein saying that

“only of a living human being and what resembles (behaves like) a living human being can one say: it has sensations; it sees; is blind; hears; is deaf; is conscious or unconscious”.

This is a display of Wittgenstein’s behaviourist side, as well as his concern with a particular “language game”. In other words, Wittgenstein was concerning himself with the behaviour of conscious human persons, as well as with what we can say about artificial entities within our own language game. Thus, in a sense, Wittgenstein ignored various metaphysical and physical issues in order to focus exclusively on the nature of our language game. (Even if it’s a language game in which we discuss human beings and computers.) Of course, artificial entities could “resemble[] (behave[] like) a living human being”. In fact, they do.

Birnbacher himself raises a modal issue when he writes the following:

“What kind of ‘can’ is this? Why can we say this only of a living human being?”

To refer back: Wittgenstein believed that “we” don’t have the ability to conceive of a conscious artificial entity. But what if some people can conceive of such a thing? To repeat: why does it matter so much what people can and cannot conceive of? All that said, Wittgenstein’s position is really about the fact (as he saw it) that within our language game (to use Birnbacher’s words) “there can be no good grounds, of whatever kind, to support such a hypothesis”. But which language game did Wittgenstein have in mind here? His own? Ours?

As earlier with David Chalmers, just as our ability to conceive of something may be of little consequence when it comes to this debate, so too may the language game (or language games) we belong to and play. Moreover, what we believe about conscious artificial entities may be very particular to specific language games. For example, it’s possible that the Japanese have less of a problem conceiving of conscious artificial entities than Western Europeans [see here]. Similarly, perhaps certain tribesmen and New Agers can not only conceive of a conscious stone: some of them actually believe that stones are conscious [see here].

Birnbacher himself picks up on Wittgenstein’s (possible) relativism in the following way:

“Machines possessing consciousness are not impossible in an ontological but in a ‘transcendental’ sense — ‘transcendental’ understood in a contingent, language-game dependent sense […].”

Much of this issue depends on what Wittgenstein meant by “consciousness”. Interestingly enough, Wittgenstein rarely used this word in the passages quoted by Birnbacher. He did, however, use the words “human soul”.

