This essay is a response to, and a commentary upon, Dieter Birnbacher’s paper ‘Artificial Consciousness’, which can also be found in the book Conscious Experience (edited by Thomas Metzinger). In that paper Birnbacher makes a useful distinction between what he calls “states of consciousness” and “mental acts”. The word “useful” is chosen deliberately because I’m not entirely convinced by his position. Or, more accurately, I’m not entirely convinced by Birnbacher’s stipulative position on the words “mental acts” and “consciousness”.
Dieter Birnbacher offers his readers an analysis which shows that we can interpret a non-biological entity in entirely behavioural terms and still class it as displaying consciousness. However, that’s simply because he defines the word “consciousness” in two distinct ways: “states of consciousness” and “acts of consciousness”. (Birnbacher usually writes “mental acts”.)

In the following passage, Birnbacher makes this distinction:
“Whereas the statement N is in pain, feels comfortable or is depressed implies that N experiences certain states of consciousness, it is doubtful whether the statement N thinks, means, intends, calculates, perceives, concludes or understands implies in an analogous fashion that N performs a corresponding act of consciousness. In many cases, these concepts can be ascribed even if there are no corresponding acts of consciousness.”
The controversial part of the above is the following:
“[I]t is doubtful whether the statement N thinks, means, intends, calculates, perceives, concludes or understands implies in an analogous fashion that N performs a corresponding act of consciousness.”
Of course, Birnbacher’s examples of pain and feeling are often deemed to be subjective states, rather than mental acts.
Birnbacher’s distinction can now be applied to the subject of this essay: artificial consciousness. He writes:
“Unlike the concepts of sensory and affective states, the concepts of mental acts can be interpreted in a way in which they are applicable to entities which (apart from their external performance) are no proper objects for the ascription of states of consciousness.”
So computers (or artificial entities) can act as if they’re carrying out mental acts simply because they are, indeed, carrying out… acts. And acts are observable. Thus, these physical acts fall under the category of “external performance”. States of consciousness, on the other hand, are… states, and are not as easy to observe — at least not if they’re “internal”.
“Sensory and affective states” cannot be observed as such. However, they can still be behaviourally expressed. Note, though, that it is the subjective states which are expressed: the states themselves aren’t identical to their behavioural expressions. In other words, behavioural expressions are acts, not states.
Despite all that, Birnbacher acknowledges that
“[c]oncepts of mental acts are ambiguous in a twofold way: they have, first, a ‘consciousness-sense’ in which they are applicable only to beings capable of consciousness”.
So mental acts too can be deemed to come along with consciousness. Birnbacher’s argument is that this needn’t be the case. In other words, many people believe that if an entity (to use two of Birnbacher’s own examples) calculates and concludes, then it must also be conscious. Birnbacher denies this. In his view, computers do actually calculate and conclude; indeed, they “literally” think, calculate and understand. So, counterintuitively, thinking, calculating and understanding needn’t come along with consciousness.
Birnbacher also expresses all the above in the following way:
“[Concepts of mental acts] have, secondly, an ‘achievement-sense’ in which they can be applied also to non-conscious entities such as computers and scanners, provided they show the relevant outward performance.”
Birnbacher then draws out a radical conclusion to his argument:
“In this latter sense, one can, in a completely unmetaphorical way, say of a computer that it *thinks, calculates or understands*, or of a scanner that it *perceives* certain things, without thereby suggesting that these entities are in any way conscious. Sentences such as ‘The chess computer means *this* pawn, not the pawn in front of your king.’ or ‘This scanner does not adequately see the diacritical signs,’ can be *literally* true.”
It’s easy to see why, if thinking, calculating, understanding and perceiving are uncoupled from consciousness, these words needn’t be taken to be metaphorical. Yet, arguably, this is simply a case of stipulation on Birnbacher’s part.
So what work do the words “literally true” do when thinking, understanding, calculating and perceiving are uncoupled from consciousness? This isn’t to argue that there can’t or shouldn’t be such an uncoupling. It’s just that these claims can only be “literally true” if one already accepts Birnbacher’s prior stipulations.
Let’s get back to mental acts.
In Birnbacher’s view, mental acts needn’t come along with consciousness. What’s more, they needn’t come along with subjectivity, and all the other characteristics many people usually associate with states of subjectivity. Birnbacher certainly makes this case in the following passage:
“What is irreducibly subjective, however, is the affective quality, the specific ‘colour’ of the emotion, including its hedonic tone and its felt intensity and depth. In virtue of this qualitative component emotions are more than the sum of cognition, excitement and appetition or aversion.”
Wittgenstein and Behaviourism
Dieter Birnbacher traces this behaviourist tradition (i.e., the stress on what he calls “mental acts”) to Ludwig Wittgenstein. In Birnbacher’s own words:
“Wittgenstein stresses the primacy of behavioural, especially expressive, criteria for inner states over against neurological indicators. Some of the effects of inner states — their expressions in behaviour — is given the status of criteria, whereas their neural causes are only assigned the status of symptoms.”
In the literature, it’s sometimes hard to fathom whether Wittgenstein factored out “inner states” completely or simply stressed that behavioural expressions (qua “criteria”) are our only access to other people’s inner states. (In effect, inner states become beetles in boxes in Wittgenstein’s scheme.)
Wittgenstein’s behaviourism is also expressed by Birnbacher in the following way:
“[F]or Wittgenstein, certain behavioural criteria are necessary conditions, not of the occurrence of conscious states and acts in others, but of the *ascription* of such states and acts to others.”
This passage chimes with my earlier point: it wasn’t that Wittgenstein denied that other people’s “inner states” exist, but that all we have access to is their verbal and physical behaviour. Indeed, Birnbacher also makes a Wittgensteinian point when he says that “the concept of pain is not only characterised by what pains are by themselves but also ‘by its particular function in our life’”.
The arguments against (pure) behaviourism (or inner-state eliminativism) are well-rehearsed. Birnbacher offers his own argument here:
“Bernard Rollin has drawn attention to the fact that the habit of cows who have been operated on to eat immediately after surgery must not be interpreted as proof that they feel no postoperative pain. There are rather good evolutionary reasons for the cow not to show typical pain behaviour though being in pain: The cow depends much more on regular feeding than humans (she would be considerably weakened by not eating), and she would be recognisable to predators by not grazing with the herd. If we want to know whether a cow hurts or not, an EEG is in any case the better criterion […].”
In this case at least, behaviourism is a bad position to uphold when it comes to animal rights. Indeed, it’s also a bad position to uphold when it comes to human rights, in that any person who does not (or cannot) express his pain in behaviour is deemed not to be in pain. Thus, a person suffering from locked-in syndrome is deemed not to be suffering from pain, as is a “tough guy” who never actually expresses his pain.
None of these conclusions go against Wittgenstein’s position if his only point was that behavioural expressions are our only means of accessing other people’s pain. (Although Birnbacher cites the case of an EEG.) However, they do count against old-style behaviourism.
Now let’s take the reverse situation: entities that display pain-behaviour but which don’t experience pain.
Birnbacher is explicit about his anti-behaviourist take on what is, and what isn’t, a conscious entity. For example, he says that
“[i]t would be unreasonable to ascribe pain, say, to a machine, only because it shows pain behaviour in reaction to hurting stimuli”.
[The Turing Test can be rewritten in Birnbacher’s way: It would be unreasonable to ascribe understanding to a machine only because it shows understanding behaviour when it answers questions.]
Similarly, Birnbacher says that
“[a] machine uttering ‘I’ propositions is not thereby entitled to being treated as a self-conscious being”.
I’ve been using the term “behaviourist” to characterise the positions above. Birnbacher himself categorises them as verificationist. More accurately, he tells us that
“[t]he verification procedure proposed by Michael Scriven for consciousness in machine — the robot intelligently answering all sorts of question about its conscious life — is fundamentally mistaken”.
To bring Wittgenstein’s position up to date, we can say that a “verificationist” like Daniel Dennett rejected the “what pains are by themselves” part of this equation. He did so because he deemed that locution to be pointless (or without any content).
Philosophical Zombies
A (philosophical) zombie (or what Birnbacher calls an “imitation man”) displays all the behaviour of a conscious human being. Birnbacher admits that “a world of imitation men, of zombies without consciousness, is a logical possibility”. However, Birnbacher then qualifies his position by bringing in his distinction between mental acts and conscious states again. He writes:
“It is true, an *imitation man* would not be able to *feel* anything, but he could well be able to *mean* something, to have *thoughts* or *expectations* — exactly in the sense in which we can apply concepts of mental acts to purely material structures given that they exhibit the relevant complex behaviour. An *imitation man* could even be said to be able to think *itself*, without crediting it with self-consciousness in a sense which presupposes consciousness.”
Again, some — or even many — readers will be surprised that Birnbacher feels that he can uncouple thinking, expecting and meaning something from consciousness. However, and as already stated, that’s because he has stipulated that mental acts (i.e., qua certain behaviours) needn’t come along with consciousness. Yet, to repeat, using the term “mental” (as in “mental acts”) will still seem odd to many readers.