Monday, 23 April 2018

Daniel Dennett's Chinese Room




The following is a critical account of 'The Chinese Room' chapter in Daniel Dennett's book, Intuition Pumps and Other Tools For Thinking.

*******************************

In this chapter Daniel Dennett doesn't really offer many of his own arguments against John Searle's position. What he does offer are a lot of blatant ad hominems and simple endorsements of other people's positions (i.e., those of AI aficionados) on the Chinese Room argument.

Indeed Dennett is explicit about his support (even if it's somewhat qualified) for the “Systems Reply”:

“At the highest level, the comprehending powers of the system are not unimaginable; we even get an insight into just how the system comes to understand what it does. The systems reply no longer looks embarrassing; it looks obviously correct.”

Dennett concludes:

“... Searle's thought experiment doesn't succeed in what it claims to accomplish: demonstrating the flat-out impossibility of Strong AI.”

We can happily accept that Searle's thought experiment doesn't entirely (or even at all) succeed in what it claims to accomplish. However, Dennett's claims (or those he endorses) don't demonstrate the possibility of Strong AI either. It can also be said that Searle himself never claimed the “flat-out impossibility of Strong AI” in the first place... Though that's another issue entirely.

The Systems Reply

It seems fairly clear that Dennett accepts the “Systems Reply (Berkeley)” argument against Searle's position. This is odd, really, because the Systems Reply is Searle's own statement of what he took the Opposition to believe was wrong with his argument (at least as it was first stated in the early days). In other words, these aren't the actual words of any of the Opposition.

This is how Dennett himself quotes Searle in full:

“... 'While it is true that the individual person who is locked in the room does not understand the story, the fact is that he is merely part of the whole system, and the system does understand the story.'...”

So what is that “whole system”? This:

“... 'The person has a large ledger in front of him in which are written the rules, he has a lot of scratch paper and pencils for doing calculations, he has a 'data bank' of sets of Chinese symbols.'...”

I suppose it would be pretty obvious that if Searle put himself in a “system” (even if he had a large ledger of written rules, paper and pencils for doing calculations and data banks of Chinese symbols), it would still be Searle himself who'd be making use of all these elements of that system. Thus, in that sense, the original problem seems to be replicated. That is,

If Searle didn't originally understand Chinese

then

Searle + a large ledger, data banks, etc. wouldn't understand Chinese either.

That's because it is Searle himself, after all, who's making use of - and attempting to understand - these separate parts of the system. And even when the parts are taken together, it's still Searle who's taking them together and Searle who's doing the understanding. Thus the system doesn't seem to add anything other than a set of tools and data banks which Searle himself makes use of.

If all that's correct, then it's understandable that Searle-outside-the-room (i.e., Searle qua philosopher) should have a problem with this conclusion. So here's Dennett quoting Searle again:

“... 'Now, understanding is not being ascribed to the mere individual [Searle-in-the-room]; rather it is being ascribed to this whole system of which he is a part.'...”

To repeat: it's Searle-in-the-room who's making use of the whole system. Thus it's also Searle-in-the-room who's using the system's parts and doing any understanding, whether of those separate parts or of the system taken as a whole.

Dennett's Examples

As stated, the Systems Reply seems simply to replicate the original problem - except that extra parts have been added in order to create a system. Nonetheless, Dennett does indeed appear to believe that the addition of these extra parts matters to this issue.

Firstly, instead of talking about Searle-in-the-room and the other things in that room, he now gives the example of a “register machine”.

Dennett says that

“the register machine in conjunction with the software does perfect arithmetic”.

So now we have this:

the register machine + software = a system capable of “perfect arithmetic”

Of course that's just like the following:

Searle-in-the-room + data banks + etc. = a system capable of understanding Chinese

And then Dennett offers another equivalent example:

the central processing unit (CPU) + chess programme = a system capable of “beating you at chess”
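
To make the register-machine example a little more concrete, here is a minimal sketch of my own (not Dennett's, and using a hypothetical three-instruction set of INC, DEB and END). The point it illustrates is simply the one Dennett trades on: the machine plus its program does perfect arithmetic, even though no part of the system - and arguably not even the whole - understands arithmetic.

# A register machine simulated in Python (hypothetical instruction set:
# INC increments a register, DEB decrements a register or branches if it's
# already zero, END halts). No part of this "understands" arithmetic, yet
# machine-plus-program adds correctly every time.

def run(program, registers):
    step = 0
    while True:
        op = program[step]
        if op[0] == "END":
            return registers
        if op[0] == "INC":                 # ("INC", register, next_step)
            _, reg, nxt = op
            registers[reg] += 1
            step = nxt
        elif op[0] == "DEB":               # ("DEB", register, next_step, branch_if_zero)
            _, reg, nxt, branch = op
            if registers[reg] > 0:
                registers[reg] -= 1
                step = nxt
            else:
                step = branch

# Add register 1 into register 0, one unit at a time.
add_program = [
    ("DEB", 1, 1, 2),   # step 0: if r1 > 0, take one from it and go to step 1; else halt
    ("INC", 0, 0),      # step 1: add one to r0, go back to step 0
    ("END",),           # step 2: halt
]

print(run(add_program, {0: 3, 1: 4}))   # {0: 7, 1: 0} - "perfect arithmetic"

Whether doing the sum correctly amounts to understanding the sum is, of course, precisely what's in dispute between Dennett and Searle.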

Since Dennett is a behaviourist and a verificationist, his position seems simply to bypass Searle's central argument. So what is Dennett's behaviourist and verificationist position? This:

If 

Searle-in-the-room delivers correct answers in Chinese, the register machine does perfect arithmetic, and the computer beats someone at chess, 

then 

Searle-in-the-room, the register machine and the computer understand (respectively) Chinese, arithmetic and chess. 

That is, Searle-in-the-room, the register machine and the computer behave in the way a True Understander would behave. Thus, to Dennett, they must be True Understanders.

Indeed Dennett is explicit about his verificationist and behaviourist position when he mentions that ultimate behaviourist and verificationist test – the Turing test. (Of course Dennett doesn't, as far as I know, call himself a “behaviourist” or a “verificationist”.) Actually, as with the Systems Reply, Dennett quotes Searle again (this time only in part). Dennett writes:

“If the judge can't reliably tell the difference, the computer (programme) has passed the Turing test and would be declared not just to be intelligent, but to 'have a mind in exactly the sense that human beings have minds,' as Searle put it in 1988.”

Now since Dennett doesn't argue against this account of the Turing test - or against its conclusion - surely he must accept it.

Dennett would very happily accept that a computer which had passed the Turing test is “intelligent”. (Indeed I think that too; depending on definitions.) However, I don't believe that Dennett also needs to accept Searle's addition. That is, I don't believe that Dennett needs to believe that this particular computer

“ha[s] a mind in exactly the sense that human beings have minds”.

Firstly, this particular computer might have passed an extremely rudimentary test. Thus it couldn't possibly be said to “have a mind in exactly the sense that human beings have minds”. Perhaps it has a mind. However, how would we know that? And how could we also say that this computer has a mind that's "exactly" the same as all human minds, or exactly the same as any particular human mind?
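
To see how low the bar of an extremely rudimentary test might be, here is a deliberately crude and entirely hypothetical sketch (not a program anyone in the actual debate discusses) of a canned keyword-matcher. Something of this sort might scrape through an undemanding judge's questioning; yet that would hardly show that it “ha[s] a mind in exactly the sense that human beings have minds”.

# A hypothetical, deliberately crude "conversationalist": canned keyword
# matching and a stock fallback, nothing more.

CANNED_REPLIES = {
    "hello": "Hello! How are you today?",
    "weather": "I hear it's been lovely lately.",
    "chess": "I prefer draughts, to be honest.",
}

def reply(message):
    for keyword, response in CANNED_REPLIES.items():
        if keyword in message.lower():
            return response
    return "That's interesting - tell me more."   # stock fallback

print(reply("Hello there"))          # Hello! How are you today?
print(reply("Do you like chess?"))   # I prefer draughts, to be honest.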

Secondly, surely Dennett would accept that there's more to human minds than merely answering questions. This may mean that the best that can be said is that this computer has a type of mind. Perhaps if this (or any) computer were more extensively tested (or if it accomplished things other than answering questions), then this would take the computer towards having a mind which is very much like a human mind.

So this particular computer, after this particular test, can be said to have a kind of mind; just not a mind that's the same as a human mind (i.e., in all respects).

However (as stated), perhaps Dennett wouldn't see the point of my qualification. That is, after this particular computer had passed this particular test, perhaps Dennett would indeed have said that it (to use Searle's words again) “ha[s] a mind in exactly the sense that human beings have minds”.

As before, whatever Dennett's exact position, he puts the Strong AI position on the Turing test without criticising or adding to it. Thus Dennett continues:

“Passing the Turing test would be, in the eyes of many in the field, the vindication of Strong AI.”

So why is that? According to Dennett again:

“Because, they [Strong AI people] thought (along with Turing), you can't have such a conversation without understanding it, so the success of the computer conversationalist would be proof of its understanding.”

Again, this is to judge the computer according to purely behaviourist logic. That is, if the computer answers the questions correctly, then that's literally all there is to it: it must also understand the questions. As for verificationism: all we have is the computer's behaviour to go on. There's nothing else to verify or to postulate.

Zombies/Zimbos

Dennett's behaviourist and verificationist position on this particular computer (as well as on its Turing test) can also be seen as analogous to his position on those philosophers' zombies he also has a problem with.

Actually, Dennett calls such a zombie a “zimbo”. A zimbo is an entity which is physically, functionally and behaviourally exactly like us. However, a zimbo is still meant to be lacking a certain... something.

More relevantly, the zimbo can pass the Turing test too. (Or at least the specific Turing test which the aforesaid computer passed.) That is, the Turing test and you and I

“can't distinguish between a zimbo and a 'really conscious' person, since whatever a conscious person can do, a zimbo can do exactly as well”.

So just as this computer doesn't need that extra something, neither does Dennett's zimbo. In both cases, all we have to go on is the behaviour of the computer and of the zimbo. And their behaviour tells us that they're both intelligent and, indeed, that both have a mind.

In fact Dennett seems to go one step further than that.

Dennett moves swiftly from the computer and the zimbo being intelligent (or having intelligence), to their both being “conscious”. In Dennett's own words:

“[T]he claim that an entity that passes the Turing test is not just intelligent but conscious.”

As stated before, Dennett seems to be putting the Strong AI position; and he also seems to be endorsing it. This appears to be the case because Dennett neither argues against the position nor really adds to it.

*********************************

*) See my 'Against Daniel Dennett's Heterophenomenology'.


"This piece is a critical account of the 'Heterophenomenology' chapter of Daniel Dennett's book, Intuition Pumps and Other Tools For Thinking."


