Friday 18 May 2018

Searle & Patricia Churchland on Dualism in AI and Cognitive Science




John Searle accuses many of those who accuse him of being a “dualist” of being... dualists. Bearing in mind the philosophical ideas discussed in the following, his stance isn't a surprise.

Searle's basic position on this is:

i) If Strong AI proponents, computationalists, functionalists and the like ignore or play down the physical biology of brains and focus instead exclusively on syntax, computations and functions (the form or role rather than the physical embodiment),
ii) then that will surely lead to some kind of dualism in which non-physical abstractions basically play the role of Descartes' non-physical and “non-extended” mind.

Or to use Searle's own words:

"If mental operations consist of computational operations on formal symbols, it follows that they have no interesting connection with the brain, and the only connection would be that the brain just happens to be one of the indefinitely many types of machines capable of instantiating the program. This form of dualism is not the traditional Cartesian variety that claims that there are two sorts of substances, but it is Cartesian in the sense that it insists that what is specifically mental about the brain has no intrinsic connection with the actual properties of the brain. This underlying dualism is masked from us by the fact that AI literature contains frequent fulminations against 'dualism'.”

Searle is noting the radical disjunction created between the actual physical reality of biological brains and how these philosophers and scientists explain and account for mind, consciousness and understanding.

So Searle doesn't believe that only biological brains can give rise to minds, consciousness and understanding. Searle's position is that, at present, only biological brains do give rise to minds, consciousness and understanding. Searle is emphasising an empirical fact; though he's not denying the logical and metaphysical possibility that other things can bring forth mind, consciousness and understanding.

Searle is arguing that the biological brain is played down or even ignored by those in AI, cognitive science generally and many in the philosophy of mind. And when put that bluntly, it seems like an almost perfect description of dualism. Or, at the very least, it seems like a stance (or position) which would help advance a (non-Cartesian) dualist philosophy of mind.

Yet because those people just referred to (who're involved in artificial intelligence, cognitive science generally and the philosophy of mind) aren't committed to what used to be called a “Cartesian ego” (they don't even mention it), then the charge of “dualism” seems – superficially! - to be unwarranted. However, someone can be a dualist without being a Cartesian dualist. Or, more accurately, someone can be a dualist without that someone positing some kind of non-material substance formerly known as the Cartesian ego. However, just as the Cartesian ego is non-material, non-extended (or non-spatial) and perhaps also abstract; so too are the computations and the (as Searle puts it) “computational operations on formal symbols” which are much loved by those involved in AI, cognitive science and whatnot.

Churchland on Functionalism as Dualism

Unlike Searle, Patricia Churchland doesn't actually use the word “dualist” for her opponents; though she does say the following:

“Many philosophers who are materialists to the extent that they doubt the existence of soul-stuff nonetheless believe that psychology ought to be essentially autonomous from neuroscience, and that neuroscience will not contribute significantly to our understanding of perception, language use, thinking, problem solving, and (more generally) cognition.”

Put in Churchland's way, it seems like an extreme position. Basically, how could “materialists” (when it comes to the philosophy of mind and psychology) possibly ignore the brain? 

It's one thing to say that 

“psychology is distinct from neuroscience”. 

It's another thing to say that psychology is 

“autonomous from neuroscience” 

and that 

“neuroscience will not contribute significantly to our understanding” of cognition. 

Sure, the division of labour idea is a good thing. However, to see the “autonomous” in “autonomous science” as being about complete and total independence is surely a bad idea. In fact it's almost like a physicist stressing the independence of physics from mathematics.

Churchland thinks that biology matters. In this she has the support of many others. 

For example, the Nobel laureate Gerald Edelman says that the mind

“can only be understood from a biological standpoint, not through physics or computer science or other approaches that ignore the structure of the brain”.

In addition, you perhaps wouldn't ordinarily see Patricia Churchland and John Searle as being bedfellows; though on this issue they are. So it's worth quoting a long passage from Searle which neatly sums up some of the problems with non-biological theories of mind. He writes:

“I believe we are now at a point where we can address this problem as a biological problem [of consciousness] like any other. For decades research has been impeded by two mistaken views: first, that consciousness is just a special sort of computer program, a special software in the hardware of the brain; and second that consciousness was just a matter of information processing. The right sort of information processing - or on some views any sort of information processing - would be sufficient to guarantee consciousness... it is important to remind ourselves how profoundly anti-biological these views are. On these views brains do not really matter. We just happen to be implemented in brains, but any hardware that could carry the program or process the information would do just as well. I believe, on the contrary, that understanding the nature of consciousness crucially requires understanding how brain processes cause and realize consciousness...”

In a sense, then, if one says that biology matters, one is also saying that functions aren't everything (though not that functions are nothing). Indeed Churchland takes this position to its logical conclusion when she more or less argues that in order to build an artificial brain one would not only need to replicate its functions: one would also need to replicate everything physical about it.

Here again she has the backup of Searle. He writes:

“Perhaps when we understand how brains do that, we can build conscious artifacts using some non-biological materials that duplicate, and not merely simulate, the causal powers that brains have. But first we need to understand how brains do it.”

Of course it can now be said that we can have an artificial mind without having an artificial brain. Nonetheless, isn't it precisely this position which many dispute (perhaps Churchland does too)?

In any case, Churchland herself says that

“it may be that if we had a complete cognitive neurobiology we would find that to build a computer with the same capacities as the human brain, we had to use as structural elements things that behaved very like neurons”.

Churchland continues by saying that

“the artificial units would have to have both action potentials and graded potentials, and a full repertoire of synaptic modifiability, dendritic growth, and so forth”.

It gets even less promising for functionalism when Churchland says that

“for all we know now, to mimic nervous plasticity efficiently, we might have to mimic very closely even certain subcellular structures”.

Put that way, Churchland makes it sound as if an artificial mind (if not artificial intelligence) is still a pipe-dream.

Readers may also have noted that Churchland was only talking about the biology of neurons, not the biology of the brain as a whole. However, wouldn't the replication of the brain (as a whole) make this whole artificial-mind endeavour even more complex and difficult?

In any case, Churchland sums up this immense problem by saying that

“we simply do not know at what level of organisation one can assume that the physical implementation can vary but the capacities will remain the same”.

That's an argument which says that it's wrong to accept the implementation-function “binary opposition” (to use a phrase from Jacques Derrida) in the first place. Though that's not to say - and Churchland doesn't say - that it's wrong to concentrate on functions or cognition generally. It's just wrong to completely ignore the “physical implementation”. Or, as Churchland says at the beginning of one paper, it's wrong to “ignore neuroscience” and focus entirely on function.

Churchland puts the icing on the cake herself by stressing function. Or, more correctly, she stresses the functional levels which are often ignored by functionalists.

Take the cell or neuron. Churchland writes that

“even at the level of cellular research, one can view the cell as being the functional unit with a certain input-output profile, as having a specifiable dynamics, and as having a structural implementation in certain proteins and other subcellular structures”.

Basically, what's being said here is that in many ways what happens at the macro level of the mind-brain (in terms of inputs and outputs) also has an analogue at the cellular level. In other words, functionalists are concentrating on the higher levels at the expense of the lower levels.

Another way of putting this is to say what Churchland herself argues: that neuroscientists aren't ignoring functions at all. They are, instead, tackling biological functions, rather than abstract cognitive functions.


Thursday 17 May 2018

Searle on Artificial Intelligence (AI) and the Brain's Causal Powers



John Searle's position on artificial consciousness and  artificial understanding is primarily based on what he calls the “causal powers” of the biological (human) brain.

This is Searle himself on this subject:

“Some people suppose that I am claiming that it is in principle impossible for silicon chips to duplicate the causal powers of the brain. That is not my argument... It is a factual question, not to be settled on purely philosophical or a priori grounds, whether or not the causal powers of neurons can be duplicated in some other material, such as silicon chips, vacuum tubes, transistors, beer cans, or some quite unknown chemical substances. The point of my argument is that you cannot duplicate the causal powers of the brain solely in virtue of instantiating a computer program, because the computer program has to be defined purely formally.”

[This passage can be found in Searle's paper 'Minds and Brains Without Programs' – not to be confused with his well-known 'Minds, Brains and Programs'.]

It does seem quite incredible that a fair few of those involved in artificial intelligence (AI) and cognitive science generally completely downplay - or even ignore - brains and biology. This is especially the case when one bears in mind that biological brains are the only things - at present! - which display consciousness and experience. Nonetheless, when it comes to intelligence, it can be said that computers already do indeed display intelligence or even that they are intelligent. (Though this too is rejected by many people.)

So Searle simply believes that there has to be some kind of strong link between biology and mind, consciousness and understanding in the simple sense that - at this moment in time - only biological systems have minds, consciousness and understanding. (Of course the question “How do we know this?” can always be asked here.) Thus there's also a strong link between biological brains and complex (to use Searle's words) “causal powers”. However, this doesn't automatically mean that mind, consciousness and understanding must necessarily be tied to biology. It just means that, at this moment in time, they are so tied. And that tight and strong link between biology and consciousness, mind and understanding is itself linked to the requisite complex causal powers which are only instantiated - so far! - by complex biological brains.

Thus Searle's talk of causal powers refers to the argument that a certain level of complexity is needed to bring about the causal powers required for mind, consciousness and understanding (as well as for “intentionality” and semantics, in Searle's writings).

To repeat. Searle never argues that biological brains are the only things capable - in principle - of bringing about minds, consciousness and understanding. He says that biological brains are the only things known which are complex enough to do so. That means that it really is all about the biological, physical and causal complexity of brains.

Causal Powers?

The problem here is to determine what exactly Searle means by the words “causal powers”. We also need to know about the precise relation between such causal powers and consciousness, understanding and, indeed, intelligence.

One hint seems to be that the brain's causal powers are over and above what is computable and/or programmable. Alternatively, perhaps it's just an argument that, at the present moment in time (in terms of technology), these complex causal powers are not programmable or computable.

Indeed at the end of the quote above, Searle moves on from talking about causal powers to a hint at his Chinese Room argument/s. So to repeat that passage:

“The point of my argument is that you cannot duplicate the causal powers of the brain solely in virtue of instantiating a computer program, because the computer program has to be defined purely formally.”

The argument here is that something physical is required in addition to an abstract “computer program” and the computations/algorithms/rules/etc contained within it. And that something physical also happens to be biological – i.e., the brain. In other words, computer programmes are defined “purely formally”. Brains, on the other hand, are both biological and physical. Thus even if programmes or computations capture the “form” or syntax (as it were) of a brain's computations and even of its physical structure/s, they still don't capture its biological physicality. That is, they don't replicate the brain's causal powers.

However, if programmes were instantiated in non-biological physical constructions, then we could - at least in principle - replicate (or capture) both the forms/syntax/computational nature of the biological brain and also its causal powers. It's just that no physical computer (which runs abstract programmes) at present does replicate (or capture) the complex causal powers of the biological brain.

Tuesday 8 May 2018

Comments on Dennett's *Intuition Pumps*: Zombies & Thought Experiments (2)



This is a short response to the 'Zombies and Zimboes' chapter of Daniel Dennett's book, Intuition Pumps and Other Tools for Thinking.


*************************

For Dennett, the main point of what he calls a “zimbo” is that there's no way of knowing if it instantiates - or doesn't instantiate - experiences or consciousness. And if there's no way of knowing that (at least according to Dennett's behaviourist and verificationist logic), then why grant less to the zimbo than one would grant to a human being?

There's a problem here.

If Dennett's zimbo were literally identical - in every respect - to a human being, then how could we say that there's either something less - or something more - to it? And that's because we could - or would - never know that we were actually confronting a zimbo. However, this point, of course, is part of the point of the thought experiment... a thought experiment which is itself a response to other philosophers' thought experiments.

Yet Dennett doesn't like thought experiments in philosophy. Or at least he doesn't like many of them.

Thought Experiments

Many thought experiments can irritate people – even philosophers. Then again, thought experiments certainly serve some purpose.

Dennett has a big problem with David Chalmers and his (philosophical) zombies. Many other philosophers and laypersons do too.

It all stems from the philosophical move from conceivability to possibility (as with Descartes' Cogito). This is central to Chalmers' work on zombies, panpsychism and all sorts of other stuff. As stated, it tends to become a pain in the arse.

Nonetheless, thought experiments have certainly been very important in physics - or at least in theoretical physics. Then again, many historical thought experiments in physics later came to be backed up by experiments, predictions, tests, and/or observations. This isn't the case when it comes to (most/all?) philosophical thought experiments, which, almost by definition, can never be confirmed or disconfirmed. That is, they seem to be designed to have no experimental or observational component. In any case, that's certainly true of zombie thought experiments.

In that respect, then, the well-known and ironic question


"How many angels can dance on the head of a pin?"


seems to be more acceptable and productive than some of these thought experiments. 





Tuesday 1 May 2018

Wittgenstein's Doubts About Doubt




In this piece Ludwig Wittgenstein is taken to be an “anti-philosopher”. More specifically, the following tackles Wittgenstein's position on philosophical doubt – or at least on what's often called “global scepticism” (or “universal scepticism”). (Other philosophers who've been classed as anti-philosophers include Nietzsche, Heidegger and Derrida.)

Like many of Wittgenstein's other positions, this is the Austrian philosopher's critique of a central tradition (dating back over two millennia) within Western philosophy.

Along with Wittgenstein's position on doubt, his position on language games will also be discussed. Indeed the two positions are tied together in various ways. The most important way doubt and language games can be tied together (at least within this context) is by seeing doubt itself as a (philosophical) language game. Oddly enough, Wittgenstein didn't seem to hold this position.

Throughout the following I'll also be bouncing off the words of Professor Sophie-Grace Chappell: a Professor of Philosophy at The Open University.

****************************

Ludwig Wittgenstein’s case against scepticism (or at least against global scepticism) is simple. We can't doubt anything without exempting certain other things from doubt. Thus the basic position is that even philosophical doubt requires non-doubt. That is, in order to get the game of doubt under way, certain things must be placed beyond doubt.

As Wittgenstein himself puts it (in On Certainty):

“The questions that we raise and our doubts depend on the fact that some propositions are exempt from doubt, are as it were like hinges on which those [doubts] turn.

That is to say, it belongs to the logic of our scientific investigations that certain things are in deed not doubted…

My life consists in my being content to accept many things.”

To put all that at its simplest:

Say that you're doubting a friend's geological theory. You wouldn't thereby also doubt the very meanings of your friend's words. That would be semantic doubt; not geological doubt.

Similarly, you wouldn't doubt that your friend is a person rather than a zombie or robot. That would be a doubt about “other minds”; not a doubt about geology.

Even if your other doubts aren't philosophical, they still needn't be doubts about geology.

For example, you may doubt your friend's honesty or why he's saying what he's saying. (You may doubt that you put your underpants on.) Thus these other doubts may be "properly ignored" (as the philosopher David Lewis put it).

What's at the heart of these "exemptions" is the "context" in which the doubt takes place. As Chappell puts it (quoting Wittgenstein):

“Without that context, the doubt itself makes no sense: ‘The game of doubting itself presupposes certainty’; ‘A doubt without an end is not even a doubt.’”

If one doubts everything, then there's no sense in doubting anything. Doubt occurs in the context of non-doubt.

Even according to Descartes, the one thing you can't doubt is that you are doubting. And in terms of personal psychology, you need a context for your doubt/s.

The Things We Cannot Doubt

The important point to make about Wittgenstein’s position isn't that, as Professor Chappell puts it,

“there is some special class of privileged propositions that we simply can’t doubt”.

Wittgenstein's position, in other words, isn’t Cartesian or "foundationalist". The propositions we mustn't doubt could be of (just about) any kind. The general point is that there must be some propositions (of whatever kind) which we mustn't doubt in order to get the ball rolling. We can't start ex nihilo - as Descartes ostensibly did. We must bounce off certain propositions which we don't (rather than can't) doubt.

What we choose not to doubt (indeed what we also choose to doubt) will depend on context. That context will determine the nature of our doubts. (Or, alternatively, our lack of doubt vis-à-vis particular propositions or possibilities.)

Chappell (again) gives some very basic non-philosophical examples of this. She writes:

“… in each context, there is a very great deal that is not in doubt: the existence of the chessboard, the reliability of the atlas, the possibility of generally getting shopping sums right. This background makes it possible to have doubts, and possible (in principle) to resolve them. Where there is no such background, says Wittgenstein, the doubt itself makes no sense.”

We can draw up a list of what we can't doubt and what we can doubt:

1a) The existence of the chessboard.
1b) The sincerity of our chess opponent’s naivety.

2a) The (general) reliability of the atlas.
2b) Whether or not the atlas is up-to-date.

3a) The possibility of (generally) getting our shopping sums right.
3b) That one’s hangover (today) is affecting one’s arithmetical judgement.

To put all the above another way:

i) You couldn't doubt the sincerity of your chess opponent’s naivety if before that you actually doubted the existence of the chessboard.
ii) You wouldn't doubt whether or not your atlas was up-to-date if you'd already doubted its general reliability.
iii) You wouldn't doubt your own arithmetical skills during a hangover if you'd already doubted your skills in all contexts.

Not only that: you can only resolve your lesser doubts if you simply disregard the more global (or extreme) doubts which might have preceded them. That is, you can go ahead and beat your chess opponent only if you disregard the possibility of the chessboard simply not existing in the first place.

Wittgenstein also seems to say that total (or global) doubt simply “makes no sense”. That's because there needs to be a reason to doubt. If you doubt everything, then you can have no reason to doubt – unless the very act of doubting everything is itself the reason to doubt!

Descartes’ Fallacy?

Chappell then offers us a logical argument against Descartes’ global doubt. She argues that it rests on a fallacy. She writes:

“Descartes – you could say – begins his philosophy by arguing that since any of our beliefs might be false, therefore all of our beliefs might be false. But this is a fallacious argument. (Compare: ‘Any of these strangers might be the Scarlet Pimpernel; therefore every one of these strangers might be the Scarlet Pimpernel.’) What is true of any belief is not necessarily true of every belief. So – the claim would be – Descartes’ system rests on a fallacy (the ‘any/all fallacy’, as it is sometimes called.)”

Prima facie, Chappell's argument does seem to follow. After all, she's not saying that all our beliefs are false if one is false. She's saying that all of them may be false if one is (found to be) false.

Then again, one belief (or “any” belief) being false doesn't entail every belief being false. Though it may leave open that possibility.
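The scope distinction at work here can be put schematically. (This formalisation is my own gloss, not Chappell's; read F(x) as “belief x is false” and the diamond as “possibly”.)

\[
\forall x\,\Diamond F(x) \;\not\Rightarrow\; \Diamond\,\forall x\,F(x)
\]

That is, from “each belief, taken one at a time, might be false” it doesn't follow that “it is possible that every belief is false at once”.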

The analogy with the Scarlet Pimpernel doesn't work because, by definition, only one person can be the Scarlet Pimpernel. This may be a simple grammatical mistake in that Chappell uses the phrase “every one of these strangers might be the Scarlet Pimpernel”; whereas she should have said that “any one of these strangers might be the Scarlet Pimpernel”.

Perhaps there's nothing strange about saying that every one of our beliefs (or all of them) may be false - or even that they are all false. However, our beliefs aren't identical when it comes to their content (i.e., what they're about); whereas there can only be one person who is identical with the Scarlet Pimpernel.

So saying that

“any of these strangers might be the Scarlet Pimpernel; therefore every one of these strangers might be the Scarlet Pimpernel”

isn't the same as the Cartesian example at all. Two beliefs may both be false; though they needn't be identical beliefs. However, if there were two people who were the Scarlet Pimpernel, then they'd need to be identical – indeed numerically identical.

The Language Game of Scepticism

Wittgenstein brings in his notion of a language game to make sense of global doubt. Again, his argument against doubt is simple. That argument is that philosophical (or sceptical) doubts don't arise in any of our language games. Therefore Wittgenstein believed that we should simply ignore them. Chappell writes:

“The trouble with crazy sceptical hypotheses, according to Wittgenstein, is that they don’t crop up in any of the various language games that make up the texture of ordinary life in the world. That is why it doesn’t make sense to discuss them.”

This is a repeat of the claim that “crazy sceptical hypotheses” don’t have any context. And if they have no context (outside philosophy!), then “it doesn’t make sense to discuss them”. However, the sceptic (or philosopher) may simply reply:

So what! I don’t care if scepticism has "no context" or if there's no sceptical "language game". What I'm saying may still be legitimate and even true! In any case, why can’t scepticism (or philosophy generally) itself be a language game?

After all, philosophy is indeed a language game (if we must use Wittgenstein's term) which has been played for over two thousand years. And scepticism itself has been an important and influential language game within philosophy - and indeed within Western culture generally. What better examples of a language game could you have?

Moreover, is it really true that scepticism only exists in the language game of philosophy? To take two extreme examples. What about the many conspiracy theories that are so much a part of our culture? And what about the intense scepticism which is directed against science and indeed against philosophy (e.g., Wittgenstein's own position!) itself?

In addition, shouldn’t a Wittgensteinian say that the very fact that “crazy sceptical hypotheses” have been discussed at all means that they must have been discussed in one (or in various) language games? Every discourse - crazy or sane - needs its own language game. Indeed isn’t that one of Wittgenstein’s main points about language games?

Despite saying all that, Chappell states that

“the sceptic isn’t playing any legitimate language game in his discourse, and so is talking nonsense”.

Again, who says that the sceptic isn’t playing a language game? And who says that if the sceptic is indeed playing a language game, then his language game isn't "legitimate"? Is it because it's not the language game of the ordinary man speaking "ordinary language"? The sceptic may again say:

So what! Why should I care about ordinary language or the ordinary man?

So I’m not sure why - or how - Wittgenstein excluded scepticism from all language games or managed to deny that it's a legitimate language game.

Perhaps Wittgenstein might have replied:

But that’s where you're wrong! The sceptic’s discourse doesn't make sense. It's meaningless. It's meaningless precisely because it's not ordinary language. (It doesn't use accepted terms in the way that people use them in everyday life.) Therefore the sceptic’s discourse doesn't make sense. It's nonsense.

It's certainly true that sceptical “linguistic activity” does indeed have “its own rules”. Indeed it can hardly not do so. And because it does have its own rules, then it must also be a bona fide (Wittgensteinian) language game. However, it just happened to be a language game which Wittgenstein himself didn't like. (Just as William P. Alston – in his paper 'Yes, Virginia, There Is a Real World' - favours religious language games; though he doesn't like the language games of what he calls "relativism" or "scientism".) If we truly believe in Wittgensteinian language games, then we simply can't pick and choose which ones we accept and which ones we reject. If it's a “human linguistic activity with its own rules”, then it's also a language game. Indeed, according to Wittgenstein himself (if only implicitly), it's irrelevant if you or I agree or disagree with the other language games we don’t belong to. After all, all language games - almost by definition - are (at least partly) autonomous and thus beyond the criticisms of other language games.

Isn't all this the truly relativistic result of Wittgenstein's theory of languages games?

*******************************

*) This piece can also be found @ the New English Review as 'Wittgenstein's Doubts About Doubt'.



Monday 23 April 2018

Daniel Dennett's Chinese Room




The following is a critical account of the 'The Chinese Room' chapter in Daniel Dennett's book, Intuition Pumps and Other Tools For Thinking.

*******************************

In this chapter Daniel Dennett doesn't really offer many of his own arguments against John Searle's position. What he does offer are a lot of blatant ad hominems and simple endorsements of other people's (i.e., AI aficionados') positions on the Chinese Room argument.

Indeed Dennett is explicit about his support (even if it's somewhat qualified) of the “Systems Reply”:

“At the highest level, the comprehending powers of the system are not unimaginable, we even get an insight into just how the system comes to understand what it does. The systems reply no longer looks embarrassing; it looks obviously correct.”

Dennett concludes:

“... Searle's thought experiment doesn't succeed in what it claims to accomplish: demonstrating the flat-out impossibility of Strong AI.”

We can happily accept that Searle's thought experiment doesn't entirely (or even at all) succeed in what it claims to accomplish. However, Dennett's claims (or those he endorses) don't demonstrate the possibility of Strong AI either. In addition, it can also be said that Searle himself never claimed the “flat-out impossibility of Strong AI” in the first place... Though that's another issue entirely.

The Systems Reply

It seems fairly clear that Dennett accepts the “Systems Reply (Berkeley)” argument against Searle's position. This is odd really because the Systems Reply is Searle's own take on what he thought the Opposition believed to be wrong with his own argument. (At least as it was first stated in the early days.) In other words, these aren't the actual words of any of the Opposition.

This is how Dennett himself quotes Searle in full:

“... 'While it is true that the individual person who is locked in the room does not understand the story, the fact is that he is merely part of the whole system, and the system does understand the story.'...”

So what is that “whole system”? This:

“... 'The person has a large ledger in front of him in which are written the rules, he has a lot of scratch paper and pencils for doing calculations, he has a 'data bank' of sets of Chinese symbols.'...”

I suppose it would be pretty obvious that if Searle put himself in a “system” (even if he had a large ledger of written rules, paper and pencils for doing calculations and data banks of Chinese symbols), it would still be Searle himself who'd be making use of all these elements of that system. Thus, in that sense, the original problem seems to be replicated. That is,

If Searle didn't originally understand Chinese

then

Searle + a large ledger, data banks, etc. wouldn't understand Chinese either.

That's because it is Searle himself, after all, who's making use of - and attempting to understand - these separate parts of the system. And even when the parts are taken together, it's still Searle who's taking them together and Searle who's doing the understanding. Thus the system doesn't seem to add anything other than a set of tools and data banks which Searle himself makes use of.

If all that's correct, then it's understandable that Searle-outside-the-room (i.e., Searle qua philosopher) should have a problem with this conclusion. So here's Dennett quoting Searle again:

“... 'Now, understanding is not being ascribed to the mere individual [Searle-in-the-room]; rather it is being ascribed to this whole system of which he is a part.'...”

To repeat. It's Searle-in-the-room who's making use of the whole system. Thus it's also Searle-in-the-room who's both using the system's parts and doing any understanding of its separate parts and the system taken as a whole.

Dennett's Examples

As stated, the Systems Reply simply seems to replicate the original problem - except for the addition of extra parts in order to create a system. Nonetheless, Dennett does indeed appear to believe that the addition of extra parts is of importance to this issue.

Firstly, instead of talking about Searle-in-the-room and the other things in that room, he now gives the example of a “register machine”.

Dennett says that 


“the register machine in conjunction with the software does perfect arithmetic”. 

So now we have this:

the register machine + software = a system capable of “perfect arithmetic”

Of course that's just like the following:

Searle-in-the-room + data banks + etc. =  a system capable of understanding Chinese

And then Dennett offers another equivalent example:

the central processing unit (CPU) + chess programme = a system capable of “beating you at chess”
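To make the register-machine point concrete, here is a minimal sketch in Python. It's my own illustration, not Dennett's text: the instruction set (Inc, Deb, End) and the three-step addition program are assumptions for the sake of the example. The moral is the Systems Reply in miniature: no single instruction or register “understands” addition, yet the machine running the program does perfect arithmetic.

# A toy register machine (an illustrative assumption, not quoted from Dennett).
# Instructions: ("Inc", reg, next)          - add 1 to reg, go to step next
#               ("Deb", reg, next, branch)  - subtract 1 from reg if possible,
#                                             otherwise jump to step branch
#               ("End",)                    - halt and return the registers
def run(program, registers):
    step = 1
    while True:
        op = program[step]
        if op[0] == "End":
            return registers
        if op[0] == "Inc":
            _, reg, nxt = op
            registers[reg] += 1
            step = nxt
        else:  # "Deb"
            _, reg, nxt, branch = op
            if registers[reg] > 0:
                registers[reg] -= 1
                step = nxt
            else:
                step = branch

# ADD: move the contents of register 1 into register 0, one unit at a time.
add_program = {
    1: ("Deb", 1, 2, 3),   # if register 1 is empty, halt
    2: ("Inc", 0, 1),      # otherwise bump register 0 and loop
    3: ("End",),
}

print(run(add_program, {0: 3, 1: 4}))  # {0: 7, 1: 0} - "perfect arithmetic"

None of the parts grasps what addition is; the ledger-and-scratch-paper system in Searle's room is supposed to be understood along the same lines.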

Since Dennett is a behaviourist and a verificationist, his position seems to simply bypass Searle's central argument. So what is Dennett's behaviourist and verificationist position? This:

If 

Searle-in-the-room delivers correct answers in Chinese, the register machine does perfect arithmetic, and the computer beats someone at chess, 

then 

Searle-in-the-room, the register machine and the computer understand (respectively) Chinese, arithmetic and chess. 

That is, Searle-in-the-Room, the register machine and the computer behave in a way that a True Understander would behave. Thus, to Dennett, they must be True Understanders.

Indeed Dennett is explicit about his verificationist and behaviourist position when he mentions that ultimate behaviourist and verificationist test – the Turing test. (Of course Dennett doesn't – as far as I know – call himself a “behaviourist” or a “verificationist”.) Actually, as with the Systems Reply, Dennett quotes Searle again (this time only in part). Dennett writes:

“If the judge can't reliably tell the difference, the computer (programme) has passed the Turing test and would be declared not just to be intelligent, but to 'have a mind in exactly the sense that human beings have minds,' as Searle put it in 1988.”

Now since Dennett doesn't argue against this account of the Turing test - or against its conclusion - surely he must accept it.

Dennett would very happily accept that a computer which had passed the Turing test is “intelligent”. (Indeed I think that too; depending on definitions.) However, I don't believe that Dennett also needs to accept Searle's addition. That is, I don't believe that Dennett needs to believe that this particular computer

“ha[s] a mind in exactly the sense that human beings have minds”.

Firstly, this particular computer might have passed an extremely rudimentary test. Thus it couldn't possibly be said to “have a mind in exactly the sense that human beings have minds”. Perhaps it has a mind. However, how would we know that? And how could we also say that this computer has a mind that's "exactly" the same as all human minds or exactly the same as any particular human mind?

Secondly, surely Dennett would accept that there's more to human minds than merely answering questions. This may mean that the best that can be said is that this computer has a type of mind. Perhaps if this (or any) computer were more extensively tested (or if it accomplished different things other than answering questions), then this would take the computer towards having a mind which is very much like a human mind.

So this particular computer, after this particular test, can be said to have a kind of a mind; just not a mind that can be said to be the same as a human mind (i.e., in all respects).

However (as stated), perhaps Dennett wouldn't see the point of my qualification. That is, after this particular computer had passed this particular test, perhaps Dennett would indeed have said that it (to use Searle's words again) “ha[s] a mind in exactly the sense that human beings have minds”.

As before, whatever Dennett's exact position, he puts the Strong AI position on the Turing test without criticising or adding to it. Thus Dennett continues:

“Passing the Turing test would be, in the eyes of many in the field, the vindication of Strong AI.”

So why is that? According to Dennett again:

“Because, they [Strong AI people] thought (along with Turing), you can't have such a conversation without understanding it, so the success of the computer conversationalist would be proof of its understanding.”

Again, this is to judge this computer according to purely behaviourist logic. That is, if the computer answers the questions correctly, then that's literally all there is to it. It must also understand the questions. As for verificationism: all we have is the computer's behaviour to go on. There's nothing else to verify or to postulate.

Zombies/Zimbos

Dennett's behaviourist and verificationist position on this particular computer (as well as on its Turing test) is analogous to his position on those philosophers' zombies he also has a problem with.

Actually, Dennett calls such a zombie a “zimbo”. A zimbo is an entity which is physically, functionally and behaviourally exactly like us. However, a zimbo is still meant to be lacking a certain... something.

More relevantly, the zimbo can pass the Turing test too. (Or at least the specific Turing test which the aforesaid computer passed.) That is, the Turing test - and you and I -

“can't distinguish between a zimbo and a 'really conscious' person, since whatever a conscious person can do, a zimbo can do exactly as well”.

So just as this computer doesn't need that extra something, neither does Dennett's zimbo. In both cases, all we have is the behaviour of the computer and this zimbo. And their behaviour tells us that they're both intelligent and indeed that both have a mind.

In fact Dennett seems to go one step further than that.

Dennett moves swiftly from the computer and the zimbo being intelligent (or having intelligence), to their both being “conscious”. In Dennett's own words:

“[T]he claim that an entity that passes the Turing test is not just intelligent but conscious.”

As stated before, Dennett seems to be putting the Strong AI position. He also seems to be endorsing that position. And this appears to be the case because Dennett neither argues against this position nor does he really add to it.

*********************************

*) See my 'Against Daniel Dennett's Heterophenomenology': a critical account of the 'Heterophenomenology' chapter of Daniel Dennett's book, Intuition Pumps and Other Tools For Thinking.