John Searle accuses many of those who
accuse him of being a “dualist” of being... dualists. Bearing in mind the philosophical ideas discussed below, this stance of his shouldn't come as a surprise.
Searle's basic position on this is:
i) If Strong AI proponents, computationalists, functionalists and the like ignore or play down the physical biology of brains and instead focus exclusively on syntax, computations and functions (the form/role rather than the physical embodiment),
ii) then that will surely lead to some kind of dualism in which non-physical abstractions basically play the role of Descartes' non-physical and “non-extended” mind.
Or
to use Searle's
own words:
"If
mental operations consist of computational operations on formal
symbols, it follows that they have no interesting connection with the
brain, and the only connection would be that the brain just happens
to be one of the indefinitely many types of machines capable of
instantiating the program. This form of dualism is not the
traditional Cartesian variety that claims that there are two sorts of
substances,
but it is Cartesian in the sense that it insists that what is
specifically mental about the brain has no intrinsic connection with
the actual properties of the brain. This underlying dualism is masked
from us by the fact that AI literature contains frequent fulminations
against 'dualism'.”
Searle
is noting the radical disjunction created between the actual
physical reality of
biological brains and how these philosophers and scientists explain
and account for mind, consciousness and understanding.
So Searle doesn't believe that only biological brains can give rise to minds, consciousness and understanding. His position is that, at present, only biological brains do give rise to them. He's emphasising an empirical fact, not denying the logical and metaphysical possibility that other things could bring forth mind, consciousness and understanding.
Searle
is arguing that the biological brain is played down or even ignored by those in AI and cognitive science, and by many in the philosophy of mind. When put that bluntly, such a position sounds like an almost perfect description of dualism. Or, at the very least, it seems like a stance which would help advance a (non-Cartesian) dualist philosophy of mind.
Yet because the people just referred to (those involved in artificial intelligence, cognitive science and the philosophy of mind) aren't committed to what used to be called a “Cartesian ego” (they don't even mention it), the charge of “dualism” seems - superficially! - to be unwarranted. However, someone can be a dualist without being a Cartesian dualist. Or, more accurately, someone can be a dualist without positing the kind of non-material substance formerly known as the Cartesian ego.
However, just as the Cartesian ego is non-material, non-extended (or non-spatial) and perhaps also abstract, so too are the
computations and the (as Searle puts it) “computational operations
on formal symbols” which are much loved by those involved in AI,
cognitive science and whatnot.
Churchland on Functionalism as Dualism
Unlike
Searle, Patricia Churchland doesn't actually use the word “dualist” for her opponents, though she does say the
following:
“Many philosophers
who are materialists to the extent that they doubt the existence of
soul-stuff nonetheless believe that psychology ought to be
essentially autonomous from neuroscience, and that neuroscience will
not contribute significantly to our understanding of perception,
language use, thinking, problem solving, and (more generally)
cognition.”
Put Churchland's way, it seems like an extreme position. Basically,
how could “materialists” (when it comes to the philosophy of mind
and psychology) possibly ignore the brain?
It's one thing to say that
“psychology is distinct from neuroscience”.
It's another thing to say that psychology is
“autonomous from neuroscience”
and that
“neuroscience will not contribute significantly to our understanding” of cognition.
Sure, the division of labour idea is a good thing. However, to see the “autonomous” in “autonomous science” as being about complete and total independence is surely a bad idea. In fact it's almost like a physicist stressing the independence of physics from mathematics.
Churchland
thinks that biology matters. In this she has the support of many
others.
For example, the Nobel laureate Gerald Edelman says that the mind
“can only be
understood from a biological standpoint, not through physics or
computer science or other approaches that ignore the structure of the
brain”.
In
addition, you perhaps wouldn't ordinarily see Patricia Churchland and John Searle as bedfellows, though on this issue they are. So it's worth
quoting a long passage from Searle which neatly sums up some of
the problems with non-biological theories of mind. He
writes:
“I believe we are
now at a point where we can address this problem [of consciousness] as a biological problem like any other. For decades research has
been impeded by two mistaken views: first, that consciousness is just
a special sort of computer program, a special software in the
hardware of the brain; and second that consciousness was just a
matter of information processing. The right sort of information
processing -- or on some views any sort of information processing --
would be sufficient to guarantee consciousness... it is important
to remind ourselves how profoundly anti-biological these views are.
On these views brains do not really matter. We just happen to be
implemented in brains, but any hardware that could carry the program
or process the information would do just as well. I believe, on the
contrary, that understanding the nature of consciousness crucially
requires understanding how brain processes cause and realize
consciousness.”
In
a sense, then, if one says that biology matters, one is also saying
that functions aren't everything (though not that functions
are nothing). Indeed Churchland takes this position to its
logical conclusion when she more or less argues that in order to
build an artificial brain one would not only need to replicate its
functions: one would also need to replicate everything physical about it.
Here
again she has the backing of Searle. He writes:
“Perhaps when we
understand how brains do that, we can build conscious artifacts using
some non-biological materials that duplicate, and not merely simulate,
the causal powers that brains have. But first we need to understand
how brains do it.”
Of
course it can now be said that we can have an artificial mind without
having an artificial brain. Nonetheless, isn't it precisely this position
which many dispute (perhaps Churchland does too)?
In
any case, Churchland herself says that
“it may be that if
we had a complete cognitive neurobiology we would find that to build
a computer with the same capacities as the human brain, we had to use
as structural elements things that behaved very like neurons”.
Churchland
continues by saying that
“the artificial
units would have to have both action potentials and graded
potentials, and a full repertoire of synaptic modifiability,
dendritic growth, and so forth”.
It
gets even less promising for functionalism when Churchland says that
“for all we know
now, to mimic nervous plasticity efficiently, we might have to mimic
very closely even certain subcellular structures”.
Put
that way, Churchland makes it sound as if an artificial mind (if not artificial intelligence) is still a
pipe-dream.
Readers
may also have noted that Churchland was only talking about the
biology of neurons, not the biology of the brain as a whole. However,
wouldn't the replication of the brain (as a whole) make the
artificial-mind endeavour even more complex and difficult?
In
any case, Churchland sums up this immense problem by saying that
“we simply do not
know at what level of organisation one can assume that the physical
implementation can vary but the capacities will remain the same”.
That's
an argument which says that it's wrong to accept the
implementation-function “binary opposition” (to use a phrase from Jacques Derrida) in the first place. That's not to say - and Churchland doesn't say - that it's wrong to concentrate on functions or on cognition generally. It's just wrong to completely ignore the
“physical implementation”. Or, as Churchland says at the
beginning of one paper, it's wrong to “ignore neuroscience” and
focus entirely on function.
Churchland
puts the icing on the cake herself by stressing function. Or, more
correctly, she stresses the functional levels which are often ignored
by functionalists.
Take
the cell or neuron. Churchland writes
that
“even at the level
of cellular research, one can view the cell as being the functional
unit with a certain input-output profile, as having a specifiable
dynamics, and as having a structural implementation in certain
proteins and other subcellular structures”.
Basically,
what's being said here is that in many ways what happens at the macro
level of the mind-brain (in terms of inputs and outputs) also has an
analogue at the cellular level. In other words, functionalists are
concentrating on the higher levels at the expense of the lower
levels.
Another way of putting this is what Churchland herself argues: neuroscientists aren't ignoring functions at all. They are, instead, tackling biological functions rather than abstract cognitive functions.