The Mind as a Computer: Syntax and Semantics
The first thing you can say (following John Searle) is that when a computer manipulates 0s and 1s, it doesn't know what they mean, symbolise, stand for, or what their referents are. Indeed the 0s and 1s don't have any semantic features. They're purely syntactical. The only thing that matters to the computer is the shape of '0' and '1' – nothing more. That's why, as Searle says, “any old symbol will do just as well”.
At
its most basic, a computer simply scans a tape. Or, if not literally
a tape (as in a Turing machine), then it scans something or other. This
tape (or this something) will only contain 0s and 1s. What can the
computer (or computer ‘head’) do to these 0s and 1s? It can
perform four operations:
- It can move the tape one square to the left.
- It can move the tape one square to the right.
- It can erase a 0 and print a 1.
- It can erase a 1 and print a 0.
Here’s where the analogy with logic comes in. Instead of logic’s rules of inference, we have a set of rules of the form “under condition C perform act A”. This set of rules is called the computer programme. And the purpose of the programme is to encode information; this information is encoded in the binary code of zeroes and ones.
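As a rough illustration (my own, not anything from Searle), here is a minimal sketch in Python of the kind of machine just described: a tape of 0s and 1s, a head, and a table of rules of the form “under condition C perform act A”. All the names and the sample rule table are made up for the sketch.

```python
# A minimal sketch of a Turing-style machine: a tape of 0s and 1s, a head,
# and a rule table. Names and the example programme are illustrative only.

def run(tape, rules, state="start", head=0, max_steps=1000):
    """Run a tiny Turing-style machine.

    `rules` maps (state, symbol under the head) to
    (symbol to print, direction to move, next state) -- i.e. rules of the
    form "under condition C perform act A", built from the four operations
    listed above (print a 0 or a 1, move one square left or right).
    """
    tape = list(tape)
    for _ in range(max_steps):
        if state == "halt" or not (0 <= head < len(tape)):
            break
        write, move, state = rules[(state, tape[head])]
        tape[head] = write                 # erase the old symbol, print the new one
        head += 1 if move == "R" else -1   # move one square left or right
    return "".join(tape)

# An illustrative programme: flip every bit, moving rightwards until the tape ends.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
}

print(run("0110", rules))   # -> "1001"
```

Run on the tape "0110", this toy programme prints "1001". Note that the machine only ever reacts to the shapes '0' and '1'; it attaches no meaning to them.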
The
computer translates the encoded information (which is in the form of
0s and 1s) into electrical impulses. It then processes these
electrical impulses (which are now bits of information) according to
the rules of the programme. We can say that the computer programme is
a set of rules for processing information (or for processing
electrical impulses).
In a sense, if the computations or symbols have no meaning (or they don't actually symbolise anything), then they aren't actually symbols at all. Of course they're symbols for us; though not for the computer itself. The only things that matter for the computer are the formal and syntactical features of the symbols – whether these symbols are 0s, 1s or whatever.
According to Searle, the human mind doesn’t just manipulate symbols
(whatever those symbols are taken to be). What more is there to minds?
Well, “minds have contents”. What does content mean?
It means that if we're thinking in English (or even manipulating
English symbols such as ‘y’ and ‘s’, ‘cat’ and ‘tail’
or ‘The cat has a tail’), it's not just a question of the forms,
shapes or syntax of these symbols: we also need to know what they actually
mean. Thus in the sentence “The cat has a tail” the words ‘cat’
and ‘tail’ have references, and “has a tail” is predicated of
the subject (which is a cat). And so on.
Not
only that: some of the words have a sense. The whole sentence has a
sense (or meaning) and a truth-value. We have a semantics which
includes meaning, reference and predication; none of which matter to a computer because this is a question of content, not syntax. That is, formal symbols alone don't guarantee or provide semantic content. And without semantic content we have no mind. Thus computers (or their programmes) aren't minds.
Searle
sums up his argument thus:
- Programs are entirely syntactical.
- Minds have a semantics.
- Syntax is not the same as, nor by itself sufficient for, semantics.
It follows that for minds, semantics is important. Or, to put it more plainly, for minds meaning is important. And because computers (or their programmes) don't have meanings (or know what their symbols mean), they can't be minds.
Strong Artificial Intelligence
On this view, it's not the physical aspects of a computer that bring about or cause mind or consciousness: the programme itself is a mind. So this isn't a case of mind emerging from the programme's implementation in hardware. Mind is the programme. Mind is the software.
So if the software (or programme) is enough in itself, then of course the hardware won't matter when it comes to a computer being a mind or having mental states. Anything could implement the programme. It doesn't really matter what does so, because the programme itself constitutes mind or mental states. In computers it just happens to be silicon chips and electrical circuits. In human beings it just so happens to be biological brains. Of course the programme will need some kind of hardware; though it doesn't really matter which kind of hardware. (In the case of the brain it's ‘wetware’.)
Despite
all that, many things can be said to be computers. So to say that the
mind is like a computer (or even is a computer) may not amount to
much. Searle writes:
“For
example, the window in front of me is a very simple computer. Window
open = 1, window closed = 0. That is, if we accept Turing’s
definition according to which anything to which you can assign a 0
and a 1 is a computer, then the window is a simple and trivial
computer.”
Is it really just a question of anything we can assign 1s and 0s to being a computer (or, rather, a digital computer)? In any case, why is it simply a case of 0s and 1s? Why not 3s and 4s as well? Why not the letter ‘S’ or the words ‘hat’ or ‘Jack’? Indeed why not the symbols ‘/’ and ‘*’ instead? From what Searle has said, these shapes or syntactic marks would work just as well. After all, it's all about syntax, not about what ‘*’ means or what it symbolises or signifies.
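To see that point in miniature (again, my own toy example rather than Searle's): take the bit-flipping rule table from the earlier sketch and mechanically relabel its two marks, say '0' as '*' and '1' as '/'. The rule table, and hence the machine's behaviour, is structurally unchanged.

```python
# Illustrative only: swap the marks '0'/'1' for '*'/'/' and nothing of
# substance changes, because the machine only ever responds to the shapes.

RELABEL = {"0": "*", "1": "/"}

# The bit-flipping rule table from the earlier sketch, in the original alphabet.
rules_01 = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
}

# The same rules, mechanically rewritten in the new alphabet.
rules_star_slash = {
    (state, RELABEL[sym]): (RELABEL[out], move, nxt)
    for (state, sym), (out, move, nxt) in rules_01.items()
}

print(rules_star_slash)
# {('start', '*'): ('/', 'R', 'start'), ('start', '/'): ('*', 'R', 'start')}
```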
Brain Processes and Computations
Searle has said that the brain is a machine. And if the brain is a machine, it must have machine processes. So what are the brain's machine processes? One example would be a neuron firing, which is like "internal combustion". However, neuron firing, internal combustion and other machine processes aren't like computation. Why is that? Searle writes:
“…
computation is an
abstract mathematical process that exists only relative to conscious
observers and interpreters. Observers such as ourselves have found
ways to implement computation on silicon-based electrical machines,
but that does not make computation into something electrical or
chemical.”
This
means that neuron firing and internal combustion don't “exist only
relative to conscious observers and interpreters”:
computations do. Computations need to be observed and interpreted
because they're abstract mathematical processes. We can make a distinction between computations (or abstract mathematical processes) and the physical things which implement them. However, we
can't make a distinction between neurons firing (or internal
combustion) and the physical things that implement them.
Neuron firings just are their physical implementations. They aren't
abstract and they're not intrinsically mathematical or intrinsically
anything other than physical and biochemical.
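A small toy illustration of that observer-relativity (my own, with made-up values): the same physical states count as computing one string rather than another only under an interpretation that an observer supplies.

```python
# Illustrative only: the "physical" states below are just numbers standing in
# for, say, voltage levels. Which bit-string the device "computes" depends on
# the interpretation we bring to it, not on the physics alone.

voltages = [0.2, 4.9, 5.1, 0.1]          # stand-in for physical states of a device

def as_bits(vs, threshold=2.5):
    """One possible interpretation: high voltage = 1, low voltage = 0."""
    return ["1" if v > threshold else "0" for v in vs]

def as_bits_inverted(vs, threshold=2.5):
    """Another, equally workable interpretation: high voltage = 0, low = 1."""
    return ["0" if v > threshold else "1" for v in vs]

print(as_bits(voltages))           # ['0', '1', '1', '0']
print(as_bits_inverted(voltages))  # ['1', '0', '0', '1']
# The voltages themselves are just physics, like a neuron firing or combustion;
# the computation exists relative to the interpretation an observer applies.
```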
The Computer’s Simulation of Mind
If one were a behaviourist or a functionalist, then the behaviour of computers alone would tell us whether they have minds. According to Searle, though, this would only be a simulation of minds. That's why we can simulate minds (or, more precisely, the workings of minds) in computers. But the simulation of mind is not mind. Searle writes:
“Computers
are immensely useful devices for simulating brain processes. But the
simulation of mental states is no more a mental state than the
simulation of an explosion is itself an explosion.”
That's why the zombie scenario is so popular in the philosophy of mind.
In a sense, a zombie simulates a human person by behaving or
acting like a human person. Though behaving or acting like a human
person isn't the same as being a human person. Does the parrot which says "Hello John" act or behave like a human person simply
because it simulates a greeting every time its owner arrives home
from work? Does this verbal response make the parrot a person?
Does it even have a mind simply because it can articulate the words "Hello John"? Does it understand these words? Does it know what
they mean? Indeed does a computer know what the words "Hello John" mean?
If a turd said "Hello John", would that turd have a mind? If, by
accident, the pebbles on a sea shore spelled the words "Hello John Searle",
would the sea shore or the beach have a mind?