The philosopher John Searle often makes the point that many believers in Artificial Intelligence (AI), computational cognitive science, etc. almost entirely ignore biology. Thus (to Searle) they become contemporary Cartesians in that they draw a sharp distinction between functions/computations/algorithms and whatever it is that instantiates these things in human beings and other animals – i.e., brains and central nervous systems!
The other thing they play down (or even ignore) is physical embodiment and the myriad interactions with the environment that biological creatures experience. So even if a computer becomes embodied in a machine/robot (with arms, legs, “sensory receptors”, etc.), it's still the computations/the programme that do all the work.
There's more to being embodied than connecting artificial arms, legs, ears, eyes, etc. to a computer or central processing unit (which could even be a human being controlling knobs and levers).
To use the words of the Welsh science journalist (editor-in-chief of New Scientist) Alun Anderson, attaching arms or fingers, ears, etc. to a computer/robot doesn't actually give rise to real “extended tactile experiences”. Basically, to have genuine minds, we need genuine tactile experiences.
In full, this is how Alun Anderson expressed one problem:

“If bodies and their interaction with brains and planning for action in the world are so central to human kinds of mind, where does that leave the chances of creating an intelligent disembodied mind inside a computer? Perhaps the Turing test will be harder than we think.”
The position above can be put simply. We don't simply have brains. What we have are organs with sensory receptors “sending messages” to brains. In terms of “tactile experiences”, those messages are more or less immediate. Not only that: they're very responsive to the multitude of extreme particularities of specific environmental or bodily encounters. And organisms can, over various timescales, adapt or change in response to such specific environmental or bodily encounters.
Of course computers-in-robots can respond to the environment. But how does that compare to what's just been described? We can accept that it compares to a degree – but to what degree?
As it is, it can be said that robots do already interact with their environments. So what's missing? Do these robots have genuine “tactile experiences”?
What brings about genuine tactile experiences? In the case of human beings and other animals, it's basically biology. Biology is obviously missing from robots. So is biology that important? What is it about human and animal biology that adds something to the link between the environment and the brain/mind, something that isn't achieved by a computer (or a person) inside, or running, a robot?
Anderson ties “extended tactile experiences” to the “understanding of language”. He must believe that computers don't truly understand language (à la John Searle). True, computers physically respond to language as input and then produce output. Though that may not be genuine understanding. (Yes: “What is genuine understanding?”) In Anderson's words, computers “cannot say anything meaningful” even when there are such things as computer languages. (Say, when computers are fed natural-language information and then produce some kind of natural-language output.) As stated, the missing link here, according to Anderson, is “extended tactile experiences”. Computers (and computers embedded in robots) don't have tactile experiences, and therefore they don't truly understand language.
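To make the point vivid, here is a minimal, purely hypothetical Python sketch of the kind of rote symbol manipulation Searle and Anderson have in mind. The rules and replies are invented for illustration; nothing in the program perceives, acts on the world or has tactile contact with anything. It simply maps input strings to output strings.

    # A hypothetical sketch of pure symbol manipulation (not any real system):
    # input strings are mapped to output strings by rote lookup. Nothing here
    # understands, perceives or touches anything.

    RULES = {
        "how are you?": "I'm fine, thank you.",
        "what colour is the sky?": "The sky is blue.",
    }

    def respond(utterance: str) -> str:
        # Return a canned reply for a known input, or a stock fallback.
        return RULES.get(utterance.strip().lower(), "Tell me more.")

    print(respond("What colour is the sky?"))  # prints: The sky is blue.

The output is well-formed natural language, and it's even “responsive” to the input; yet it would be odd to say the program understands the word “sky” in the way a creature that has stood under one does.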
This is a simple (or simply-put) picture. Is Alun Anderson correct?