Word Count: 4218
i) Introduction
ii) Implementation
iii) Causal Structure
iv) The Computer's Innards
v) Chalmers' Biocentrism?
vi) The Chinese Room
It's a good thing that the abstract and the concrete (or abstract objects in "mathematical space" and the "real world") are brought together in David Chalmers' account of Strong AI. Often it's almost (or literally) as if AI theorists believe that (as it were) disembodied computations can themselves bring about mind or even consciousness. (The same can be said, though less strongly, about functions or functionalism.) This, as John Searle once stated1, is a kind of contemporary dualism in which abstract objects (computations/algorithms – the contemporary version of Descartes' mind-substance) bring about mind and consciousness on their own.
To capture the essence of what Chalmers is attempting to do we can quote his own words when he says that it's all about “relat[ing] the abstract and concrete domains”. And he gives a very concrete example of this.

Take a recipe for a meal. To Chalmers, this recipe is a “syntactic object[]”. However, the meal itself (as well as the cooking process) is an “implementation” that occurs in the “real world”.
So, with regard to Chalmers' own examples, we need to tie "Turing machines, Pascal programs, finite-state automata", etc. to "[c]ognitive systems in the real world" which are "concrete objects, physically embodied and interacting causally with other objects in the physical world".
In the above passage, we also have what may be called an "externalist" as well as an "embodiment" argument against AI abstractionism. That is, the creation of mind and consciousness is not only about the computer/robot itself: Chalmers' "physical world" is undoubtedly part of the picture too.
Of course most adherents of Strong AI would never deny that such abstract objects need to be implemented in the "physical world". It's just that the manner of that implementation (as well as the nature of the physical material which does that job) seems to be seen as almost – or literally – irrelevant.
Implementation
It's hard to even comprehend how someone could believe that a programme (alone) could be a candidate for possession of (or capable of bringing about) a conscious mind. (Perhaps no one does believe that.) In fact it's hard to comprehend what that could even mean. Having said that, when you read the AI literature, that appears to be exactly what some people believe. However, instead of the single word “programme”, AI theorists will also talk about “computations”, “algorithms”, “rules” and whatnot. But these additions still amount to the same thing – “abstract objects” bringing forth consciousness and mind.
So we surely need the (correct) implementation of such abstract objects. It's here that Chalmers himself talks about implementations. Thus:
“Implementations of programs, on the other hand, are concrete systems with causal dynamics, and are not purely syntactic. An implementation has causal heft in the real world, and it is in virtue of this causal heft that consciousness and intentionality arise.”
Then Chalmers delivers the clinching line:

“It is the program that is syntactic, it is the implementation that has semantic content.”
More clearly, the physical machine is deemed to belong to the semantic domain, while the syntactic machine (the program) is abstract. Thus a physical machine is said to provide a “semantic interpretation” of the abstract syntax.
Yet how can the semantic automatically arise from an implementation of that which is purely syntactic? Well, that depends. Firstly, it may not automatically arise. And, secondly, it may depend on the nature (as well as physical material) of the implementation.
So it's not only about implementation. It's also about the fact that any given implementation will have a certain “causal structure”. And only certain (physical) causal structures will (or could) bring forth mind.
Indeed, bearing all this in mind, the notion of implementation (at least until fleshed out) is either vague or all-encompassing. (For example, take the case of one language being translated into another language: that too is classed as an “implementation”.)
Thus the problem of implementation is engendered by this question:

How can something concrete implement something abstract?

Then we need to specify the precise tie between the abstract and the concrete. That, in turn, raises a further question:

Can't any arbitrary concrete/physical implementation of something abstract be seen as a satisfactory implementation?
In other words, does the physical implementation need to be an “isomorphic” (a word that Chalmers uses) mirroring (or a precise “mapping”) of the abstract? And even if it is, how does that (in itself) bring about the semantics?

And it's here that we arrive at causal structure.
Causal Structure
One can see how vitally important causation is to Chalmers when he says that "both computation and content should be dependent on the common notion of causation".
In other words, a computation and a given implementation will share a causal structure. Indeed Chalmers cites the example of a Turing machine by saying that "we need only ensure that this formal structure is reflected in the causal structure of the implementation".
He continues:

"Certainly, when computer designers ensure that their machines implement the programs that they are supposed to, they do this by ensuring that the mechanisms have the right causal organization."

"A physical system implements a given computation when the causal structure of the physical system mirrors the formal structure of the computation."
Then Chalmers goes into more detail:

"A physical system implements a given computation when there exists a grouping of physical states of the system into state-types and a one-to-one mapping from formal states of the computation to physical state-types, such that formal states related by an abstract state-transition relation are mapped onto physical state-types related by a corresponding causal state-transition relation."
Despite all that, Chalmers says that all the above is "still a little vague". It is vague, he thinks, because we still need to "specify the class of computations in question".
Chalmers also stresses the notion of (correct) "mapping". And what's relevant about mapping is that the "causal relations between physical state-types will precisely mirror the abstract relations between formal states".
Moreover:

"What is important is the overall form of the definition: in particular, the way it ensures that the formal state-transitional structure of the computation mirrors the causal state-transitional structure of the physical system."
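To make the overall form of that definition concrete, here is a minimal sketch in Python (my own illustration, not Chalmers' formalism): a toy abstract state-transition relation, a toy set of "physical" causal transitions already grouped into state-types, and a check that a one-to-one mapping sends every formal transition onto a causal one. All of the data and names below are hypothetical.

```python
# A minimal sketch (my illustration, not Chalmers' own formalism) of the
# definition above: a physical system implements a computation when some
# one-to-one mapping from formal states to physical state-types sends every
# formal state-transition onto a causal transition between the mapped types.

formal_transitions = {("A", "B"), ("B", "A"), ("B", "B")}                     # abstract state-transition relation
physical_transitions = {("hot", "cold"), ("cold", "hot"), ("cold", "cold")}   # observed causal transitions between state-types

def implements(mapping, formal, physical):
    """True if every formal transition (s1 -> s2) is matched by a causal
    transition (mapping[s1] -> mapping[s2]) in the physical system."""
    return all((mapping[s1], mapping[s2]) in physical for (s1, s2) in formal)

mapping = {"A": "hot", "B": "cold"}   # one-to-one mapping: formal states -> physical state-types
print(implements(mapping, formal_transitions, physical_transitions))   # True: the causal structure mirrors the formal structure
```

On this picture, much hangs on how the physical states are grouped into state-types and on which mapping is chosen; hence Chalmers' caveat that the definition is "still a little vague".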
Thus Chalmers stresses causal structure. More specifically, he gives the example of “computation mirroring the causal organisation of the brain”. Chalmers also states:

“While it may be plausible that static sets of abstract symbols do not have intrinsic semantic properties, it is much less clear that formally specified causal processes cannot support a mind”.
In Chalmers' account, the concrete does appear to be primary in that the "computational descriptions are applied to physical systems" because "they effectively provide a formal description of the systems' causal organisation". In other words, it's not that the computations come first and only then is work done to see how they can be implemented.
So what is it, exactly, that's being described?

According to Chalmers, it's physical/concrete "causal organisation". And when described, it becomes an "abstract causal organisation". (That's if the word "causal" can be used at all in conjunction with the word "abstract".) However, it is abstract in the sense that all peripheral non-causal and non-functional aspects of the physical system are simply factored out. Thus all we have left is an abstract remainder. Nonetheless, it's still a physical/concrete system that provides the input (as it were) and an abstract causal organisation (captured computationally) that effectively becomes the output.
The Computer's Innards
Chalmers firstly states the problem. He writes:

"It is easy to think of a computer as simply an input-output device, with nothing in between except for some formal mathematical manipulations."
However:

"This way of looking at things... leaves out the key fact that there are rich causal dynamics inside a computer, just as there are in the brain."
So what sort of causal dynamics? Chalmers continues:

"Indeed, in an ordinary computer that implements a neuron-by-neuron simulation of my brain, there will be real causation going on between voltages in various circuits, precisely mirroring patterns of causation between the neurons."
In more detail:
"For each neuron, there will be a memory location that represents the neuron, and each of these locations will be physically realised in a voltage at some physical location. It is the causal patterns among these circuits, just as it is the causal patterns among the neurons in the brain, that are responsible for any conscious experience that arises."
And when Chalmers compares silicon chips to neurons, the result is very much like a structuralist position in the philosophy of physics.

Basically, entities/things (in this case, neurons and silicon chips) don't matter. What matters is “patterns of interaction”. These things create “causal patterns”. Thus neurons in the biological brain display certain causal interactions and interrelations. Silicon chips (suitably connected to each other) also display causal interactions and interrelations. So perhaps both these sets of causal interactions and interrelations can be (or may be) the same. That is, they can be structurally the same; though the physical materials which bring them about are clearly different (i.e., one is biological, the other is non-biological).
Though what if the material substrate does matter? If it does, then we'd need to know why it matters. And if it doesn't, then we'd also need to know why.
Biological matter is certainly very complex. Silicon chips (which have just been mentioned) aren't as complex. Remember here that we're matching individual silicon chips with individual neurons: not billions of silicon chips with the billions of neurons of the entire brain. However, neurons, when taken individually, are also highly complicated. Individual silicon chips are much less so. However, all this rests on the assumption that complexity - or even maximal complexity - matters to this issue. Clearly in the thermostat case (as cited by Chalmers himself), complexity isn't a fundamental issue or problem. Simple things exhibit causal structures and causal processes; which, in turn, determine both information and - according to Chalmers - very simple phenomenal experience.
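Chalmers' thermostat can be pictured with an equally minimal sketch (the three states and the temperature thresholds below are my own toy choices, not Chalmers' figures): a system whose entire causal structure is exhausted by a handful of states.

```python
# A minimal sketch of the thermostat case (states and thresholds are my own
# toy choices): a very simple system with a definite causal structure over
# just three states - and so, if Chalmers is right, a correspondingly small
# set of information states.

def thermostat_state(temperature):
    """Map a temperature reading onto one of three causal/information states."""
    if temperature < 18:
        return "heating_on"
    if temperature > 22:
        return "cooling_on"
    return "idle"

for t in (15, 20, 25):
    print(t, "->", thermostat_state(t))
```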
Chalmers' Biocentrism?
Nobel laureate Gerald Edelman once said that mind and consciousness “can only be understood from a biological standpoint, not through physics or computer science or other approaches that ignore the structure of the brain”.
Of course many workers in AI disagree with Edelman's position.
We also have John Searle's position, as expressed in the following:

"If mental operations consist of computational operations on formal symbols, it follows that they have no interesting connection with the brain, and the only connection would be that the brain just happens to be one of the indefinitely many types of machines capable of instantiating the program. This form of dualism is not the traditional Cartesian variety that claims that there are two sorts of substances, but it is Cartesian in the sense that it insists that what is specifically mental about the brain has no intrinsic connection with the actual properties of the brain. This underlying dualism is masked from us by the fact that AI literature contains frequent fulminations against 'dualism'."
Searle is noting the radical disjunction created between the actual physical reality of biological brains and how these philosophers and scientists explain and account for mind, consciousness, cognition and understanding.
Despite all that, Searle doesn't believe that only biological brains can give rise to minds and consciousness. Searle's position is that only brains do give rise to minds and consciousness. He's emphasising an empirical fact; though he's not denying the logical and metaphysical possibility that other things can bring forth minds.
Thus wouldn't many AI philosophers and even workers in AI think that this replication of the causal patterns of the brain defeats the object of AI? After all, in a literal sense, if all we're doing is replicating the brain, then surely there's no Strong AI in the first place. Yes, the perfect replication of the brain (to create an artificial brain) would be a mind-blowing achievement. However, it would still seem to go against the ethos of much AI in that many AI workers want to divorce themselves entirely from the biological brain. So if we simply replicate the biological brain, then we're still slaves to it. Nonetheless, what we would have here is something that's indeed thoroughly artificial, but not a true example of Strong AI.
Of course Chalmers himself is well aware of these kinds of rejoinder. He writes:
"Some may suppose that because my argument relies on duplicating the neural-level organisation of the brain, it establishes only a weak form of strong AI, one that is closely tied to biology. (In discussing what he calls the 'Brain Simulator' reply, Searle expresses surprise that a supporter of AI would give a reply that depends on the detailed simulation of human biology.) This would be to miss the force of the argument, however."
To be honest, I'm not sure if I understand Chalmers' reason for believing that other theorists have missed the force of his argument. He continues:
"The brain simulation program merely serves as the thin end of the wedge. Once we know that one program can give rise to a mind even when implemented Chinese-room style, the force of Searle's in-principle argument is entirely removed: we know that the demon and the paper in a Chinese room can indeed support an independent mind. The floodgates are then opened to a whole range of programs that might be candidates to generate conscious experience."
As I said, I'm not sure if I get Chalmers' point. He seems to be saying that even though one system or programme only "simulates" the biological brain, it's still, nonetheless, a successful simulation. And because it is a successful simulation, then other ways of creating conscious experience must/may be possible. That is, the computational program or system has only simulated the "abstract causal structure" of the brain (to use Chalmers' own terminology): it hasn't replicated biological brains in every respect or detail. And because it has gone beyond biological brains at least in this (limited) respect, then the "floodgates are [] opened to a whole range of programs" which may not be (mere) "Brain Simulators" or replicators.
Despite all the above, the very replication of the brain is problematic to start with. Take the words of Patricia Churchland:

“[I]t may be that if we had a complete cognitive neurobiology we would find that to build a computer with the same capacities as the human brain, we had to use as structural elements things that behaved very like neurons”.
Churchland continues by saying that “the artificial units would have to have both action potentials and graded potentials, and a full repertoire of synaptic modifiability, dendritic growth, and so forth”.
It gets even less promising when Churchland adds:

“[F]or all we know now, to mimic nervous plasticity efficiently, we might have to mimic very closely even certain subcellular structures.”
Readers may also have noted that Churchland was only talking about the biology of neurons, not the biology of the brain as a whole. However, wouldn't the replication of the brain (as a whole) make this whole artificial-mind endeavor even more complex and difficult?
In any case, Churchland sums up this immense problem by saying that “we simply do not know at what level of organisation one can assume that the physical implementation can vary but the capacities will remain the same”.
The Chinese Room
With respect to the points just made, Chalmers also tackles John Searle's Chinese Room argument. Chalmers gives us an example of what he calls “causal organisation” which relates to the Chinese Room.
Firstly, Chalmers states that he "supports the Systems reply", according to which "the entire system understands Chinese even if the homunculus doing the simulating does not".
As many people know, in the Chinese Room there is a “demon” who deals with “slips of paper” with Chinese (or “formal”) symbols on them. He therefore indulges in “symbol manipulation”. Nonetheless, all this also involves an element of causal organisation, which hardly any philosophers seem to mention. Chalmers says that the slips of paper (which contain Chinese symbols) actually “constitute a concrete dynamical system with a causal organisation that corresponds directly to the original brain”.
At first blush, it's difficult to comprehend what sort of causal organisation is being instantiated by the demon dealing with slips of paper with formal/Chinese symbols written on them. Sure, there are causal events/actions going on – but so what? How do they bring about mind or consciousness? Here again the answer seems to be that they do so because they are “mirroring” what Chalmers calls the “original brain”. But even here it's not directly obvious how a demon manipulating formal symbols in a room can mirror the brain in any concrete sense.
So we need Chalmers to help us here. He tells us that the “interesting causal dynamics are those which take place among the pieces of paper”. Why? Because they “correspond to the neurons in the original case”. The demon (who is stressed in other accounts of the Chinese Room scenario) is completely underplayed by Chalmers. What matters to him is that the demon “acts as a kind of causal facilitator”. That is, the demon isn't a homunculus (or the possessor of a mind) within the Chinese Room – or at least not a mind that is relevant to this actual scenario. (So why use a demon in this scenario in the first place?)
Nonetheless, it's still difficult to comprehend how the actions of a demon playing around with slips of paper (with formal/Chinese symbols on them) can mirror the human brain – or anything else for that matter. I suppose this is precisely where abstraction comes in. That is, a computational formalism captures the essence (or causally invariant nature) of what the demon is doing with those slips of paper and formal symbols.
Having said all the above, Chalmers does say that in the Systems reply "there is a symbol for every neuron".

This "mapping", at first sight, seems gross. What can we draw from the fact that there's a "symbol for every neuron"? It sounds odd. But, of course, this is a causal story. Thus:
"[T]he
patterns of interaction between slips of paper bearing those symbols
will mirror patterns of interaction between neurons in the brain."
This must mean that the demon's interpretation of the symbols on these slips of paper instantiates a causal structure which mirrors the "interaction between neurons in the brain". This is clearly a very tangential (or circuitous) mapping (or mirroring). Still, it may be possible... in principle.

That is, because the demon interprets the symbols in such-and-such a way, he also causally acts in such-and-such a way. And those demonic actions "mirror [the] patterns of interaction between neurons in the brain". Thus it doesn't matter whether the demon "understands" the symbols: he still ties each symbol to other symbols on other slips of paper. And these symbols bring about the specific physical actions of the demon. To repeat: all this occurs despite the fact that the demon doesn't "understand what the symbols mean". However, he does know how to tie (or connect) certain symbols to certain other symbols.
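A small sketch may make that rule-following role vivid (my own illustration; the symbols and the rule table are placeholders, not Searle's or Chalmers' own example): the demon ties each symbol to another symbol by following a purely formal rule, with no grasp of what the symbols mean.

```python
# A toy sketch of the demon as "causal facilitator" (my illustration; the
# symbols and rules are placeholders): the demon connects symbols to other
# symbols by shape alone, with no semantic interpretation involved.

rule_table = {        # "whenever you see this symbol, pass on that symbol"
    "請": "回",
    "回": "謝",
    "謝": "請",
}

def demon_step(symbol, rules):
    # The demon's action is caused by the symbol's shape alone.
    return rules[symbol]

slip = "請"
for _ in range(4):
    slip = demon_step(slip, rule_table)
    print(slip)   # the sequence of slips the demon passes on
```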
And, despite all that detail, one may still wonder how this brings about consciousness or mind. That is, how does such circuitous (or tangential) mirroring (or mapping) bring about consciousness or mind? After all, any x can be mapped to any y. Indeed any x & y can be mapped to any other x & y. And so on. Yet Chalmers states that "[i]t is precisely in virtue of this causal organization that the system possesses its mental properties".
Don't we have a loose kind of self-reference here? That is, a computational formalism captures the Chinese Room scenario – yet that scenario itself also includes formal symbols on slips of paper. So we have a symbolic formalism capturing the actions of a demon who is himself making sense (if without a semantics) of abstract symbols.
Another way of putting this is to ask what has happened to the formal symbols written on the slips of paper. Chalmers stresses the causal role of moving the slips of paper around. But what of the formal symbols (on those slips of paper) themselves? What role (causal or otherwise) are they playing? His position seems to be that it's all about the demon's physical/causal actions in response to the formal symbols he reads or interprets.
In any case, Chalmers stresses that it's not the demon who matters in the Chinese Room – or, at least, the demon is not the primary factor. What matters is the “causal dynamics in the [Chinese] room”. Those causal dynamics “precisely reflect the causal dynamics in the skull”. And, because of this, “it no longer seems so implausible to suppose that the system gives rise to experience”. Having said that, Chalmers does say that "[e]ventually we arrive at a system where a single demon is responsible for maintaining the causal organization". So despite the demon being in charge (as it were) of causal events/actions in the Chinese Room, he still doesn't understand the formal symbols he's manipulating.
*******************************************
Note:
1) Thus a detour towards John Searle may be helpful here.
Searle actually accuses those who accuse him of being a “dualist” of being, well, dualists. His basic position on this is that if computationalists or functionalists, for example, dispute the physical and causal biology of brains and exclusively focus on syntax, computations and functions (i.e., the form/role/structure rather than the physical embodiment/implementation), then that will surely lead to a kind of dualism. In other words, there's a radical disjunction created here between the actual physical and causal reality of brains and how these philosophers explain and account for intentionality, mind and consciousness.
Thus Searle's basic position on this is:

i) If Strong AI proponents, computationalists or functionalists, etc. ignore or play down the physical biology of brains and, instead, focus exclusively on syntax, computations and functions (the form/role rather than the physical embodiment/implementation),

ii) then that will surely lead to some kind of dualism in which non-physical abstractions basically play the role of Descartes' non-physical and “non-extended” mind.