According to Paul Smolensky, there are two main rivals for the basis of cognition: perception and logical inference.
One can immediately ask whether something as broad as intelligence or cognition admits of such a simple categorisation. Indeed, at a prima facie level, perception doesn't seem to be cognitive at all. It may be a basis for cognition; but is it actually cognition itself?
Logical inference, on the other hand, is clearly cognitive in nature. Yet here again, logical inference may utilise perception, just as perception itself can be cognitively enriched.
Perception
Perception is deemed by Smolensky to be “subsymbolic”. Thus if it's subsymbolic, mustn't it also be sub-logical or even sub-cognitive? That, though, would perhaps be to beg the question: why assume that all cognition, or at least all intelligence, must somehow be symbolic in nature? Then again, x's being subsymbolic isn't the same as its being non-symbolic... or perhaps it is.
In any case, that which Smolensky deems subsymbolic is still about the “categorisation of other perceptual processes”. So here it can be seen that perception isn't viewed as being basic. It isn't being said here (as stated earlier) that perception is - in and of itself - a matter of logical inference or cognition. What we have here is the “categorisation” of “perceptual processes”. So there's categorisation which seems to be above and beyond perceptions themselves.
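To give a rough feel for what “subsymbolic categorisation” might amount to in practice, here is a minimal sketch in Python (my own illustrative construction, with placeholder weights; nothing in it is taken from Smolensky's paper). A percept is just a numerical pattern of micro-features, and “categorising” it is nothing more than a weighted mapping onto category units - no explicit symbols are manipulated anywhere.

import numpy as np

# A "percept" here is just a vector of micro-feature activations: subsymbolic,
# since no single unit stands for a concept - the pattern as a whole does.
rng = np.random.default_rng(0)
percept = rng.uniform(0.0, 1.0, size=8)        # e.g. 8 low-level visual features

# Categorisation is a weighted mapping from the feature pattern onto three
# category units, followed by a soft competition. The weights are random
# placeholders; in a real connectionist model they would be learned.
weights = rng.normal(0.0, 1.0, size=(3, 8))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

category_activations = softmax(weights @ percept)
print("category activations:", np.round(category_activations, 3))
print("winning category index:", int(np.argmax(category_activations)))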
Perception & Evolution
There are various considerations which favour seeing things primarily in terms of perception rather than in terms of logical inference.
One is that logical inference/reasoning came after the “categorisation of perceptual processes” in our evolutionary history. Or, as Smolensky puts it:
“An evolutionary argument says that the hard side of the cognitive paradox evolved later, on top of the soft side...”
That must mean that there were cognitive processes which predated the higher processes of logical inference/reasoning and indeed of language use. Surely this must have been the case: Homo sapiens (or the species which evolved into Homo sapiens) couldn't have been language users or logical reasoners from the very beginning. Indeed all the evidence says that this wasn't - and couldn't have been - the case.
Basically, both language and logical inference must have been built upon such things as the categorisation of perceptual processes (as well as upon much else). Language and logical inference didn't occur ex nihilo.
Smolensky's further point arises from all that's just been said.
If this evolutionary account is correct, then it's not a surprise that “it is much easier to see how the kind of soft systems that connectionist models represent could be implemented in the nervous system”.
After all, isn't it the case that our nervous system today is basically as it was before we acquired language and the skill of logical reasoning? Even though cognition and mentality have changed, our biological hardware hasn't. So if our biological hardware predates symbolic processing, then perhaps our models of cognition shouldn't take symbolic processing as basic either. Symbolisation and computation may be a part of our cognition; though the biological nervous system that subserves all this was designed (in the evolutionary sense!) for other things.
Connectoplasm & Symbols
Some people believe that connectionists reject mental symbols and everything that goes with them. And, by virtue of that, they also believe that connectionists reject computation - at least as the primary basis of cognition.
In Smolensky's case, however, this isn't so. Indeed he talks about “building symbols” out of “connectoplasm”. In his view, symbols arising from connectoplasm is a better idea than symbols arising from... well, he doesn't really say. From the Language of Thought or something similarly symbol-based?
In any event, Smolensky writes:
“With any luck we will even have an explanation how the brain builds symbolic computation. But even if we do not get that directly, it will be the first theory of how to get symbols out of anything that remotely resembles the brain...”
It's clear here that Smolensky is constructing a theory (or model) that's biologically feasible, unlike many of the alternatives. Of course it will need to be said exactly how and why it's biologically feasible. Or, in Smolensky's words, we'd need to know “how the brain builds symbolic computation”. In fact we'd need to know exactly what he means by the words “the brain builds symbolic computation”.
In any case, the point to stress here is that Smolensky does believe that the brain builds symbols. So, at the very least, symbols are part of Smolensky's connectionism.
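What might “building symbols out of connectoplasm” look like concretely? Smolensky's best-known concrete proposal - developed in other work, not in the paper quoted here - is the tensor product representation, in which a symbolic structure is encoded as a sum of filler-role bindings over ordinary numerical vectors. The Python sketch below is a simplified, illustrative version of that idea; the particular vectors and names are my own placeholders.

import numpy as np

# The symbolic structure loves(John, Mary) is encoded as one distributed array
# by binding each filler (John, Mary) to its role (agent, patient) with an
# outer product and summing the bindings. The vectors are arbitrary placeholders.
rng = np.random.default_rng(1)

fillers = {"John": rng.normal(size=4), "Mary": rng.normal(size=4)}
# Orthonormal role vectors make exact unbinding possible.
roles = {"agent": np.array([1.0, 0.0]), "patient": np.array([0.0, 1.0])}

# Bind and superimpose: a single numerical object now carries both bindings.
structure = (np.outer(fillers["John"], roles["agent"])
             + np.outer(fillers["Mary"], roles["patient"]))

# Unbind: project the structure onto a role vector to recover its filler.
recovered_agent = structure @ roles["agent"]
print("recovered agent matches John:", np.allclose(recovered_agent, fillers["John"]))

The point of the illustration is only that symbol-like constituents can be recovered from a purely numerical, distributed object - which is one way of cashing out the claim that symbols can be built rather than taken as primitive.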
Despite that, elsewhere in the same paper Smolensky de-emphasises the importance of symbols. Basically he wants “formal accounts” couched in “continuous mathematics” rather than in the “discrete mathematics” of much “traditional symbolic formalism”. In more detail, Smolensky writes:
“... my characterisation of the goal of connectionist modelling is to develop formal models of cognitive processes that are based on the mathematics of dynamical systems continuously evolving in time: complex systems of numerical variables governed by differential equations.”
There's no mention here of symbols or even of quasi-symbols. In fact this account sounds both mechanical and biological in nature, strange as that may seem. Though why shouldn't the biological also be seen as mechanical, or at least as dynamical? And if we're talking of the mechanical or the dynamical, then it stands to reason that mathematics, “numerical variables” and “differential equations” - rather than symbols - will be primary. Indeed, in even simpler terms, this seems to be more about the measurement of dynamical systems than about the symbols within a symbol system (i.e. the mind-brain).
It may also appear that causation is of prime importance. That is, we have numerical measurements of the interplay between the environment (in terms of input) and a dynamical system, which will result in certain internal states and then certain kinds of output.
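To make the contrast vivid, here is a minimal Python sketch of the kind of formal object being described: a handful of numerical state variables governed by a differential equation, driven by an environmental input and yielding an output. It's my own illustrative construction (all weights and constants are arbitrary placeholders), not a model from Smolensky's paper, but it shows continuous dynamics doing the work rather than discrete symbol manipulation.

import numpy as np

# A toy continuous-time network: numerical state variables x governed by a
# differential equation, driven by an external input and read out as an output.
rng = np.random.default_rng(2)

n = 5                                  # number of internal state variables
W = rng.normal(0.0, 0.5, size=(n, n))  # recurrent connection weights
W_in = rng.normal(0.0, 0.5, size=n)    # input weights (environment -> state)
W_out = rng.normal(0.0, 0.5, size=n)   # readout weights (state -> output)
tau = 0.1                              # time constant
dt = 0.01                              # Euler integration step

x = np.zeros(n)                        # internal state
for step in range(500):
    t = step * dt
    u = np.sin(2.0 * np.pi * t)        # a simple environmental input signal
    # dx/dt = (-x + W·tanh(x) + W_in·u) / tau, integrated with Euler's method
    dxdt = (-x + W @ np.tanh(x) + W_in * u) / tau
    x = x + dt * dxdt

output = float(W_out @ np.tanh(x))     # the system's output at the final time
print("final internal state:", np.round(x, 3))
print("output:", round(output, 3))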
Reference
Smolensky, Paul (1988) 'The Constituent Structure of Connectionist Mental States: A Reply to Fodor and Pylyshyn'.