Thoughts, Computations, Rules, Algorithms
Such cognitivists believe that the brain is also a machine – a computing machine. Computationalists (computationalism is a branch of cognitivism) claim that all thought is computation. But what does that mean? Are the words 'thought' and 'computation' virtual (or literal) synonyms?
In computationalism, it seems that they are: almost all conscious processes in the brain are deemed to be thoughts; and thus also deemed to be computations. That not only includes the thought that 1 plus 1 equals 2 or that snow is white; it also includes the rotation of a mental image in the mind, imagining the smell of a rose, and so on. Then again, if rotating a mental image is classed as a thought, then why can't it also be classed as a computation? Especially since, in computationalism, the two appear to be synonyms.
It
all now depends on what we mean by the word 'computation'.
For
a start, we can make the following claims:
i)
All of a computer's processes are computations.
ii)
Not all conscious human mental processes are
computations.
Similarly,
we can say:
iii)
Many human mental processes are thoughts.
iv)
No computer computations are thoughts (i.e. because
thoughts have semantic content, intentionality, reference, etc.).
Rules Rule, Okay?
Is
everything that happens consciously in the mind a computation? Or,
perhaps more tellingly, is it all rule-governed? Jerry
Fodor doesn't think so. He says that
“some
of the most striking things that people do – ‘creative’ things
like writing poems, discovering laws, or, generally, having good
ideas – don’t feel like species of rule-governed processes”.
The way Fodor puts his position doesn't really help matters. Sure, such
things may not “feel like species of rule-governed processes”.
However, that doesn't mean that they aren't rule-governed processes.
Far from it. This is the same phenomenological approach that's
applied to free will. Here again most people “feel like” they
have free will. Though, on close inspection, that claim (about what
things feel like) amounts to almost nothing.
On
Fodor's behalf it can now be asked what something's being
rule-governed could possibly mean in the varied contexts of “writing
poems, discovering laws, or, generally, having good ideas”. Are
these disparate things really united by the following of rules (even if only at the non-conscious level)? Well, that would depend on what's meant by the words 'rule-governed'.
We
can take this somewhat further.
If
writing poems, discovering laws and having good ideas are
rule-governed, then these creative processes must be following some kind of algorithm. And, following on from that, they must be
computable. What's more, this could mean that these processes are
rote
in some (or sometimes all) respects. Not in the sense of conscious
acts of rote learning (or memory); but in the respect that the brain
(or physiological system) has 'acquired' certain
modules/faculties/etc. - or that such things are innate.
Haven't we simply moved from one technical term (i.e.,
'rule-governed') to two more – 'algorithms' and 'computable'? After
all, the words 'algorithms' and 'computations' can both be cashed out in terms of following rules or being rule-governed.
So
let's quote a definition of the word 'algorithm' as it's specifically
used in reference to computers:
“An
algorithm is basically an instance of logic written in software by
software developers to be effective for the intended 'target'
computer(s) to produce output from given input (perhaps null).”
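To make that definition concrete, here's a minimal sketch (in Python, with an example of my own choosing rather than anything from the definition's source): a small "instance of logic" which, given an input, follows fixed rules to produce an output.

```python
# A minimal "instance of logic": Euclid's algorithm for the greatest
# common divisor. Given an input (two integers), one fixed rule,
# applied over and over, produces an output.

def gcd(a: int, b: int) -> int:
    """Return the greatest common divisor of a and b."""
    while b != 0:
        a, b = b, a % b  # the single rule governing every step
    return a

print(gcd(48, 36))  # -> 12
```

Every step here is fixed in advance by a rule; nothing is left open. That's the sense of "rule-governed" at issue.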
The
mention of 'logic' (along with the very mechanical way of describing
both what an algorithm does and how it comes to be) seems to make
Fodor's earlier claim a little more convincing. Can we say that
“an instance of logic” (or instances of logic) is
required to “write poems, discover laws, or, generally, have good
ideas”? Yes, we can! It's certainly the case that - in a limited sense
- instances of logic/algorithms will be involved in these processes.
The thing is, it surely can't be said that it's all about logic or
algorithms.
We can now say that logic or algorithms can be applied to some things (or to all things!) which aren't themselves logical or algorithmic.
Bad Computers
Kurt
Gödel is often brought into the picture in order to
show us what humans have and what computers (ostensibly) don't have.
For
example, there's much talk about human brains having a "rule-free
flexibility" and “unlimited mathematical abilities”. Thus there's
also talk about “intuition” and “direct insight”. Of course
these abilities can be seen to run free (conceptually speaking) of
other things that computers don't have, such as qualia and emotions. Then again, some would say that they all form a Gödelian package.
In
concrete terms, there's the argument that humans can solve
computational problems which computers can't solve. (Note that this
hasn't got anything directly to do with computers not being able to
write poems or have an orgasm.) This Gödelian claim that “no such
limits apply to the human intellect” is, as Alan Turing argued in
1950,
often “merely stated, without any sort of proof”.
In
any case, various consequences are put forward as being a result of
computers not having our (as it were) Gödel faculty. They include
the fact that most computers crash for trivial reasons (e.g., because
of faulty software or bad input). This is said to be due to the
rule-fixated nature of computers, unlike human beings, who (as stated) have a Gödel faculty.
All
this is seen to be a direct result of computers needing a rule or
algorithm for literally everything they do. More concretely,
computers show no intuition or insight; and, in most cases, they don't
learn from their mistakes or learn not to make mistakes. (Though this
isn't true of all computers or even all aspects of each
computer.)
Here
again philosophers stress human
uniqueness. Hubert Dreyfus,
for example, argues that there are many examples of mental activity
and behaviour that aren't a question of following rules. As Dreyfus
himself puts it, computers lack the “immediate intuitive situational
response that is characteristic of [human] expertise”.
Consequently, persons “must depend almost entirely on intuition and hardly at all on analysis and comparison of alternatives”. Basically, some people argue that these Gödelian things can't be programmed into a computer.
The
science writer John Horgan also tells us what computers are bad at.
He writes:
“Computers may excel at precisely defined tasks such as mathematics and chess... but they still perform abysmally when confronted with the kind of problems – recognising a face or voice or walking down a crowded pavement – that humans solve effortlessly.” (1996)
To
state the obvious, the above are all programming problems. And they're programming problems because the variables the computer (as well as a person) needs to take into account when it comes to “recognising a face or voice or walking down a crowded pavement” are huge (or indefinite) in number. However, persons, it can be
argued, don't (really?) need to be programmed in these cases: they
react situationally. That is, persons can act upon - and react
to - novel situations; even though (it can be said) these
situations aren't entirely novel.
The
philosopher George Rey also makes the case that computers don't have
a full logical package. He writes:
“Intelligence
requires doing well under non-ideal
conditions as well... But performing well under varied conditions is
precisely what we know existing computers tend not to do.
Decreasingly ideal cases require increasingly clever inferences to
the best explanation in order for judgements to come out true; and
characterising such inferences is one of the central problems
confronting artificial intelligence...” (1986)
Here again we see that whenever a computer doesn't have a rule or algorithm to follow, it doesn't know what to do. Of
course you can create rules which tell a computer what to do when there are
no existing rules; though that would depend on the nature of
these meta-rules as well as upon the new conditions the computer is
facing.
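As a rough sketch of what such meta-rules might look like (in Python; the rules and the fallback here are invented purely for illustration), consider a programme that consults its ordinary rules and, when none of them matches, falls back on a rule about what to do in the absence of a rule:

```python
# A toy rule-following system with one "meta-rule": what to do when
# no ordinary rule matches. (The rules are invented for illustration.)

rules = {
    "greeting": lambda msg: "Hello!",
    "question": lambda msg: "Let me check.",
}

def respond(kind: str, msg: str) -> str:
    handler = rules.get(kind)
    if handler is not None:
        return handler(msg)  # an ordinary rule applies
    # Meta-rule: no rule matched, so ask for clarification.
    return "I have no rule for that; please rephrase."

print(respond("greeting", "hi"))     # ordinary rule fires
print(respond("complaint", "hmph"))  # the meta-rule kicks in
```

Of course, the meta-rule is itself just another rule laid down in advance; whether this counts as knowing what to do in a novel situation depends, as said above, on the nature of the meta-rules and the conditions faced.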
To
sum up in the language of logic: computers aren't very good at
“inferences to the best explanation” when they find themselves in
“non-ideal conditions”.
Good Computers
Nonetheless,
all sorts of new factors have been added to computers to simulate
intuition or Gödelian intelligence. Such things as quantum computers
based on “entangled qubits”,
the introduction of random factors (e.g.,
annealing approaches) and hardware neural nets have
been added into the computer-pot.
In
any case, we already know about computer randomness. Even von Neumann
machines can modify their own programmes (i.e., they can learn). That
means that some of their responses (or output) are unpredictable. All
this is achieved, in general, by equipping a computer with certain
random elements which the computer can work on to produce outputs
which are unexpected (i.e., which have moved beyond the programmed
data). Indeed all this was theorised about by Turing as long ago as
1938.
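Here's a toy illustration of that general idea (not Turing's own construction; the states and outputs are invented): a machine whose random element occasionally rewrites entries in its own rule table, so that its later outputs aren't fixed by the initial programme alone.

```python
import random

# A toy self-modifying machine (illustrative only): its rule table
# maps a state to an output, and a random element occasionally
# rewrites one of its own rules.

rule_table = {0: "A", 1: "B", 2: "C"}

def step(state: int) -> str:
    if random.random() < 0.3:                     # the random element
        rule_table[state] = random.choice("XYZ")  # rewrite its own rule
    return rule_table[state]

# The same input sequence can yield different outputs on different
# runs: the behaviour isn't fixed by the programmer alone.
print([step(s) for s in (0, 1, 2, 0, 1, 2)])
```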
Even
with early Turing machines there was a requirement that such machines be able to follow their own rules or show what some people (at the
time) called “initiative”. It was said that a programmer could
engineer an element of randomness into the computer (or into the
programme). That was what Alan Turing himself tried to do with
his “Manchester
computer” (1948-50). The idea was that such randomness (as it were) would bring about “intuition” (or initiative) in the Turing machine – or even free will!
So
when (not if):
i)
A random element is introduced into a Turing machine (or a computer),
ii)
and that computer manages to follow rules not laid down by the
programmer,
iii)
and as a result of that it solves its own problems,
iv)
then that computer has learned something of its own accord; it even has “intuition”. (A toy sketch of this loop follows below.)
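Here's the promised sketch (in Python; the target rule and the search ranges are my own invented examples). A random element plus a keep-what-works test lets the machine find a rule the programmer never wrote down:

```python
import random

# A minimal sketch of the loop in (i)-(iv): random variation plus a
# keep-what-works test lets the machine discover the coefficients of
# y = a*x + b without being told them.

examples = [(1, 5), (2, 8), (3, 11)]  # secretly y = 3*x + 2

def error(a: int, b: int) -> int:
    """Total mismatch between a candidate rule and the examples."""
    return sum(abs(a * x + b - y) for x, y in examples)

best = (random.randint(-10, 10), random.randint(-10, 10))
while error(*best) > 0:
    candidate = (random.randint(-10, 10), random.randint(-10, 10))
    if error(*candidate) < error(*best):  # keep rules that work better
        best = candidate

print("learned rule: y = %d*x + %d" % best)  # -> y = 3*x + 2
```

Nothing told the machine what the rule was; it found it by random variation and selection. Whether that deserves the word “intuition” is, of course, exactly what's at issue.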
Thus
there's no “appearance” about it! In this limited respect, the
computer is free from its programmer. Or it has a “will” which is
independent of its programmers. This isn't to say that it has either
a mind or a (free) will in the human sense; though the independence
(or freedom) is certainly real.
I
think it would also be correct to say that a Turing machine “could
have done something else” with the same input. That is, the same
random change (mentioned by Turing) to the Turing machine can have
different results in terms of what it produces. (E.g., a different
calculation or even a different action – though a calculation is an
action of sorts.)
More
specifically in terms of today's computer programmes, there's what is
called “machine learning” in which computer programmes have the
ability to “self-modify”. Such programmes employ techniques including ensemble learning, current-best-hypothesis
learning, explanation-based learning, decision-tree learning,
reinforcement learning, Bayesian statistical learning, instance-based
learning and so on.
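To give one of those listed techniques a concrete (if toy) form, here's instance-based learning in miniature, with invented data. The "self-modification" is simply the growth of the stored instance set:

```python
# Instance-based learning in miniature: the "programme" is just stored
# examples, and new inputs are classified by the nearest stored
# instance. (The data are invented for illustration.)

instances = [(1.0, "small"), (2.0, "small"), (9.0, "large"), (10.0, "large")]

def classify(x: float) -> str:
    """Reuse the label of the stored instance nearest to x."""
    nearest = min(instances, key=lambda inst: abs(inst[0] - x))
    return nearest[1]

print(classify(1.5))  # -> "small"
print(classify(8.0))  # -> "large"

# "Self-modification" here is trivial: learning is just adding a new
# instance to the stored set, which changes all future behaviour.
instances.append((5.0, "medium"))
print(classify(5.2))  # -> "medium"
```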
Despite
all that, it's still said (by some) that none of these things (not
even collectively) produce a Gödelian mind. That's because all these
additions can still be reduced to Turing machines (along with their
limitations). Sure, they make computers much better; though it's
still said that they don't make them Gödelian.
References
Dreyfus, Hubert. (1992) What Computers Still Can't Do
Fodor, Jerry. (1975) The Language of Thought
Horgan, John. (1996) The End of Science
Rey, George. (1986) 'A Question About Consciousness'
Turing, Alan. (1939) 'Systems of Logic Based on Ordinals'
-- (1950) 'Computing Machinery and Intelligence', Mind LIX:433-460.