I won't define the words “free will” because, as someone once put
it, “that is an argument that never ends”. In other words, it
would take up this entire piece... and more.
All I can do is refrain from offering a definition of my own and simply question what other people say on this subject. In that respect, definitions don't matter: even without a definition, people can
still make mistakes in what they say about free will, chaos,
indeterminism, choice, etc.
Though, of course, if there were an agreed-upon definition of the words “free
will”, many problems could - or would - be solved. The problem is
that there isn't one and perhaps there couldn't be one. After
all, the words “free will” aren't like the word “cat” or
“house”: they're closer to a word like “truth” or even “wrong”.
(Of course some philosophers have also made a song and a dance about words
like “cat” or “house”!)
There are many newfangled defences of free will.
For example, it's said that complexity can explain free will in certain respects. Chaos and quantum non-determinism have also come into the debate in recent decades.
The argument is that complexity is required for a developed brain.
And large brains are required for free will. Such complexity gives a
“system” (or a person) the ability to make choices.
Does all that guarantee free will?
I can't see how complexity in and of itself can give us free will. It might seem to in the sense that if we can't decipher all the causal antecedents of our actions, then perhaps we must be free. On the
other hand, it may not matter one bit whether or not we know all the
causal antecedents of our actions – they may still determine our
actions. After all, couldn't a simple being – or even a machine –
perform an action?
I accept that brains are complex. Everyone does. I just don't know how
complexity alone gives us free will.
And I don't see how you can squeeze free will out of chaos or
non-determinism either.
You Could Have Done Otherwise
How does the possibility that X (a person) “could have done otherwise” give X free will? As with complexity, how does free choice spring from such
non-determinism (if that's what it is)?
How does the possibility - or even the actuality - of making a different promise, for example, free us from determinism? Choice X is fully causally determined. And the choice not to do X is also fully causally determined. The reason why X, rather than not-X, is chosen will be a fully causal reason (and vice versa).
You can now say that if you could have made the decision not to do X, rather than to do X, that's all that matters. In other words, that choice itself determines the freedom of the will, regardless of the fact that both X and not-X were fully causally determined. (That's the
“common-sense position”.)
Predicting Oneself
What about self-prediction? If you can't predict your own actions, then
aren't your actions chaotic and uncontrolled?
Can an agent's own measurements of his body or brain/mind states help matters?
Isn't it impossible for an agent to measure everything that leads to one of his actions or decisions? That would seem to point away from free will, though other people appear to argue that it's a position in support of free will. What such people argue is that this person's (or “system's”) inability to measure small “deviations in the initial conditions” works towards his free will, rather than against it.
What about other people failing to predict our actions? Does that
give us free will?
Why would that matter? I couldn't predict the actions of a machine or robot, though that wouldn't give it free will. And even computer programmers sometimes - or many times - can't predict the calculations or actions of their computers.
Even if someone can't predict an adult human being, that doesn't guarantee free will. I may not be able to predict a robot's or computer's actions. In fact, I won't be able to unless I've programmed it myself; and even then (see the Turing machine section) prediction can't be guaranteed.
Quantum Stuff
Human beings can be seen as deterministic systems whose random inputs (or
the things which happen indeterministically) don't have a noticeable
impact on the system – or on how they behave at the “macro scale”.
In other words, when you interact with a human being, his quantum
nature may appear irrelevant to his general behaviour (at least in
observational terms).
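Here's a toy illustration of that point - a minimal sketch of my own (in Python), and only an analogy rather than a model of the brain. When a “decision” depends on the aggregate of many inputs, tiny random perturbations to each input (standing in for quantum-scale noise) almost never change the macro-level outcome; the system looks deterministic at the scale we interact with.

```python
import random

def macro_decision(reasons, noise_scale=1e-9):
    # Add a tiny random perturbation (the "quantum" noise) to every input,
    # then decide on the basis of the aggregate.
    noisy_total = sum(r + random.uniform(-noise_scale, noise_scale) for r in reasons)
    return "act" if noisy_total > 0 else "refrain"

# Hypothetical weighted reasons for and against acting.
reasons = [0.3, -0.1, 0.5, 0.2]

# The micro-level noise is real but swamped: this prints "act" on
# effectively every run, so the behaviour looks deterministic.
print(macro_decision(reasons))
```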
What would a “randomised input into a deterministic machine” give us
anyway? Not free will. What would it give a machine?
Slight deviations in a system don't seem to guarantee that the system is chaotic or indeterministic either. These deviations happen to the system/person. The system can still be deterministic. However, if you factor in quantum indeterminacy, it's still an open question whether such a thing guarantees free will. I think the argument can go in
the opposite direction. Either that, or quantum phenomena are largely
irrelevant at the level of the cognitive (or even sub-cognitive)
systems of the brain (though I may be wrong on that). What I mean is
that even though the brain is a “quantum system” (everything is), that may have no bearing on the issue of free will (as such).
Turing Machines
The following may help with the earlier talk about introducing something
random into a system.
I'm not saying here that a Turing machine has free will in anything like
the sense a human being may have free will. It's a parallel case
cited to try and help explain the nature of randomness as it's used
in the free will debate. I don't think Alan Turing, for one, saw what
he said - or did - in terms of free will. (He used the word
“intuition”; though, admittedly, that can be said to amount to
the same - or to a similar – thing.)
In other words, this isn't a general point about Turing machines and
their ability to appear as persons or replicate human behaviour. It's not
a point about the Turing Test. It's a point about Turing
machines.
Think here of early Turing machines and the requirement for them to be able to follow their own rules or show what some people called “initiative”. It was said that a programmer could engineer an element of randomness into the computer (or into the programme). That was what Alan Turing tried to do with his “Manchester computer”. The thought seems to have been that such randomness (as it were) would bring about “intuition” (or initiative) in the Turing machine - or even free will!
In any case, when (not if) a random element is introduced into a Turing machine (a computer), and that computer then manages to follow rules not laid down by the programmer (and, as a result, solves its own problems), there's no “appearance” about it. In this limited respect, it is free from its programmer. Or it has a “will” which is independent of the humans who created it, as well as of its programmers. This isn't to say that it has either a mind or a (free) will in the human sense; though the independence (or even freedom) is certainly real.
I think it would also be correct to say that a Turing machine “could have done something else” with the same input. That is, the same random change (mentioned by Turing) to the Turing machine can have different results in terms of what it produces (say, a different calculation or even a different action - though a calculation is an action of sorts)... What am I talking about? These things already happen with computers (or their programmes).
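To make that a little more concrete, here's a minimal sketch (again in Python, and entirely my own illustration - not anything Turing wrote or ran). A purely deterministic routine returns the same output for the same input every time; a routine with a random element engineered into it can return different outputs from that very same input, so that even its programmer can't say in advance which result it will produce.

```python
import random

def deterministic_choice(options):
    # No random element: the same input yields the same output, every run.
    return options[0]

def randomised_choice(options):
    # A random element has been engineered in: the same input can yield
    # different outputs on different runs.
    return random.choice(options)

options = ["keep the promise", "break the promise"]

print(deterministic_choice(options))  # always "keep the promise"
print(randomised_choice(options))     # either one; unpredictable in advance
```

Whether that limited kind of “could have done otherwise” amounts to anything like free will is, of course, exactly the question at issue.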
Politics & Morality
Some people believe [free will] to be a “political concept”. Then again, perhaps it's always been a political - or at least a moral - concept. And it might well be (in some cases at least) that issues in metaphysics and the philosophy of mind were needed to provide the groundwork (as it were) for the political or moral positions on free will.
Over large time-periods, philosophy has an influence on these issues, though the minutiae rarely do. This discussion, for example, has been more or less beside the point when it comes to the day-to-day political and moral issues of free will.