[The words "experience" and "consciousness" are used interchangeably in this piece, even though they aren't synonyms.]
**************************
David Chalmers says that “information is everywhere”. Is that really the case?
As
some linguists (or pedants?) have said: “If everyone is brave,
then no one is brave.” The point being made here is that a term
only makes sense if it can be distinguished from non-examples.
However, my example is an adjective (“brave”) applied to human
persons. The word “information” is a noun.
So saying “information
is everywhere” is roughly equivalent to saying “dust is everywhere”.
Information
is surely a characteristic of things, events, conditions, etc., rather than a thing in itself. However, none of this may matter. A
prima
facie
problem with the omnipresence of information may fade away on seeing
what David Chalmers - and other “information
theorists”
- have to say about information.
In
addition: if, as Chalmers argues, we should take “experience itself as a fundamental feature of the world, alongside mass, charge, and space-time”, then,
by definition, experience can't be exclusive to humans or animals
generally. Something that's a “fundamental feature of the world”
must literally be everywhere; just as Chalmers says about “mass,
charge, and space-time”.
This means that Chalmers' linkage of experience to information is thoroughly
non-biological.
Chalmers
also links experience - therefore information - to thermostats. A
thermostat isn't alive; yet it can still be seen as a (to use Chalmers'
words) “maximally-simple” information system.
Scott
Aaronson (referring to Integrated Information Theory), for one, states
one problem
with the experience-is-everywhere idea in the following passage:
“[IIT]
unavoidably predicts vast amounts of consciousness in physical
systems that no sane person would regard as particularly ‘conscious’
at all: indeed, systems that do nothing but apply a low-density
parity-check code, or other simple transformations of their input
data. Moreover, IIT predicts not merely that these systems are
‘slightly’ conscious (which would be fine), but that they can be
unboundedly more conscious than humans are.”
Here
again it probably needs to be stated that if experience/consciousness
= information (or that information – sometimes? - equals
experience/consciousness), then experience/consciousness must indeed
be everywhere.
However,
there's a remaining question: Is it the case that information actually is experience, or is it that information brings about experience?
If it's the latter, then we'll simply repeat all the problems we have with both the emergence of one thing from another thing and the reduction of one thing to another thing.
There's also a hint of this problem when Chalmers asks us “[w]hy should this
sort of processing be responsible for experience?” Here Chalmers
uses the word “responsible” (as in “responsible for
experience”). In other words, firstly we have processing: then we
have experience. So it seems - in this context at least - that processing isn't the same thing as
experience. (It is responsible for experience.) And if
processing is responsible for experience, so is information. Thus
information and experience can't be the same thing.
This
may simply be, however, a grammatical fact in that even if
information is experience, it can still be grammatically
correct to say that “information is responsible for experience”.
What is Information?
The
word 'information' has massively different uses, some of which tend
to differ strongly from the ones we use in everyday life. Indeed we
can use the words of Claude E. Shannon to back this up. He
wrote:
"It
is hardly to be expected that a single concept of information would
satisfactorily account for the numerous possible applications of this
general field."
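To make that contrast concrete, here is one of Shannon's own technical senses of 'information': the entropy of a source, a purely statistical quantity that says nothing about what (if anything) the symbols mean to anyone. (This is the standard textbook formula, offered here only as an illustration.)

```latex
% Shannon entropy of a discrete source X, in bits:
% a measure of statistical uncertainty, indifferent to meaning.
H(X) = -\sum_{x} p(x) \log_2 p(x)
```

A quantity like that can be computed for coin tosses, DNA sequences or thermostat readings without anyone asking what the outcomes mean; which is one reason the technical uses sit so far from the everyday ones.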
The
most important point to realise is that minds (or observers) are
usually thought to be required to make information information.
However, information is also said to exist without minds/observers. Some philosophers and physicists argue that information existed before human
minds, and that it will also exist after human minds disappear from the universe. This, of course, raises
lots of philosophical and semantic questions.
It
may help to compare information with knowledge. The latter requires a person, mind or observer. The former (as just stated) may not.
If
we move away from David Chalmers, we can cite Giulio Tononi as
another example of someone who believes that consciousness/experience
simply is information. Thus, if that's an identity statement, then we
can invert it and say that
information is (=) consciousness.
Consciousness
doesn't equal just any kind of information; though any kind of
information (embodied in a system) may be conscious (at least to some
extent).
Indeed,
according to Tononi, the mathematical measure of that information
(in an informational system) is φ (phi).
Not only are systems more than their parts: those systems also have various degrees of "informational integration". The higher the informational
integration, the more likely that informational system will be
conscious. Or, alternatively, the higher the degree of integration,
the higher the degree of consciousness.
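For illustration only - this is emphatically not Tononi's actual φ, which is far more involved - here's a toy sketch of the underlying idea that "integration" can be given a number. It computes the mutual information between two binary units: zero when the units are informationally independent, and larger the more tightly their states are bound together. The joint distribution is made up for the example.

```python
# A crude stand-in for "informational integration" (not Tononi's phi):
# mutual information between two binary units A and B.
from math import log2

# Hypothetical joint distribution over the states of the two units.
joint = {
    (0, 0): 0.4,
    (0, 1): 0.1,
    (1, 0): 0.1,
    (1, 1): 0.4,
}

def marginal(joint, axis):
    """Marginal distribution of one unit (axis 0 = A, axis 1 = B)."""
    m = {}
    for state, p in joint.items():
        m[state[axis]] = m.get(state[axis], 0.0) + p
    return m

def mutual_information(joint):
    """I(A;B) in bits: zero if the units are independent; larger the more
    the state of one unit tells you about the state of the other."""
    pa, pb = marginal(joint, 0), marginal(joint, 1)
    return sum(p * log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

print(round(mutual_information(joint), 3))  # ~0.278 bits for this toy case
```

Nothing in that little measure is conscious, of course; the point is only that "degree of integration" is the sort of thing that can, in principle, be quantified.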
Integrated
Information Theory (IIT) isn't only close to Chalmers' view when it
comes to information-equaling-experience; Tononi is also committed to
a form (there are many forms) of panpsychism.
The
problem (if it is a problem) with arguing that
consciousness/experience is information, and that information is
everywhere, is that (as has just been said) even basic objects (or systems) have a degree of
information. Therefore such basic things (or systems) must also have
a degree of consciousness. Or, in IIT speak, all such things
(systems) have a “φ value”, which is the measure of the degree
of information (therefore consciousness) in the system. Thus Chalmers' thermostat may also have a degree of experience. (Or, for
Chalmers, “proto-experience”.)
Clearly
we've entered the territory of panpsychism here. Not surprisingly,
Tononi is happy with panpsychism; even if his position isn't identical to
Chalmers' panprotopsychism.
Interestingly
enough, David Chalmers - in one paper at least - doesn't really tell
us what information is or what he means by the word “information”.
He does tell us, however, that “information is everywhere”. He
also tells us about “complex information processing” and
“simpler information-processing”. I suppose that in the case of a
thermostat, we can guess what information is. Basically, heat and
cold are information. Though are heat and cold information
for the
thermostat? Indeed does that matter? Or is it the case that the actions which are carried out on the heat or cold (by the thermostat) constitute information? Or, perhaps more likely, is it the physical nature (its mechanical and physical innards) of a thermostat that constitutes its information?
To slightly change the subject for a second.
John
Searle has a problem with the overuse of the word “computation”.
He cites the example of a window as a (to use Chalmers' words again) “maximally-simple” computer. Searle
writes:
“...
the window in front of me is a very simple computer. Window open = 1,
window closed = 0. That is, if we accept Turing’s definition
according to which anything to which you can assign a 0 and a 1 is a
computer, then the window is a simple and trivial computer.”
Searle's
basic point is that just about anything can be seen as a computer.
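To see just how cheap that kind of "computation" is, here's a toy sketch (mine, not Searle's) of the window-as-computer: stipulate that open = 1 and closed = 0, and opening or closing the window becomes a one-bit NOT operation.

```python
# A toy "window computer" in the trivial, assign-a-0-and-a-1 sense Searle
# is mocking: we simply stipulate that open = 1 and closed = 0.
OPEN, CLOSED = 1, 0

def toggle(window_state):
    """The window's entire 'instruction set': logical NOT."""
    return 1 - window_state

state = CLOSED
state = toggle(state)   # someone opens the window: 0 -> 1
state = toggle(state)   # someone closes it again: 1 -> 0
print(state)            # prints 0
```

All the computational content here lives in our decision to read the window's states as bits; which is, of course, grist to Searle's mill about observer-relativity.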
Indeed
computers are everywhere – just like Chalmers' experience. Does
this tie in with Chalmers' position on information and
maximally-simple information-processing?
In
other words, does a window contain information? By that I don't mean
the information that may exist in a window's material and mechanical structure.
(According to many, a window - being a physical thing - must contain
information.) I mean to ask whether or not a window - like a
thermostat - has information qua
a technological device which is designed to be both opened and shut.
Searle
will of course conclude that this is an example of
information-for-us.
Searle
also has something to say about information (not just computers). He
writes:
“[Koch]
is not saying that information causes consciousness; he is saying
that certain information just is consciousness, and because
information is everywhere, consciousness is everywhere.”
This
appears to be the same as Chalmers' position. Needless to
say, Searle has a problem with this view. He concludes:
"I
think that if you analyze this carefully, you will see that the view
is incoherent. Consciousness is independent of an observer. I am
conscious no matter what anybody thinks. But information is typically
relative to observers...
“...These
sentences, for example, make sense only relative to our capacity to
interpret them. So you can’t explain consciousness by saying it
consists of information, because information exists only relative to
consciousness.”
As
for thermostats, Searle has something to say on them too.
He
writes:
"I
say about my thermostat that it perceives changes in the temperature;
I say of my carburettor that it knows when to enrich the mixture; and
I say of my computer that its memory is bigger than the memory of the
computer I had last year."
This
is a Searlian way (as with Dennett) of taking an “intentional stance”
towards thermostats. We can treat them - or take them - as
intentional (though inanimate) objects. Or we can take them as as-if
intentional objects.
The
as-if-ness of windows and thermostats is derived from the fact
that these inanimate objects have been designed to perceive, know and
act. Though this is only as-if perception, as-if knowledge, and as-if
action. (Indeed it's only as-if information.) Such things are
dependent on human perception, human knowledge, and human action.
Perception, knowledge and action require real - or intrinsic -
intentionality: not as-if intentionality. Thermostats and windows
have a degree of as-if intentionality, derived from (our) intrinsic
intentionality. However, according to Searle, despite all these qualifications, as-if intentionality is still ‘real’ intentionality; though it's derived from actual (or intrinsic) intentionality.
To
get back to Searle's position on information.
For
one, it's certainly the case that some – or even many –
physicists and mathematicians don't see information in Searle's strictly philosophical or semantic way. In addition, Integrated Information Theory's use of the word
'information' also receives much support in contemporary physics.
This support includes how such things as particles and fields are
seen in informational terms. As for thermodynamics: if there's an
event which affects a dynamic system, then that too can be read as informational input into the system.
Indeed
in the field called pancomputationalism, (just about) anything can be
deemed to be information. In these cases, that information could be
represented or modeled as also being a computational system.
Information
may well become information-for-us to such physicists.
However, it's still information before it becomes
information-for-us.
Perhaps
all this boils down to the definition of the word 'information'.
The way that some physicists define the word will
make it the case that, in Searle's terms, information need not be "observer-relative". On Searle's definition, on the other hand, information must be – or always is – relative to persons (or minds).
Is
there anything more to this dispute than rival definitions? Perhaps
not. However, in one sense there must be one vital distinction to be
made. If information also equals experience, then information not
being dependent on human beings makes a big difference. It means that
such information is information - and therefore experience - regardless
of what we observe or think. However, this is the panpsychist's view;
and the physicists just mentioned (those who accept that information
need not be observer-relative) don't necessarily also accept that
information is the same as experience. Indeed I suspect that most
physicists don't believe that.
Thus
we now have three positions:
i)
Information is relative to observers. (Searle's position.)
ii)
Information exists regardless of observers; though it isn't equal to
experience. (The position of some physicists and philosophers.)
iii)
Information exists regardless of observers and it is also equal to
experience. (Chalmers' position.)
A Thermostat and its Experiences
Firstly,
let me offer Wikipedia's
definition of a thermostat:
“A
thermostat is a component which senses the temperature of a system so
that the system's temperature is maintained near a desired
setpoint...
“A
thermostat exerts control by switching heating or cooling devices on
or off, or by regulating the flow of a heat transfer fluid as needed,
to maintain the correct temperature...”
What
does Chalmers himself mean by the word 'information' when it comes -
specifically - to a thermostat? He
writes:
“Both
[thermostats and connectionist models] take an input, perform a quick
and easy nonlinear transformation on it, and produce an output.”
As
previously stated, in terms of the thermostat at least, information
is information-for-us;
not information for the thermostat itself. After all, thermostats
respond to temperature because we've designed them to do so.
Nonetheless, whatever it's doing (even if designed), it's
still doing.
That is, the thermostat is acting on information. When it's hot, it
does one thing. And when it's cold, it does another thing.
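As a minimal sketch of what that amounts to (the setpoint and dead-band figures below are illustrative, not taken from Chalmers or from any real device), a thermostat's "quick and easy nonlinear transformation" is little more than a threshold rule applied to a temperature reading.

```python
# A minimal thermostat sketch: an input (temperature), a simple nonlinear
# (threshold) transformation, and an output (heating on or off).
SETPOINT = 20.0    # desired temperature in degrees C (illustrative value)
DEAD_BAND = 0.5    # hysteresis, to avoid rapid on/off switching

def thermostat(temperature, heater_on):
    """Bang-bang control: when it's cold it does one thing, when it's hot
    it does another, and in between it leaves things as they are."""
    if temperature < SETPOINT - DEAD_BAND:
        return True     # too cold: heating on
    if temperature > SETPOINT + DEAD_BAND:
        return False    # too hot: heating off
    return heater_on    # within the dead band: no change

print(thermostat(18.0, heater_on=False))  # True
print(thermostat(22.0, heater_on=True))   # False
```

The dead band is an engineering detail rather than anything philosophical; it simply stops the switch from chattering around the setpoint.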
Thus
does a thermostat have as-if
information
(to use Searle's term, which is usually applied to intentionality)? Or
does it have real
(first-order)
information?
In other words, does the fact that a thermostat is designed by human
beings automatically stop it from having experiences which are
themselves determined by its informational
innards?
After
all, humans are also - in a strong sense - designed by their DNA; and humans
certainly have experiences. Thermostats are designed by humans: do
they have experiences?
Finally, in
one piece Chalmers tackles the case of NETtalk and asks us whether
or not it does (or could) instantiate experience. He
writes:
“NETTALK,
then, is not an instantiation of conscious experience; it is only a
model of it.”
Of
course we can now rewrite that passage in the following way:
A
thermostat, then, is not an instantiation of conscious experience;
it is a model of it.
The
question is, then, whether or not Chalmers has mixed up models with
realities (as it were). NETtalk is certainly more complex than a
thermostat. However, Chalmers has often argued that complexity in
itself (in this case at least) may not matter.
The Appeal of Simplicity & Complexity
Chalmers
plays up simplicity. He also
plays down complexity. For example, Chalmers writes that “one
wonders how relevant this whiff of complexity will ultimately be to
the arguments about consciousness”. He goes further when he
says that
“[o]nce
a model with five units, say, is to be regarded as a model of
consciousness, surely a model with one unit will also yield some
insight”.
I
presume that a thermostat has more than “one unit”; though we'd
need to know what exactly a unit is.
Chalmers
also makes what seems to be an obvious point – at least it seems
obvious if one already accepts the information/experience link. He
writes:
“Surely,
somewhere on the continuum between systems with rich and complex
conscious experience and systems with no experience at all, there are
systems with simple conscious experience. A model with superposition
of information seems to be more than we need - why, after all, should
not the simplest cases involve information experienced discretely?”
Can
we go simpler than a thermostat? Perhaps we can if this is all about
information; though that would depend on our position on information.
What about a dot on a piece of paper, which is later erased with a rubber so that the paper becomes completely blank?
Chalmers
also gives a biological (or “real life”) example of this
phenomenon. He writes:
“We
might imagine a traumatized creature that is blind to every other
distinction to which humans are normally sensitive, but which can
still experience hot and cold. Despite the lack of superposition,
this experience would still qualify as a phenomenology.”
At
a prima facie level, it does indeed seem obvious that complexity matters. After all, many theorists have made a strong link
between the complexity of the brain and consciousness. Chalmers
himself acknowledges the (intuitive) appeal of complexity. He writes:
“After
all, does it not seem that this rich superposition of information is
an inessential element of consciousness?”
Of
course Chalmers then rejects this requirement for complexity.
Having
said all that, we can also quickly consider
Philip Goff's
argument here. He argues that there may be “little
minds” (or seats of experience) in the brain, and all of them, on
their own, are very simple. Now, of course, we have the problem of
the “composition” (or "combination") of all these little minds in order to make a
big mind.
What is Simple Experience?
When
Chalmers says that
“[w]here
there is simple information processing, there is simple experience,
and where there is complex information processing, there is complex
experience”
what
does he mean by “simple experience”? What is a simple experience?
How simple can an experience be? Can we even imagine (or conceive) of
such a thing?
I
suppose I can imagine a very simple pain. (Pain can certainly be experienced.) Or would that only be a mild pain, rather than a simple
pain? (Some philosophers have argued that there needs to be more than phenomenology for a pain to be pain.)
What
about a simple visual experience? Well, a thermostat can't have such
a thing. So what simple experiences can a thermostat have? A
thermostat is designed to physically react to the temperature.
However, does it feel the temperature? (We can think of feels which
are either strongly dependent on sense organs or feels which are purely
mental/experiential in nature.) Does a thermostat experience its
innards working? That is, does it experience itself taking in
information and then responding to that information? But what could that possibly mean? In order to experience itself taking in
information, perhaps the thermostat would need to be both an “it”
and also an it capable of experiencing itself as an it.
That, surely, goes way beyond simple experience.
What has just been said may also apply to a single-celled organism. Does it
experience taking in information and then responding to it? Can it
feel that information? It can't see or touch it. So what is
the experience of information (or the taking in and responding to it) when it comes to a single-celled organism? Sure, causal things happen
within a cell. However, things happening within a cell don't - in and
of themselves - tell us that it has an experience of things happening
within it.
Now
what about a mouse? A mouse has a brain and sensory organs. So,
obviously, it's vastly different to a single-celled organism and a
thermostat. Nonetheless, the idea of a very-simple experience is
still problematic.