The word 'information' has many different uses, some of which differ strongly from its everyday sense. Indeed we can use the words of Claude E. Shannon to back this up:
"It
is hardly to be expected that a single concept of information would
satisfactorily account for the numerous possible applications of this
general field." [1949]
The
most important point to realise is that minds (or observers) are
usually thought to be required to make information information.
However, information is also said to exist without minds/observers.
It existed before minds and it will exist after minds. This, of
course, raises lots of philosophical and semantic questions.
It may help to compare information with knowledge. The latter
requires a person, mind or observer. The former (as just stated), may not.
Integrated information theory's use of the word 'information' receives much support in contemporary physics, where such things as particles and fields are now often described in informational terms. As for dynamics: if an event affects a dynamic system, then that event can be read as information.
Indeed, in the field called pancomputationalism, (just about) anything can be deemed to be information. In these cases, that information could be represented and modelled as a computational system.
Consciousness as Integrated Information
It's undoubtedly the case that Giulio Tononi believes that consciousness simply is information. Thus, if that's an identity statement, then we can invert it and say that information is consciousness.
In other words,
consciousness (or experience) = information
Consciousness
doesn't equal just any kind of information; though any kind of
information (embodied in a system) may be conscious to some extent.
Tononi
believes that an informational system can be divided into its parts.
Its parts contain information individually. The whole of the system
also has information. The information of the whole system is over and
above the combined
information of its parts. That means that such extra information
(of that informational system) must
emerge
from the information contained in its parts. This, then, seems to be a
commitment to some kind of emergentism.
The
mathematical measure of that information (in an informational system)
is φ
(phi).
Not only is the system more than its parts: that system also has
degrees of informational integration.
The higher the informational integration, the more likely it is that the informational system will be conscious. Or, alternatively, the higher the degree of integration, the higher the degree of consciousness.
Emergence from Brain Parts?
Again, we
can argue that the IIT position on what it calls “phi” is a
commitment to some form of emergence
in that
an informational system is - according to Christof Koch - “more
than the sum of its parts”. This
is what he calls “synergy”. Nonetheless,
a system can be more than the sum of its parts without any commitment
to strong
emergence. After all, if four matches are shaped into a square, then
that's more than an arbitrary collection of matches; though it's not more than
the sum of its parts. (Four matches scattered on the floor wouldn't
constitute a square.) However, emergentists have traditionally believed that consciousness is more than the sum of the brain's parts.
Indeed, in a strong sense, it can even be said that consciousness
itself has no parts. Unlike water and its parts (individual H2O molecules), consciousness is over and above what gives rise to it (whatever that is). It's been seen as a truly emergent phenomenon. Water isn't, strictly speaking, strongly emergent from H2O molecules: it's a large collection of H2O molecules. (Water = H2O molecules.) Having said that, in a sense water does weakly emerge from a large collection of H2O molecules.
The
idea of the whole being more than the sum of its parts has been given
concrete form in the example of the brain and its parts. IIT tells us
that the individual neurons, ganglia, amygdala, visual cortex, etc.
each have “non-zero phi”. This means that if they're taken
individually, they're all (tiny) spaces of consciousness unto
themselves. However, if you lump all these parts together (which is
obviously the case with the human brain), then the entire brain has
more phi than each of its parts taken individually, and more phi than its parts taken collectively. Moreover, the brain as a
whole takes over (or “excludes”) the phi of the parts. Thus the
brain, as we know, works as a unit; even if there are parts with
their own specific roles (not to mention the philosopher's
“modules”).
Causation and Information
Information
is both causal and structural.
Say
that we've a given structure (or pattern) x.
That x has a causal effect on structure (or pattern) y.
Clearly x's
effect on y can occur without minds. (At least if you're not an idealist or an
extreme anti-realist/verificationist.)
Instead of talking about x and y, let's give a concrete example.
Take
the pattern (or structure) of a sample of DNA. That DNA sample
causally affects and then brings about the development (in particular
ways) of the physical nature of a particular organism (in conjunction
with the environment, etc.). This would occur regardless of
observers. That sample of DNA contains (or is!) information. The
DNA's information
causally brings about physical changes; which, in some cases, can
themselves be seen as information.
Some
commentators also use the word “representation” within this context.
Here information is deemed to be “potential representation”.
Clearly, then, representations are representations to minds or
observers; even if the information - which will become a
representation - isn't so. Such examples of information aren't
designed at all (except, as it were, by nature). In addition, just as
information can become a representation, so it can also become
knowledge. It can be said that although a representation of information may be enriched with concepts and cognitive activity, this is much more the case with information in the guise of knowledge.
Panpsychism?
The
problem with arguing that consciousness is information is that
information is everywhere: even basic objects (or systems) have a
degree of information. Therefore such basic things (or systems) must
also have a degree of consciousness. Or, in IIT speak, all such
things (systems) have a “φ value”; which is the measure of the
degree of information (therefore consciousness) in the system. Thus David Chalmers' thermostat [1997] will have a degree of consciousness (or, for Chalmers, proto-experience).
It's
here that we enter the territory of panpsychism. Not surprisingly,
Tononi is happy with panpsychism; even if his position isn't identical to Chalmers' panprotopsychism.
Scott
Aaronson,
for one,
states one problem with the consciousness-is-everywhere
idea in the following:
“[IIT]
unavoidably predicts vast amounts of consciousness in physical
systems that no sane person would regard as particularly ‘conscious’
at all: indeed, systems that do nothing but apply a low-density
parity-check code, or other simple transformations of their input
data. Moreover, IIT predicts not merely that these systems are
‘slightly’ conscious (which would be fine), but that they can be
unboundedly more conscious than humans are.”
Here
again it probably needs to be stated that if consciousness
= information
(or that information
– sometimes? - equals consciousness),
then consciousness will indeed be everywhere.
***************************************
Add-on: John Searle on Information
How
can information be information without minds or observers?
John
Searle denies that there can be information without minds/observers.
Perhaps this is simply a semantic dispute. After all, the things which pass for information certainly exist, and they've been studied - in great detail! - from an informational point of view. However, they don't pass the Searlian tests discussed below; though that may not matter very much.
Take,
specifically, Searle's
position
as it was expressed in a 2013 review (in The
New York Review of Books)
of Christof Koch's book Consciousness.
In that piece Searle complained that IIT depends on a
misappropriation of the concept [information]:
“[Koch]
is not saying that information causes consciousness; he is saying
that certain information just is consciousness, and because
information is everywhere, consciousness is everywhere. I think that
if you analyze this carefully, you will see that the view is
incoherent. Consciousness is independent of an observer. I am
conscious no matter what anybody thinks. But information is typically
relative to observers...
“...These
sentences, for example, make sense only relative to our capacity to
interpret them. So you can’t explain consciousness by saying it
consists of information, because information exists only relative to
consciousness.”
[2013]
If
information is the propagation of cause and effect within a given
system, then John Searle's position must be wrong. Searle may say,
then, that such a thing isn't information until it becomes
information
in a mind or according to observers. (Incidentally, there may be
anti-realist problems with positing systems which are completely free
of minds.)
Searle argues that causes and effects - as well as the systems to which they belong - don't have information independently of minds. However, that doesn't stop such information from becoming information (in Searle's sense) once it's directly observed.
Anthropomorphically,
the system communicates to minds. Or minds read the system's messages.
Searle's
position on information can actually be said to be a position on what's called Shannon
information.
This kind of information is “observer-relative information”. In
other words, it doesn't exist as information until an observer takes
it as information.
Thus when a digital camera takes a picture of a cat, each photodiode works in causal isolation from the other photodiodes. In other words,
unlike the bits of consciousness, the bits of a photograph (before
it's viewed) aren't integrated.
Only when a mind perceives that photo are the bits integrated.
IIT,
therefore, has a notion of “intrinsic information”.
Take
the brain's neurons. Such things do communicate with each other in
terms of causes and effects. (Unlike photodiodes?) It's said that the
brain's information isn't observer-relative. Does this contradict Searle's position? IIT claims that consciousness-as-information isn't relative to outside observers; though is it relative to the brain and to consciousness itself?
There's
an interesting analogy here which was also cited by Searle. In his
arguments against Strong Artificial Intelligence (strong AI) and the
mind-as-computer idea, he basically states that computers –
like information - are everywhere. He writes:
“...
the window in front of me is a very simple computer. Window open = 1,
window closed = 0. That is, if we accept Turing’s definition
according to which anything to which you can assign a 0 and a 1 is a
computer, then the window is a simple and trivial computer.” [1997]
Clearly,
in these senses, an open and shut window also contains information.
Perhaps it couldn't be deemed a computer if the window's two
positions didn't also contain information. Thus, just as the window
is only a computer to minds/observers, so too is that window's
information only information to minds/observers. The window, in
Searle speak, is an
as-if computer
which contains as-if
information. And so too are Chalmers' thermostat and Koch's photodiode.
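Searle's loose reading of Turing - anything to which you can assign a 0 and a 1 is a computer - can be made literal in a few lines. The class and the open/closed encoding below are illustrative assumptions, not Searle's own formulation:

```python
class Window:
    """A one-bit 'computer': open = 1, closed = 0.

    The bit only counts as information relative to an observer who
    adopts this encoding; the window itself merely opens and closes.
    """
    def __init__(self, is_open=False):
        self.is_open = is_open

    def toggle(self):
        """Flip the window's state - the 'computation'."""
        self.is_open = not self.is_open

    def read_bit(self):
        """An observer reading the state as a bit."""
        return 1 if self.is_open else 0

w = Window()
print(w.read_bit())  # 0 (closed)
w.toggle()
print(w.read_bit())  # 1 (open)
```

The triviality is the point: nothing in the code makes the window's state information intrinsically; the encoding in `read_bit` is supplied by us.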
Here's
Searle again:
"I
say about my thermostat that it perceives changes in the temperature;
I say of my carburettor that it knows when to enrich the mixture; and
I say of my computer that its memory is bigger than the memory of the
computer I had last year."
Another
Searlian (as well as Dennettian) way of looking at thermostats and computers is that we can
take an “intentional stance” towards them. We can treat them - or
take them - as intentional (though inanimate) objects. Or we can take
them as as-if
intentional objects.
The
as-if-ness
of windows, thermostats and computers is derived from the fact that
these inanimate objects have been designed to perceive, know and
memorise. Though this is only as-if
perception, as-if knowledge, and as-if memory. Indeed it is only
as-if information. Such things are dependent on human perception, human
knowledge, and human memory. Perception, knowledge and memory require
real - or intrinsic - intentionality; not as-if intentionality.
Thermostats, windows, and computers have a degree of as-if
intentionality, derived from (our) intrinsic intentionality. However,
despite all these qualifications, as-if intentionality isn't - for Searle - intrinsic (or 'real') intentionality at all: it's merely ascribed, derived from our own actual intentionality.
References
Aaronson,
Scott (2014) 'Why
I am Not an Integrated Information Theorist'.
Chalmers,
David (1995) 'Facing
up to the problem of consciousness'.
Koch,
Christof (2012) Consciousness:
Confessions of a Romantic Reductionist.
Searle,
John (1997) The
Mystery of Consciousness.
Shannon,
C.E. (1949) A
Mathematical Theory of Communication.
Tononi, Giulio (2015) 'Integrated Information Theory'.
Next: 'Integrated Information Theory: Panpsychism' (5)