Monday, 27 March 2017

Integrated Information Theory: From Consciousness to the Brain (2)

Integrated Information Theory (IIT) demands a physical explanation of consciousness. This rules out, for example, purely functional explanations, as well as unwarranted correlations between consciousness and the physical. Indeed, if consciousness is identical to the physical (not merely correlated with it or caused by it), then clearly the physical (as information, etc.) is paramount in the IIT picture.

All this is given a quasi-logical explanation in terms of axioms and postulates. That is, there must be identity claims between IIT's "axioms of consciousness" and postulates about the physical. Moreover, the axioms fulfill the role of premises. These premises lead to the physical postulates.

So what is the nature of that relation between an axiom and its postulate? How do we connect, for example, the conscious state with the neuroscientific explanation of that conscious state? How is the ontological/explanatory gap crossed?

As hinted at earlier, the identity of consciousness and the physical isn't a question of the latter causing or bringing about the former. Thus, if x and y are identical, then x cannot cause y and y cannot cause x. These identities stretch even as far as phenomenology in that the phenomenology of consciousness at time t is identical with the physical properties described by the postulates at time t.

More technically, Giulio Tononi (2008) identifies conscious states with integrated information. Moreover, when information is integrated (by whichever physical system – not only the brain) in a complicated enough manner (even if minimally complicated), that will be both necessary and sufficient to constitute (not cause or create) a conscious state or experience.

Explaining IIT's identity claims (between the axioms of consciousness and the physical postulates) can also be done by stating what Tononi does not believe about consciousness. Tononi doesn't believe that

i) the brain's physical features (described by the postulates) cause or bring about consciousness;
ii) the brain's physical features (described by the postulates) are merely necessary and sufficient conditions for consciousness.

On the identity view, the physical features don't stand in a causal or conditional relation to consciousness at all: they are consciousness.


Where we have the physical, we must also have the causal. And indeed IIT stresses causality. If consciousness exists (as the first axiom states), then it must be causal in nature. It must “make a causal difference”. Thus epiphenomenalism, for one, is ruled out.

Again, consciousness itself must have causal power. Therefore this isn't a picture of the physical brain causing consciousness or even subserving consciousness. It is said, in IIT, that “consciousness exists from its own perspective”. This means that a conscious state qua conscious state (or experience qua experience) must have causal power both on itself and on its exterior. Indeed the first axiom (of existence) and its postulate require that a conscious state has what's called a “cause-effect power”. That is, it must be capable of having an effect on behaviour or actions (such as picking something up) as well as a “power over itself”. (Such as resulting in the modification of a belief caused by that conscious state?) This, as stated earlier, clearly rules out any form of epiphenomenalism.

Now does this mean that a belief has causal powers (as such)? Does this mean that the experience of yellow has – or could have – causal powers? Perhaps because beliefs aren't entirely phenomenological, and spend most of their time in the “belief box” (according to non-eliminative accounts), they aren't a good candidate for having causal powers in this phenomenological sense. However, the experience of yellow has causal power if it can cause a subject to pick up, say, a lemon (qua lemon).

From Consciousness to Brain Again

Even if IIT starts with consciousness, it's very hard, intuitively, to see how it would be at all possible to move to the postulated physical aspects (not bases or causes) of a conscious state. How would that work? How, even in principle, can we move from consciousness (or phenomenology) to the physical aspects of that conscious state? If there's an ontological/explanatory gap between the physical and the mental, then there may well be an ontological/explanatory gap between consciousness and the physical. (There'll also be epistemological gaps.) So how does this IIT inversion solve any of these problems?

The trick is supposed to be pulled off by an analysis of the phenomenology of a conscious state (or experience) and then accounting for that with the parallel states of the physical system which is the physical aspect of that conscious state. (Think here of Spinoza and Donald Davidson's "anomalous monism" – or substance monism/conceptual dualism – in which a single substance has two "modes".) But what does that mean? The ontological/explanatory gap, sure enough, shows its face here just as much as it does anywhere else in the philosophy of consciousness. Isn't this a case of comparing oranges with apples – only a whole lot more extreme?

An additional problem is to explain how the physical modes/aspects of a conscious state must be “constrained” by the properties of that conscious state (or vice versa?). Again, what does that actually mean? In theory it would be easy to find some kind of structural physical correlates of a conscious state. The problem would be to make sense of – and justify – those correlations. For example, I could correlate my wearing black shoes with Bradford City winning away. Clearly, in this instance, “correlation doesn't imply causation”. However, if IIT doesn't accept that the physical causes conscious states, but holds that it is conscious states (or a mode thereof), then, on this example, my black shoes may actually be Bradford City winning away (rather than the shoes causing that win)... Of course shoes and football victories aren't modes/aspects of the same thing. Thus the comparison doesn't work.

It doesn't immediately help, either, when IIT employs (quasi?)-logical terms to explain and account for these different aspects/modes of the same thing. Can we legitimately move from the axioms of a conscious experience to the essential properties (named “postulates”) of the physical modes/aspects of that conscious experience?

Here we're meant to be dealing with the "intrinsic" properties of experience which are then tied to the (intrinsic?) properties of the physical aspects/modes of that experience. Moreover, every single experience is meant to have its own axiom/s.

Nonetheless, if an axiomatic premise alone doesn't deductively entail (or even imply) its postulate, then why call it an “axiom” at all?

Tononi (2015) explains this in terms of "inference to the best explanation" (otherwise called abduction). Here, instead of a strict logical deduction from a phenomenological axiom to a physical postulate, the postulates have (merely) statistical inductive support. Tononi believes that such an abduction shows us that conscious systems have “cause-effect power over themselves”. Clearly, behavioural and neuroscientific evidence may show this to be the case.


Sceptically it may be said that the "ontological gap" (or the "hard problem") appears to have been bridged (or even solved) by mere phraseology. What I mean by this is that IIT identifies a conscious state with physical things in the brain. (Namely, the physical elements and dynamics of the brain.) These things are measurable. Thus, if that's the case, then a conscious state is measurable in that the dynamical and physical reality of the brain (at a given time) is measurable. Indeed in IIT it's even said that something called the “phi metric” can “quantify consciousness”.
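To give a rough sense of what that metric is supposed to do, here is a simplified sketch of IIT's definitions (not Tononi's exact notation): Φ measures how much a system's cause-effect structure exceeds that of its least-disruptive partition.

```latex
% Schematic definition of integrated information (simplified sketch):
% CES(S) is the cause-effect structure of system S, D is a distance
% measure between such structures, and MIP is the "minimum information
% partition" -- the cut that makes the least difference to CES(S).
\Phi(S) = D\big(\mathrm{CES}(S),\, \mathrm{CES}(S^{\mathrm{MIP}})\big),
\qquad
S^{\mathrm{MIP}} = \operatorname*{arg\,min}_{P} D\big(\mathrm{CES}(S),\, \mathrm{CES}(S^{P})\big)
```

On this schematic reading, Φ is zero whenever some partition severs the system without loss, and positive only when the whole specifies something over and above its parts taken separately.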

Is the hard problem of consciousness solved merely through this process of identification?

The IIT theorist may reply: What more do you want?! However, then we can reply: Correlations between conscious states and brain states (or even the brain's causal necessitation of a conscious state) aren't themselves explanations of consciousness. Indeed isn't the identification of conscious states with the physical and dynamical elements of the brain what philosophers have done for decades? Do IIT's new technical/scientific terms, and references to “information”, give us anything fundamentally new in this long-running debate on the nature of consciousness?

*) Next: 'Integrated Information Theory: Structure (3)'


Tononi, Giulio (2008) 'Consciousness as Integrated Information: a Provisional Manifesto'.
