Sunday, 5 January 2025

Anil Seth: Intelligence ≠ Consciousness ≠ Intelligence

The critics of both panpsychism and integrated information theory (IIT) rely on a strong (implicit, though sometimes explicit) connection between intelligence and consciousness. In basic terms, they believe that because very basic entities can’t be deemed intelligent, they can’t be deemed to instantiate consciousness either.

(i) Introduction
(ii) Thermostats, Computers and Animals
(iii) According to Seth, Chalmers and Penrose, Functionalism Fails
(iv) John Searle on Biological Brains


Without actually mentioning panpsychism and integrated information theory (IIT), the British neuroscientist Anil Seth states the following:

“This is the assumption that consciousness and intelligence are intimately, even constitutively, linked: that consciousness will just come along for the ride.”

Now let’s take an extreme version of this position.

If intelligence and consciousness (with a stress on the former) are necessarily linked, then (depending on how “intelligence” is actually defined in the first place) thermostats can’t be conscious. And neither can ants, mice and even human babies.

Seth draws his own conclusions from this when he continues with these words:

“[T]he tendency to conflate consciousness with intelligence traces to a pernicious anthropocentrism by which we over-interpret the world through the distorting lenses of our own values and experiences. *We* are conscious, *we* are intelligent, and *we* are so species-proud of our self-declared intelligence that we assume that intelligence is inextricably linked with our conscious status and vice versa.”

Seth seems to pick up on a Cartesian strand here when he adds the following warning:

“If we persist in assuming that consciousness is intrinsically tied to intelligence, we may be too eager to attribute consciousness to artificial systems that appear to be intelligent, and too quick to deny it to other systems — such as other animals — that fail to match up to our questionable standards of cognitive competence.”

It’s not entirely clear that any collective “we” has ever “conflate[d] consciousness with intelligence”. Perhaps only a single philosophical tradition (i.e., Cartesianism) did so. However, it’s still up for debate how much of an influence that tradition had outside of philosophy and science.

In addition, not all other cultures and traditions — i.e., outside the West — have had benevolent and sympathetic attitudes to all animals. Added to that is the fact that individuals or collectives can be cruel to animals, deny them dignity and worth, etc. and still believe that they are indeed conscious, and, to a degree, even intelligent.

So there’s a danger of conflating Descartes’ own technical and somewhat arcane philosophical position on animals with what Western “folk” believed. [See Note 1.]

To sum up. It’s hard to believe that there are any strict Cartesians nowadays. Thus, anthropocentrism itself may not be as big a problem as Seth believes.

It depends…

It depends on the philosophical ism being discussed, and also on what particular philosophers and scientists have to say on the matters in hand.

All that said, and as ever, much of this depends on what’s meant by the words “consciousness” and “intelligence” in the first place!

Thus, if the word “intelligence” is taken (as Seth puts it) “anthropocentrically”, then we’re driven to a Cartesian position which denies most (even all?) animals consciousness. From a different direction, we’re also driven to a position which laughs at panpsychists for supposedly believing that stones or atoms “think”. [See my ‘Sabine Hossenfelder Doesn’t Think… About Panpsychism’.]

Thermostats, Computers and Animals

Against both anthropocentrism and Cartesianism, there’s a passage from Anil Seth (which doesn’t express his own position) which actually widens the domain of consciousness rather than narrowing it. However, even though it widens it to all animals, it doesn’t also do so to computers, robots, thermostats, etc. Seth writes:

“For some people — including some AI researchers — anything that responds to stimulation, that learns something, or that behaves so as to maximise a reward or achieve a goal is conscious.”

Thus, according to this account, thermostats may not be deemed to be conscious because they don’t (again, depending on definitions) learn anything, and they don’t do anything to maximise a reward or achieve a goal.

However, thermostats do respond to stimulation.

In any case, Seth’s account of consciousness above can be applied to literally all animals.

So is this an account which leaves out intelligence?

Or is responding to stimulation, learning something, and behaving so as to maximise a reward or achieve a goal not only constitutive of being conscious, but also of being intelligent?

Indeed, does it matter?

Which sets of data and arguments could possibly help us decide which is the best term (i.e., “intelligent” or “conscious”) to use here?

Still, Seth warns us that consciousness and intelligence are often conflated. Yet they’re sometimes linked (i.e., in the passage above) in such a way as to work against anthropocentrism, and toward a widening of the domains of both intelligence and consciousness.

According to Seth, Chalmers and Penrose, Functionalism Fails

Anil Seth argues that functionalism is at the heart both of AI theory generally and of the belief that intelligence (or at least “intelligent behaviour”) and consciousness are intrinsically connected.

So first things first.

According to Seth’s account of functionalism,

“what matters for consciousness is what a system *does*”.

What does the relevant system do?

In this case, it “transforms inputs into outputs”. And if it does that “in the right way”, then, according to some functionalists, “there will be consciousness”.

Thus, on this definition, a thermostat must be conscious — at least to some degree…
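To make that functionalist framing concrete, here’s a minimal sketch (my own toy illustration, with made-up names and a made-up setpoint, not anything taken from Seth or Chalmers) of a thermostat treated purely as an input-to-output transformation:

```python
def thermostat(temperature_reading: float, setpoint: float = 20.0) -> str:
    """A thermostat described purely functionally: an input (a temperature)
    is transformed into an output (a control signal). On a strict
    functionalist reading, this mapping exhausts what the device does."""
    if temperature_reading < setpoint:
        return "heating_on"
    return "heating_off"

# The same input-output profile could be realised in a bimetallic strip,
# in silicon, or in any other substrate -- which is the functionalist
# point about substrate-independence.
print(thermostat(18.5))  # -> heating_on
print(thermostat(22.0))  # -> heating_off
```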

The Australian philosopher David Chalmers seems to agree.

In his ‘What is it like to be a thermostat?’, Chalmers writes:

“[Thermostats] take an input, perform a quick and easy nonlinear transformation on it, and produce an output.”

What Chalmers adds to this story, however, is the important notion (and reality) of information. [See here.] Still, let’s just note that ants and viruses take inputs too, and they perform quick and easy nonlinear transformations on such inputs to produce outputs. (It can be doubted that a thermostat’s transformations are, in fact, nonlinear.)

So what about a paramecium and its own inputs and outputs?

The mathematical physicist Roger Penrose writes:

“For she [a paramecium] swims about her pond with her numerous tiny hairlike legs — the cilia — darting in the direction of bacterial food which she senses using a variety of mechanisms, or retreating at the prospect of danger, ready to swim off in another direction. She can also negotiate obstructions by swimming around them. Moreover, she can apparently even learn from her past experiences [].”

Chalmers has referred to proto-experiences when discussing a thermostat (although he isn’t actually quoted doing so above), and here we have Penrose using the unadulterated word “experiences” in reference to a paramecium.

As already stated, functionalists stress… well, functions. In other words, they stress what systems do. Thus, functionalists also stress substrate-independence. Yet, as just mentioned, Penrose is actually stressing biology itself.

In more detail.

Penrose believes that ants are one step ahead of computers and other artificial entities when he claims that the

“actual capabilities of an ant seem to outstrip by far, anything that has been achieved by the standard procedures of AI”.

The molecular biologist, biophysicist and neuroscientist Francis Crick also argued that the study of consciousness must be a biological pursuit.

According to Crick, psychologists (as well as philosophers) have

“treated the brain as a black box, which can be understood in terms merely of inputs and outputs rather than of internal mechanisms”.

The biologist, neuroscientist and Nobel laureate Gerald Edelman (1929–2014) is also said to have held the position that the mind

“can only be understood from a biological standpoint, not through physics or computer science or other approaches that ignore the structure of the brain”.

All this brings us to the more (philosophically) detailed account of these issues as offered to us by the American philosopher John Searle.

John Searle on Biological Brains

Searle’s basic position is that if functionalists ignore the physical biology of brains and nervous systems and instead focus exclusively on syntax, computations or functions, then that will lead to a kind of 21st-century mind-body dualism. What he means by this is that much AI theory and functionalism creates a radical disjunction between the actual physical reality of the brain and how we explain (or account for) consciousness, as well as intentionality and the mind generally.

Searle himself wrote the following words:

“I believe we are now at a point where we can address this problem as a biological problem [of consciousness] like any other. For decades research has been impeded by two mistaken views: first, that consciousness is just a special sort of computer program, a special software in the hardware of the brain; and second that consciousness was just a matter of information processing. The right sort of information processing — or on some views any sort of information processing — would be sufficient to guarantee consciousness. [] it is important to remind ourselves how profoundly anti-biological these views are. On these views brains do not really matter. We just happen to be implemented in brains, but any hardware that could carry the program or process the information would do just as well. I believe, on the contrary, that understanding the nature of consciousness crucially requires understanding how brain processes cause and realize consciousness. ”

He continued:

“Perhaps when we understand how brains do that, we can build conscious artifacts using some nonbiological materials that duplicate, and not merely simulate, the causal powers that brains have. But first we need to understand how brains do it.”

To Searle himself, it’s mainly about what he calls “causal powers”.

This refers to the fact (or possibility) that a certain level of complexity is what’s required (as against panpsychism, and perhaps against integrated information theory too) to bring about those causal powers which are necessary for consciousness (as well as for intentionality and mind generally).

Despite that, Searle never argues that biological brains are the only things capable, in principle, of bringing about consciousness and (genuine?) intelligence. (Therefore, in Searle-speak, semantics.) He only argues that biological brains are the only things known to exist which are complex enough to do so. In other words, Searle’s position is that only brains do give rise to minds, not that only brains can give rise to minds. He’s emphasising an empirical fact, while not denying the logical, metaphysical and even natural possibility that other things could bring forth consciousness and intelligence.


Note:

It can be argued that Descartes’ view on animals filtered down to the folk. On the other hand, it can also be argued that Descartes himself latched onto preexisting (Christian?) views about animals.

It’s also worth noting here that Descartes stressed mind, not consciousness itself.

(*) See my ‘Neuroscientist Anil Seth Links Panpsychism To Integrated Information Theory’ and ‘Anil Seth: Consciousness ≠ Integrated Information’.





Sunday, 1 September 2024

Neuroscientist Anil Seth Links Panpsychism To Integrated Information Theory

 

(i) Introduction
(ii) Integrated Information Theory and Panpsychism
(iii) David Chalmers on Conscious Thermostats
(iv) Is Information Fundamental?

Introduction

Firstly, let me quote the British neuroscientist Anil Seth offering his broad view of panpsychism. He writes:

“Panpsychism is the idea that consciousness is a fundamental property of the universe, alongside other fundamental properties such as mass/energy and charge; that is present to some degree everywhere and in everything.”

Seth then immediately tackles the subject of “silly” panpsychism when he continues with the following words:

“People sometimes make fun of panpsychism for claiming things like stones and spoons are conscious in the same sort of way that you and I are, but these are usually deliberate misconstruals designed to make it look silly. There are more sophisticated versions of the idea [].”

More relevantly to this essay, an important way of linking panpsychism to integrated information theory (IIT) can be seen when comparing Seth’s words that

“[p]anpsychism is the idea that consciousness is a fundamental property of the universe”

to his words (written elsewhere) that integrated information theory

“also implies that *information itself* exists — that it has some definite ontological status in our universe — a status like mass/energy and electrical charge”.

Here we have two rival views as to what is fundamental in the universe…

Or are they rivals?

Integrated Information Theory and Panpsychism

Integrated information theorists don’t believe that all (physical) entities (or “systems”) instantiate consciousness. The main reason for this is that not every… thing has the required level of integrated information to be conscious.

In broad and general terms, most integrated information theorists (along with Anil Seth himself) emphasise complexity and integration. Thus, unlike panpsychists, they also focus almost entirely on biological brains.

Seth offers his readers his own technical take on integration, information and complexity when he cites the example of the molecules in a gas. He claims that this

“kind of system has maximum information — maximum randomness — but shows no integration at all, because every element is independent from each other”.

On the other hand, we also have Seth’s other example of a “crystal lattice”. In this case, “all the elements do exactly the same thing”. Thus, here we have

“maximum integration, but almost no information, because there are very few possible states that the system can be in”.
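Seth’s contrast between information and integration can be made concrete with a small numerical toy. The sketch below is my own illustration (not Seth’s or Tononi’s), and it uses plain Shannon entropy and mutual information as crude stand-ins for “information” and “integration” rather than IIT’s actual Φ measure:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_elements = 100_000, 8

# "Gas": every element flips independently -- maximal randomness/information,
# but knowing one half of the system tells you nothing about the other half.
gas = rng.integers(0, 2, size=(n_samples, n_elements))

# "Crystal lattice": every element copies one and the same coin flip -- the
# halves fully determine each other, but the whole system only ever occupies
# two states, so it carries very little information.
seed_bit = rng.integers(0, 2, size=(n_samples, 1))
lattice = np.repeat(seed_bit, n_elements, axis=1)

def entropy_bits(states: np.ndarray) -> float:
    """Empirical Shannon entropy of the joint state, in bits."""
    _, counts = np.unique(states, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def integration_proxy_bits(states: np.ndarray) -> float:
    """Mutual information between the two halves of the system -- a crude
    stand-in for 'integration' (this is NOT IIT's Phi)."""
    half = states.shape[1] // 2
    a, b = states[:, :half], states[:, half:]
    return entropy_bits(a) + entropy_bits(b) - entropy_bits(states)

for name, system in [("gas", gas), ("lattice", lattice)]:
    print(f"{name}: information ~ {entropy_bits(system):.2f} bits, "
          f"integration proxy ~ {integration_proxy_bits(system):.2f} bits")

# Expected (roughly): the gas shows ~8 bits of information but ~0 integration;
# the lattice shows ~1 bit of information but ~1 bit of integration.
```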

Interestingly enough, the physicist Max Tegmark also mentions gases. And he too uses integrated information theory to distinguish conscious matter from other physical systems such as gases, liquids and solids. Indeed, he backs up both Anil Seth’s and Giulio Tononi’s positions when he tells us that consciousness is dependent upon “information, integration, independence, dynamics, and utility principles”.

Now let’s tackle panpsychism more broadly.

Panpsychism

The problem with arguing that consciousness (or experience) is integrated information, and that information is everywhere, is that even very simple objects (or systems) must instantiate (or contain) a degree of (integrated?) information. Therefore, such basic objects must also have a degree of consciousness. Or, in the language of integrated information theory, all such objects (or systems) must have a “Φ value”.

Perhaps, then, we’ve entered the territory of panpsychism here.

Not surprisingly, Giulio Tononi’s position does touch on panpsychism — even if his position isn’t identical to that of panpsychists. That said, he’s written conflicting things about this particular philosophical ism.

For example, Tononi wrote the following words:

“Unlike panpsychism, however, IIT clearly implies that not everything is conscious.”

Despite all that, and to repeat, IIT has it that even basic objects have a nonzero degree of Φ. This would mean that consciousness is almost everywhere — if only to a rudimentary degree (as with the “proto-experience” of panpsychists).

In any case, the argument that IIT is not a kind of panpsychism is at odds with what the philosophers David Chalmers and John Searle believe. They do take IIT to be a form of panpsychism. [See here and here.] What’s more, the German-American neurophysiologist and neuroscientist Christof Koch (Giulio Tononi’s co-worker) has even claimed that IIT is a “scientifically refined version” of panpsychism.

Still, if we accept a strong (indeed a necessary) link between consciousness and integrated information, then an ant or a virus must have a “non-zero degree of consciousness”.

Indeed, this could be true of a thermostat too!

David Chalmers on Conscious Thermostats

In his ‘What is it like to be a thermostat?’, David Chalmers writes:

“[Thermostats] take an input, perform a quick and easy nonlinear transformation on it, and produce an output.”

What does Chalmers mean by the word ‘information’ when it comes — specifically — to a thermostat?

Basically, heat and cold (i.e., all variations in temperature in a given environment) can be seen as bits of information. However, are heat and cold information for a thermostat? More relevantly, does that even matter in this IIT-panpsychism context?

Or is it the case that the actions (i.e., cases of processing) which are carried out by the thermostat constitute information? Alternatively, perhaps it’s the physical nature of a thermostat (its mechanical and material innards) that constitutes its information.

In terms of the thermostat at least, surely information is information-for-us, not information for the thermostat itself. After all, a thermostat responds to changes in temperature because we’ve designed it to do so…

Nonetheless, whatever a thermostat is doing (even if designed), it’s still doing. That is, the thermostat is acting on changes in temperature. (When it’s hot, it does one thing. And when it’s cold, it does another thing.)

Thus, does a thermostat have (to use John Searle’s term) as-if information? Or does it have real (first-order) information? In other words, does the fact that a thermostat is designed by human beings automatically stop it from having experiences which are themselves determined by its informational innards and/or nature?

To move away from thermostats.

Does the fact that a computer (or robot) is designed by human persons — and created out of synthetic materials — create any necessary or automatic problems for artificial consciousness?

After all, humans are also — in a strong, if metaphorical, sense — designed by their DNA, and we certainly have experiences.

Thermostats are designed by human persons: do the former have experiences too?

David Chalmers also tackled (way back in 1996) the case of the artificial neural network NETtalk, and asked us whether or not it does (or could) instantiate conscious experience. He wrote:

“NETTALK, then, is not an instantiation of conscious experience; it is only a model of it.”

John Searle had something to say on thermostats too:

“I say about my thermostat that it perceives changes in the temperature [].”

This is Searle’s way (as with Daniel Dennett) of taking an intentional stance towards thermostats. That is, we can treat them (or take them) as intentional objects. We can also take them as as-if intentional objects.

On Searle’s view, then, the as-if-intentional nature of thermostats is derived from the fact that these inanimate objects have been designed to (as it were) perceive, know and act. However, this is only as-if perception, as-if knowledge and as-if action. (All this involves as-if information too.) Thus, such things are dependent on human perception and human knowledge. Yet such as-if perception, as-if knowledge and as-if action require real (or “intrinsic”) intentionality.

This must mean that Chalmers’ thermostat has a degree of as-if intentionality too, which is derived from (our) intrinsic intentionality.

Now let’s jump from conscious thermostats to conscious nations.

The following passage is Anil Seth’s reference to the China brain thought experiment:

“The China brain thought experiment considers what would happen if each member of the Chinese nation were asked to simulate the action of one neuron in the brain, using telephones or walkie-talkies to simulate the axons and dendrites that connect neurons. Would this arrangement have a mind or consciousness in the same way that brains do?”

So, according to Seth, at the other end of the scale it’s also the case that “an entire country [could] be conscious”. What’s more, if that were the case, then we’d also need to decide if “one country [could] be more conscious than another”.

Let’s now move on from integrated information, and simply tackle information itself.

Is Information Fundamental?

Anil Seth discusses the idea that integrated information theorists see information as being fundamental. He writes:

“[A]nother weirdness of IIT is that by making the strong claim that PHI *is* consciousness, IIT also implies that *information itself* exists — that it has some definite ontological status in our universe — a status like mass/energy and electrical charge.”

There are problems with this position.

The science writer Philip Ball quotes the words of the physicist Christopher Fuchs to express some of these problems. Firstly, Ball writes:

“Christopher Fuchs sees these insights as a necessary corrective to the way quantum information theory has tended to propagate the notion that information is something objective and real — which is to say, ontic."

Ball then quotes Fuchs directly:

“‘It is amazing how many people talk about information as if it is simply some new kind of objective quantity in physics, like energy, but measured in bits instead of ergs. You’ll often hear information spoken of as if it’s a new fluid that physics has only recently taken note of.’"

Finally, Ball sums up this issue with the following words:

“In contrast, [Fuchs] argues, what else can information possibly be except an expression of what we think we know?”

That passage can be read as arguing that stuff gives off information, rather than stuff actually being information in and of itself.

Yet this position conflicts with what some philosophers and physicists believe. That is, such people believe that information is in no way mind-dependent. Indeed, they believe that information is information regardless of minds, persons, observers, experiments, tests, etc.

So Fuchs is (at least partly) at one with the philosopher John Searle in rejecting this hypostatisation of information.

That said, information may well become (what Searle calls) information-for-us for such information-based physicists. Yet it’s still regarded as information even before it becomes information-for-us.

There is a midway position here, as expressed by the theoretical physicist Carlo Rovelli.

Rovelli writes very loosely about information here:

“[T]he white ball in my hand is black. We’re dealing with physical facts, not mental notions. A ball has information, in this sense, even if the ball does not have mental states, just as a USB storage device contains information [].”

More directly on the theme of observers or scientists:

“But the effective way of continuing to exist in a changing environment is to manage correlations with the external world better, that is to say, information; to collect, store, transmit and elaborate information.”
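Rovelli’s talk of “managing correlations” can be read in the standard Shannon sense, in which the information one physical system carries about another just is their statistical correlation. On that textbook formalisation (my gloss, not Rovelli’s own wording), the mutual information between two systems X and Y is

I(X; Y) = H(X) + H(Y) − H(X, Y)

where H is Shannon entropy. On this reading, a ball, a USB stick or a cell “has information” about something else simply insofar as its physical states are correlated with that something else, with no mental states required.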

Thus, information can be seen as being fundamental, and it can be tied to minds or observers too.

What’s more, some readers might have spotted that these passages from Rovelli tie in with Philip Ball’s earlier fundamental question:

“[W]hat else can information possibly be except an expression of what we think we know?”

In other words, these commentators certainly don’t believe that information is (to use the words of Christopher Fuchs again)

“simply some new kind of objective quantity in physics, like energy”.

Saturday, 31 August 2024

Anil Seth: Consciousness ≠ Integrated Information

Integrated information theorists postulate a literal identity between consciousness and integrated information. The British neuroscientist Anil Seth rejects this identity.

The British neuroscientist Anil Seth puts the integrated information theory (IIT) position at its most simple when he tells us that

“on IIT information — *integrated* information, Φ — actually *is* consciousness”.

According to the neuroscientist Giulio Tononi, the mathematical measure of that integrated information (in a system) is symbolised by Φ (phi).
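Very roughly (and glossing over the technical details of Tononi’s formal definition, so this is my gloss rather than a quotation), Φ is meant to capture how much information a system generates as a whole, over and above the information generated by its parts taken separately. Schematically:

Φ ≈ information generated by the whole system − information generated when the system is cut at its minimum-information partition

A system whose behaviour is fully reducible to that of its independent parts has Φ = 0 and, on IIT, no consciousness.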

It can be presumed that many people (at least the ones who think about this issue) would claim that consciousness (to use a metaphor) contains information. That is, a conscious state has (or it instantiates) informational content. However, is consciousness itself information?…

An integrated information theorist may now simply ask:

What is this consciousness which contains information?

Alternatively:

If you take the informational content of consciousness away, then what are you left with?

As just stated, IIT is an identity theory that postulates a literal identity between consciousness and integrated information…

But not so fast!

Giulio Tononi actually believes that consciousness doesn’t equal just any kind of information. However, any kind of information (embodied in a system) may be conscious — at least to some degree.

Of course, we need to know what information actually is…

But, for now, we still can’t simply say that

consciousness = information

Instead, we must say:

consciousness = integrated information

Anil Seth himself makes it clear that integrated information theorists are identity theorists when it comes to consciousness and integrated information.

For example, Seth claims that such theorists treat consciousness “like temperature”, which is “mean molecular kinetic energy”. Thus:

temperature = mean molecular kinetic energy

is syntactically similar to

consciousness = integrated information
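(For reference, the kinetic-theory identity behind the first equation is, for an ideal monatomic gas, ⟨E_kin⟩ = (3/2)·k_B·T, where k_B is Boltzmann’s constant. The claim is that temperature just is mean molecular kinetic energy, not something merely correlated with it. IIT makes the structurally analogous identity claim for consciousness and Φ.)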

Seth doesn’t accept the second identity above. However, he does see the theoretical and experimental importance of information and integration in consciousness studies. He simply doesn’t see their joint instantiation as being equal to, or identical with, consciousness.

Anil Seth as an (Old-Style) Identity Theorist?

Anil Seth also refers to the (old?) identity theory in the philosophy of mind. Or at least he does so tangentially and in passing.

Seth does so when discussing his “preferred philosophical position”, which is “physicalism”. Seth writes:

“This is the idea that the universe is made of physical stuff, and that conscious states are either identical to, or somehow emerge from, particular arrangements of this physical stuff.”

So, according to Seth, physicalists can be identity theorists, or they can embrace emergence (i.e., and still be physicalists).

In detail.

In a video debate with Donald Hoffman (see here), Seth also stresses his newfound interest in emergence and top-down causation. This hints at the fact that he actually opts for the “emerge from” (i.e., rather than the “identical to”) option when it comes to the relation between consciousness and physical stuff. (Can emergent features themselves be identical with physical stuff?)

So Seth not only discusses the literal identity of consciousness and integrated information (which he doesn’t accept), he also hints at an identity of his own between “conscious experiences” and “neural mechanisms”. He writes:

“The key move made by Tononi and Edelman was to propose that if every conscious experience is both informative and unified at the level of phenomenology, *then the neural mechanisms underlying conscious experience should also exhibit both these properties*.”

More explicitly:

“That it is in virtue of expressing both of these properties that neural mechanisms do not merely correlate with, but actually account for, core phenomenological features of every conscious experience.”

What are readers to make of Seth’s claim that “neural mechanisms do not merely correlate with, but actually account for, core phenomenological features of every conscious experience”?

Arguably, this isn’t an explicit expression of an identity relation between conscious experiences and neural mechanisms. After all, Seth does use the words “account for”, and even those who merely stress the neural mechanisms which “correlate” with the “phenomenological features of every conscious experience” might say that the former account for the latter.

So is Seth also arguing that these neural mechanisms actually are the phenomenological features of every conscious experience? In other words, are the latter identical to the former?

That said, various phrases which Seth uses point to the fact that he believes that his own position is not an identity theory… of any kind.

For example, take Seth’s words “neural basis” (i.e., as found in the clause “they made claims about the neural basis of every conscious experience”). Thus, if x is the basis of y, then x and y can’t be one and the same thing. (Seth’s words “underlying mechanisms” may also rule out an explicit identity.)

Giulio Tononi on Integrated Information

We can cite Giulio Tononi again as an example of someone who believes that consciousness (or experience) simply is integrated information. Or, perhaps more accurately, he believes that consciousness is information as it’s processed and integrated by brains and, perhaps, non-biological systems.

Thus, if that’s a statement of identity, then can we invert it and say the following?

information = consciousness

As stated earlier, Tononi actually believes that consciousness doesn’t equal just any kind of information. However, any kind of information (embodied in a system) may be conscious — at least to some degree.

Technically, not only are systems more than their combined parts: those systems have varying degrees of “informational integration”. Thus, the higher the informational integration, the more likely that system will be conscious.

Mere Correlations!

Anil Seth is certainly dissatisfied with (as the popular phrase has it) “mere correlations”. Or, at the very least, he wants to offer his readers more than that. He writes:

“The deeper problem is that *correlations* are not *explanations*. We all know that mere correlation does not establish causation, but it is also true that correlation falls short of explanation.”

Instead, Seth wants something that will close the “explanatory gap between the physical and the phenomenal”.

But does all that mean that Seth is — again — offering us identities?

Seth continues:

“But if we instead move beyond establishing correlations to discover explanations that connect properties of neural mechanisms to properties of subjective experience…, then this gap will narrow and might even disappear entirely.”

The philosopher David Chalmers tackled this issue many years ago (i.e., in 1995). Indeed, he even christened it with a new technical name: “structural coherence”. Chalmers himself wrote:

“This is a principle of coherence between the *structure of consciousness* and the *structure of awareness*.”

Yet, later, Chalmers also notes the problems here:

“This principle reflects the central fact that even though cognitive processes do not conceptually entail facts about conscious experience [and] not all properties of experience are structural properties.”

It also needs to be stressed that Chalmers was noting structural coherences between conscious states and what he calls the “structure of awareness”, not between Seth’s “subjective experience” and “neural mechanisms”.

Despite those differences, we can say that if x is coherent with y, then x and y still can’t be one and the same thing. Thus, again, we don’t have any literal identities here.