Sunday 27 August 2017

Kant, Implication and Conceptual Containment



On an old reading, the statement

A implies B

is taken to be true (or false) because

B “contains” or “involves” something that is also “in” A.

This is the standard Kantian view of implication (or, later, of synonym-based analyticity). However, B can be a consequence of A without B “containing” or “involving” something that's common to A. How, then, would B be a consequence of A? In physical nature, A can cause B without sharing anything with B. Non-physically, B can also be deduced from A without sharing anything with A. If all that's so, how does this deduction or implication actually come about?

(All this hints at both “relevance logic” and “material implication”; as well as at the sharing of “propositional parameters”.)
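
To make that contrast a little more concrete, here is a minimal Python sketch (my own illustration, not anything drawn from the logical literature): material implication counts “A implies B” as true whenever A is false or B is true, regardless of shared content; relevance logics, by contrast, standardly require that A and B share at least one propositional parameter (atom). The formula representation below is an assumption made purely for the example.

# Toy sketch: material implication vs. a crude shared-parameter test.
# Formulas are nested tuples, e.g. ("and", "p", ("not", "q")) - an assumed
# representation. Real relevance logics do far more than check for shared
# atoms; variable-sharing is merely a necessary condition there.

def atoms(formula):
    """Collect the propositional atoms (strings) occurring in a formula."""
    if isinstance(formula, str):
        return {formula}
    _connective, *subformulas = formula
    result = set()
    for sub in subformulas:
        result |= atoms(sub)
    return result

def materially_implies(a_value, b_value):
    """Material implication: false only when A is true and B is false."""
    return (not a_value) or b_value

def shares_parameter(formula_a, formula_b):
    """The relevance-style constraint: A and B share at least one atom."""
    return bool(atoms(formula_a) & atoms(formula_b))

print(materially_implies(False, False))                       # True: a false A makes "A implies B" true
print(shares_parameter("p", "q"))                             # False: nothing in common
print(shares_parameter(("and", "p", "r"), ("or", "q", "p")))  # True: they share p

On the material reading, then, “A implies B” can come out true even though A and B have nothing whatsoever in common; the shared-parameter test is one (very rough) way of cashing out the older “containment” intuition.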

In terms of statemental implication, to imply something means that there's something about the implying statement which somehow contains what is implied. That doesn't really explain the relation between what implies and what is implied. Can there be causal implication, for instance? In what sense is the implied actually in what implies it?

We can also ask what it means to say that “B is contained within A”. Quine accused Kant of speaking at a metaphorical level when talking about “containment”. So what non-metaphorical way do we have of describing what's at issue here? (If A is simply an inscription or “syntactic form”, then of course it can’t contain B – it can’t really contain anything except itself.)

So A will demand content if it's to imply B. In that case, it all depends on what the symbol A stands for. Is it a concept, sentence, statement or a proposition? All these possibilities have content.

If the symbol A stands for the concept [politician], then what content would it have? Can we say that contained within the concept [politician] are the macro-concepts [human being] and [person]; as well as the micro-concepts [professional] and [Member of Parliament]? However, in a certain sense it's quite arbitrary to categorise certain concepts as micro-concepts and others as macro-concepts because that distinction will depend on the context.

However, we can ask within which context we can categorise [politician] as a micro-concept. There's a simple way to decide what is what. We can ask this question.

Is it necessary for a politician to be a person or a human being?

The answer is no. It's not logically necessary; though it is empirically the case. (A robot, computer or alien could be a politician.)

Is it necessary for a human being or person to be a politician?

The answer is: Of course not! In this simple sense the macro-concepts encompass the micro-concepts. Of course there are yet higher levels of concept. For example, [biped] and [animal]. These would include the concepts [human being] and [person]. And there are yet higher-order concepts than that. For example, [living thing] and [organism]. This could go on until we reach the concepts [object], [thing], [entity], [spatiotemporal slice] and so on.
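
One (deliberately crude) way of picturing this macro/micro talk is to treat conceptual containment as reachability in a directed graph of subsumption links. The following Python sketch is only an illustration; the particular links are assumptions of mine, and (as noted above) the link from [politician] to [human being] is empirical rather than logically necessary.

# Toy sketch: concept "containment" as reachability in a subsumption graph.
# The edges below are illustrative assumptions, not analyses of the concepts.

subsumed_by = {
    "politician": ["person", "professional"],
    "person": ["human being"],        # empirically typical, not logically necessary
    "human being": ["animal", "biped"],
    "animal": ["living thing"],
    "living thing": ["entity"],
}

def falls_under(micro, macro):
    """True if macro can be reached from micro by following subsumption links."""
    stack, seen = [micro], set()
    while stack:
        concept = stack.pop()
        if concept == macro:
            return True
        if concept not in seen:
            seen.add(concept)
            stack.extend(subsumed_by.get(concept, []))
    return False

print(falls_under("politician", "entity"))  # True: politician -> person -> ... -> entity
print(falls_under("person", "politician"))  # False: the macro-concept doesn't fall under the micro-concept

Of course, a table like this merely records containment claims; it doesn't explain them - which is precisely the worry raised at the start of this piece.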

If A is taken to be a concept, then it may well have a huge amount of implicit and explicit content. It could imply all sorts of things. However, it's a strange thing to take A as simply something standing for a single concept. It's hard to make sense of a concept all on its own (as it were). We need to fill in the dots ourselves.

If A is a sentence, then things become a little clearer and not as broad-ranging. The sentence may of course include concepts; though such concepts - within a sentential framework - will be more finely delineated and circumscribed. Something will be said about the concepts contained and they may be contextualised.

To say that the concept [politician] implies the concepts [human being] and [person] just sounds strange. In a sense, the bare concept [politician] isn't actually saying anything. The idea of containment must be taken less literally in the case of A standing for a concept than when A stands for statements, sentences, etc. This parallels, to a small extent, Frege's “context principle”.



Tuesday 22 August 2017

Chalmers and the Evolutionary Point of Consciousness



Word Count: 2299

i) Introduction
ii) Evolution: Why Consciousness?
iii) Consciousness is Good for Us
iv) Frank Jackson's Warm and Heavy Coat
v) Neuroscience on the Point of Consciousness
vi) Evolution and Panpsychism
vii) Conclusion


The following piece doesn't tackle David Chalmers' well-discussed and well-known Hard Problem. That is, it doesn't attempt to find an answer to the question:

Why does the physical brain give rise to consciousness?

Instead, it asks why we human beings - and other animals - needed consciousness in the first place. Thus we have this question:

From an evolutionary perspective, if the functions of the brain (which Chalmers often refers to) might have occurred “in the dark”, then why did we need (if we did need) - and why do we still need/have - consciousness?

The nature of the physical-consciousness link is only tangential to this issue. 

Evolution: Why Consciousness?

Is it really possible that experience/consciousness is truly gratuitous from an evolutionary – or from any – point of view?

Thus perhaps the obvious answer to David Chalmers' question

“Why is the performance of these functions accompanied by experience?”

is that experience (or consciousness) may - or does - contribute to these functions. It adds something.

The most important point against the (possibly) adaptive nature of consciousness (i.e., how it may “increase fitness”) is an idea – often stressed by Chalmers himself - that consciousness doesn't (or may not) add anything to the brain functions which underpin it. In other words, such brain functions may achieve the same advantages for survival even if higher organisms didn't have consciousness. This possibility clearly ties in with Chalmers' fixation on zombies. That is, zombies have the same physical brains and mental functions as conscious human beings. However, they're also “devoid of mentality”.

So does consciousness or experience give us an evolutionary advantage? Or is consciousness/experience simply a redundant byproduct of other things which did indeed give us an advantage? (Stephen Jay Gould called this kind of phenomenon a “spandrel” - an unintended byproduct of something else.) Can we even conceive of the possibility that something as useful, immediate and particular as consciousness is simply a byproduct of something else?

We can.

Perhaps we shouldn't see the divide between x's being advantageous and x's being disadvantageous in such absolute terms. After all, it's also been said that the brain's big size and weight weren't (really?) conducive to human survival. (More precisely, the human brain is, relative to body size, much bigger than the brains of most other mammals.) Nonetheless, a rich consciousness and a big set of cognitive skills are also a result of a big and heavy brain.

In addition, if consciousness is an accidental byproduct (evolutionary products, strictly speaking, are also accidental) of other features which were indeed selected for by evolution, then perhaps it can't be an adaptation either. Nonetheless, it may be what's called an “exaptive” phenomenon. Consciousness might have been an exaptation of other things which were indeed selected for by evolution. As just stated, the brain's size and physical nature were selected for by evolution. And one consequence of brain size and its physical/chemical arrangement was consciousness/experience.

Despite all this talk of consciousness being a byproduct of something else (as well as talk of consciousness being pointless - if only in evolutionary terms), we can also take a very different position on all this, as Peter Carruthers does.

It's not surprising that Peter Carruthers (as a philosopher) should explicitly say that consciousness itself is conducive to survival, and that it is so in a strictly philosophical sense. Carruthers has argued that consciousness allows us – and other beings – to "distinguish appearance from reality". Now what could be more conducive to survival than that useful philosophical skill?

We can give a quick and easy example of this appearance-reality problem for survival.

Take the case of a very hungry creature which was able to work out whether or not the water it sees in the distance is a mirage. If this creature hadn't had this evolutionary advantage, then it might have wasted valuable time and energy traipsing towards non-existent water. It might even have died in the process.

Let's get back to Carruthers' philosophical Sahelanthropus man and other philosophical creatures. Many philosophers have also said that “accurate representations” (Richard Rorty) - and even truth itself - were irrelevant when it came to survival. Though surely this can't apply to Carruthers' mirage example!

Consciousness is Good for Us

It can be said that not all the brain's information-processing “goes on in the dark” because if it did so, then we wouldn't have the added advantages that experience (or consciousness) gives to these examples of information-processing. That means that Chalmers' “inner feel”, for example, also contributes because it too is a property of experience/consciousness.

The fact still seems to be that if all of this could occur in the dark, then we need to explain why it doesn't. Though, as already stated, what if what occurs in the dark is at a lesser (evolutionary) level than what occurs in the light? Indeed isn't that obviously the case? With, say, microorganisms or even spiders, everything does (or may) occur in the dark. With adult human beings, that's simply not the case. And that's why we live and experience at a higher evolutionary level than microorganisms or spiders.

Despite the above, Chalmers himself writes:

“This is not to say that experience has no function. Perhaps it will turn out to play an important cognitive role.”

I would say that it obviously does; especially from an evolutionary perspective. There are indeed epiphenomenal attributes (or hangers-on) when it comes to evolution. However, why should we believe that consciousness/experience itself fits that bill? I don't see how experience could be like, say, a human's little toe: i.e., something that serves no purpose. Having said that, just as all human beings have experiences, so too do all humans have little toes.

Frank Jackson's Warm and Heavy Coat

As we've seen, it's certainly true that consciousness could be an adjunct to physical features which were “conducive to survival” - without consciousness itself being conducive to survival.

This is Frank Jackson (in his well-known paper 'Epiphenomenal Qualia') on the subject of bears.

“The Theory of Evolution explains this (we suppose) by pointing out that [bears] having a thick, warm coat is conducive to survival in the Arctic. But having a thick coat goes along with having a heavy coat, and having a heavy coat is not conducive to survival. It slows the animal down.

“… Having a heavy coat is an unavoidable concomitant of having
a warm coat (in the context, modern insulation was not available), and the advantages for survival of having a warm coat outweighed the disadvantages of having a heavy one. The point is that all we can extract from Darwin's theory is that we should expect any evolved characteristic to be either conducive to survival or a by-product of one that is so conducive.”

There are of course many other things which are deemed to be byproducts of evolution. Perhaps a more relevant example is the blind spot in the retina. In this case, the blind spot wasn't an adaptation of the retina: it was simply a byproduct of the way the retinal axons were/are wired.

There are many other positions on the issue of consciousness being a byproduct of something else. Steven Pinker (in his How the Mind Works), for example, argues that consciousness is a byproduct of our “evolved problem-solving abilities”.

So clearly consciousness might have been a byproduct of something else. However, it's certainly not in the same ballpark as Frank Jackson's warm-and-heavy coat. A heavy coat - in and of itself - was disadvantageous for a bear. A warm coat wasn't. When it comes to consciousness, we can say that whatever was responsible for consciousness - and of which consciousness is a byproduct - was conducive to survival. However, it might also have been the case that the byproduct itself - consciousness - was conducive to survival.

To return to Frank Jackson's warm-and-heavy coat example: Jackson concluded the passage quoted above by tying all this to qualia. He wrote:

“The epiphenomenalist holds that qualia fall into the latter category [i.e., an epiphenomenal byproduct of evolution].”

That is, qualia are (or may be) a byproduct of evolution. It also follows from this that they are – or may be - epiphenomenal. However, that doesn't necessarily follow. Something that's an evolutionary byproduct needn't also be epiphenomenal. Or, in our case, it needn't also be non-conducive to survival. Sure, there's a mountain of philosophical arguments which have stated that Jackson's qualia - and even consciousness itself - are epiphenomenal; though this particular argument-from-evolution doesn't establish that.

Neuroscience on the Point of Consciousness

It may help to give some neuroscientific examples of this nonconscious/conscious opposition (or distinction).

We can say that actions (or movements) which are related to reflexes, vegetative functions, low-level perceptual analyses, unconscious motor programs, etc. all occur at the non-conscious level. However, clearly there are also things which are products of consciousness: perception, language, cognition, integration, memory, etc. However, even some of these examples could still occur without consciousness. Thus we're left with two conclusions:

i) Nonconscious processes occur.
ii) There are conscious processes; though they could be - or might have been - nonconscious processes.

A more down-to-earth – or neuroscientific – explanation of the adaptive point of consciousness is the position that conscious states integrate neuronal activities and processes. This has been called “the integration consensus”. A more technical variant on this has been offered by Gerald Edelman. His “dynamic core hypothesis” is about the “reentrant connections” which link different areas of the brain in a “massively parallel manner”.

Nonetheless, the autonomy-from-consciousness problem arises here too. It's certainly the case that there are kinds of information that are integrated without also being conscious. However, simply because there is such nonconscious integration, that doesn't automatically mean that conscious integration is ruled out of the picture. Clearly the two can stand side by side. It's still the case that nonconscious processing creates genuine problems for both the philosopher and the scientist. Yet, like the possibility of zombies, these problems don't - in and of themselves - logically or metaphysically rule out consciousness.

Thus it's certainly a fact that we don't always know which kinds of information are integrated by consciousness and which aren't. However, that doesn't have an impact on consciousness itself or even on its role in helping us survive. 

Yes, there are indeed nonconscious functions or processes. So what?...

Evolution and Panpsychism

Chalmers himself now fully accepts (I believe) some form of panpsychism. 

The panpsychist position doesn't have it that consciousness/experience evolved from something else. It's been with us (or with the universe) since the very beginning - perhaps since just after the Big Bang. Thus the advantageous/not-advantageous-to-survival distinction doesn't seem to be that relevant in the case of panpsychism.

Of course it can also be said that even if panpsychism is true (and experience/consciousness has always been with us), it may still be the case that experience wasn't advantageous to survival. That is, panpsychism doesn't seem to tie in very well with evolutionary theory; at least in this respect. If experience has always been with us (or with the universe), then it can't have evolved to give us an evolutionary advantage.

Though is it that simple?

Even if experience has always been with us, it might still have been the case that evolution got to work (as it were) on it. What I mean by this is that evolution changes biological matter. According to panpsychism, all the parts of biological – as well as inanimate! - entities have consciousness/experience. However, evolution impacted on the arrangements of biological entities so as to make some arrangements better equipped to survive than others. Thus those pockets of biological matter which resulted would still have included phenomenal properties - all the way down. Yet it might also have been the case that the new arrangements of matter (courtesy of evolution) brought phenomenally-basic entities into new alignments which - being more complex biologically and therefore phenomenally - made them better equipped to survive.

Thus, depending on how we look at this, there are two possible responses to this panpsychism-evolution situation:

i) If phenomenal properties have always been with us, then that fact alone is hard to square with evolution.
ii) Even if experience has always been with us, there's nothing to rule out the possibility that evolution itself might have had an impact on phenomenal properties (non-teleologically, of course) by making them parts of more complex biological entities. This would have made the aforesaid biological entities more phenomenally complex too.

Conclusion

To repeat. 

It's certainly possible that all the brain functions and processes (which David Chalmers often refers to) might well have occurred - and could still occur - “in the dark”. However, not all of them do occur in the dark. (As Philip Goff puts it: Consciousness is a primary "datum in its own right"; which - by definition - can't be ignored.)

As to how experience/consciousness is added to - or comes from - these functions (or from the brain itself), then that seems to be a different question. And it is indeed problematic and difficult.

However, if the evolutionary approach sketched above is broadly correct, then Chalmers' Hard Problem doesn't (or perhaps shouldn't) bring forth the following why-question:

Why does the physical brain give rise to consciousness?

Instead it simply raises (or perhaps should raise) the following how-question:

How does the physical brain give rise to consciousness?


Friday 18 August 2017

Chalmers: Structure + Phenomenal Properties = Consciousness






David Chalmers puts the structuralist position (in the philosophy of physics) when he writes:


“[I]t is often noted that physics characterizes its basic entities only extrinsically, in terms of their relations to other entities, which are themselves characterized extrinsically, and so on.”

However, he adds a vital conclusion to all that. Namely: “The intrinsic nature of physical entities is left aside.”

At an intuitive level, Chalmers seems to be bang on. Relations - so it's said - must depend on relata. Indeed the very word “relations” is surely relative to things or to the phenomena which sustain the relations. (There's tons of stuff on this issue.)

There are two further reasons for this fixation on relations or structure.


i) The words “intrinsic nature” have come to be almost synonymous with Kant's word “noumenon” (at least in certain contexts). According to Kant, noumena can't be known, experienced or even - in certain respects - commented upon. That may mean that an intrinsic nature/property may also be like Wittgenstein's “wheel that can be turned though nothing else moves”. That is, it may be “a difference which doesn't make a difference”.

ii) Intrinsic properties/natures simply don't exist. (That - I think - is the ontic structural realism position.)

James Ladyman and Don Ross, for example, quote Frank Jackson saying that “we know next to nothing about the intrinsic nature of the world”. Indeed we “know only its causal cum relational nature”.

Yet ontic structural realists also acknowledge the appeal of noumena. Ladyman and Ross write:


“[A]n epistemic structural realist may insist in a Kantian spirit... there being such objects is a necessary condition for our empirical knowledge of the world.”

You can sum this up with a simple Kantian question:


If there are no noumenal objects (which ground our representations, models, theories, etc.), then what's it all about?

If our representations, models, theories, "posited objects", etc. don't somehow represent objects, properties, conditions, etc. (or if we don't have any “noumenal grounding” in the first place), then surely we'd have precisely nothing. As Ladyman and Ross put it (almost quoting Kant word-for-word):


“[T]here being such objects is a necessary condition for our empirical knowledge of the world.”

Or as Chalmers himself puts it:


“[O]ne is left with a world that is pure causal flux (a pure flow of information) with no properties for the causation to relate.”

So, again, we certainly don't mirror nature or objects. We may not even represent nature/objects; though we must capture something.

This is where Ladyman and Ross reply: Yes, we capture structure. That's why they draw what can be seen as the obvious conclusion when they write:


“[W]e shall argue that in the light of contemporary physics... that talk of unknowable intrinsic natures and individuals is idle and has no justified place in metaphysics. This is the sense in which our view is eliminative...”

In a certain sense, even Kant realised that his noumena were “idle”. Yet he also believed that noumena are necessary in terms of metaphysics and important in terms of human "reason". In his Critique of Pure Reason, Kant wrote:


“[T]he concept of a noumenon is necessary, to prevent sensible intuition from being extended to things in themselves, and thus to limit the objective validity of sensible knowledge.”

To put the case very simply. There are two main positions which one can adopt here:


i) There are indeed objects (or noumena); though we can never have access to them.

ii) If we can't have access to objects as they are “in themselves”, then why not drop such objects completely from the picture?

It can be said that ii) follows from i); though it can't be said that ii) follows logically from i).

Chalmers also states the structuralist conclusion – just mentioned - when he says that “[s]ome argue that no such intrinsic properties exist”. However, Chalmers isn't happy with that. He states that if the structuralist position is correct,


“then one is left with a world that is pure causal flux (a pure flow of information) with no properties for the causation to relate”.

Prima facie, that seems to be correct. However, there's also a response to that.

Two positions against intrinsic properties/natures have only just been highlighted above. They're relevant here too. Thus:


i) There may be something that accounts for the “causal flux” which Chalmers mentions. Though it's noumenal. Mathematical physics, on the other hand, can only describe and explain that causal flux in terms of structures, models, relations, etc.


ii) Nothing underpins the causal flux.

As just stated, Chalmers accepts what he - and many other philosophers - call “intrinsic properties”. (Or at the very least he accepts the possibility that such properties exist.) However, that's just the beginning. He also writes:


“If one allows that intrinsic properties exist, a natural speculation given the above is that the intrinsic properties of the physical—the properties that causation ultimately relates—are themselves phenomenal properties. We might say that phenomenal properties are the internal aspect of information.”

Of course it's a huge jump from the acceptance of intrinsic properties to seeing those properties as being phenomenal. However, because Chalmers ties the phenomenal to information, then perhaps that jump isn't quite so huge. On certain readings of the word “information”, it's almost by definition true that physical properties (or structures) will contain information. Thus, on Chalmers' own reading, if they contain - or are - information, then they must be phenomenal in nature.

Chalmers further augments his case by talking about causation. He writes:


“This could answer a concern about the causal relevance of experience — a natural worry, given a picture on which the physical domain is causally closed, and on which experience is supplementary to the physical. The informational view allows us to understand how experience might have a subtle kind of causal relevance in virtue of its status as the intrinsic nature of the physical.”

If physical intrinsic properties are informational, then that gives the phenomenal a causal role. That would indeed solve the problem of the position that “the physical domain is causally closed”. Such phenomenal (or intrinsic) properties would be causal and therefore part of the “physical domain”. More relevantly, experience (or consciousness) would be part of the physical domain.

Nonetheless, Chalmers doesn't commit himself entirely to these “suggestive” speculations. He finishes off by saying that this


“metaphysical speculation is probably best ignored for the purposes of developing a scientific theory, but in addressing some philosophical issues it is quite suggestive”.

At least Chalmers confesses that all this stuff is “speculation”. Yet speculation isn't automatically a bad thing. It's the lifeblood of much physics and also plays an important part in the other sciences.

Thus it's worth asking Chalmers exactly why he believes that these speculations are “probably best ignored for the purposes of developing a scientific theory”. The answer is probably that there's no experimental or observational evidence for them. Indeed, intrinsic properties are by definition beyond observation and experimentation, and therefore also beyond the domain of physics. (Though what of the just-mentioned vital role of speculation in physics?) That's why such properties are rejected by ontic structural realists and other kinds of structuralist in the philosophy of physics.

Example: The Structure of Silicon and Neurons

When Chalmers compares silicon chips to neurons, the result is very much like a structuralist position in the philosophy of physics; though one applied to experience/consciousness and the brain.

What matters isn't entities: it's (their?) “patterns of interaction”. They create “causal patterns”. Thus neurons have causal interrelations. Silicon chips suitably connected to each other also have causal interactions and interrelations. Perhaps both these sets of causal interactions and interrelations can be the same. That is, they can be structurally the same; though the physical things which bring them about are different (i.e., one is biological, the other non-biological).

Though what if the material “substrate” does matter? If it does, then we'd need to know why it matters. If it doesn't, then we'd also need to know why.

Biological matter is certainly very complex. Silicon chips aren't that complex. (Or are they?) Remember here that we're matching individual silicon chips with individual neurons: not millions or billions of silicon chips with the billions of neurons of the entire brain. However, neurons, when taken individually, are also highly complicated. Individual silicon chips are much less so. However, all this rests on the assumption that complexity - or even maximal complexity - matters to this issue. Clearly in the thermostat or single-celled organism case at least, complexity isn't a fundamental issue or problem. Simple things exhibit causal structures and causal processes; which, in turn, determine both information and - according to Chalmers - very simple phenomenal experience.

David Lewis on Intrinsic Properties




It can be said that David Lewis's picture is very different to the various structuralist positions found in the philosophy of physics. In Lewis's view, not only are things (or “space-time points”) paramount, so are their “intrinsic properties”. This may square well with David Chalmers' position not only on the ontological existence of the phenomenal/experience; but also on its vital importance and omnipresence.

In any case, some metaphysicians tell us that there's a difference between properties which objects have independently of any external factors acting upon them (i.e., intrinsic properties) and properties which an object must have in order to be the very thing it is (i.e., essential properties). In Chalmers's scheme, intrinsic properties are very relevant precisely because they're phenomenal and therefore may/do have a role to play – at least in human consciousness (if not in atoms or thermostats).

David Lewis also defended intrinsic properties in a way which makes them eminently unsuitable for structuralist accounts of phenomena. His position also helps the panpsychist - or Chalmers' - case.

Lewis also cites “internal structure” as being important to objects. Yet if entities are defined in terms of their relations, then surely they must also be (partly) defined in terms of extrinsic properties. Thus Lewis's internal structures may also need to be (partly) defined - or even constituted - by external relations or extrinsic properties. It can now be asked what would be the point of a Lewisian internal structure if it's primarily a crutch (or framework) for intrinsic properties which have no such relations to external factors.

It can also be said that internal structures determine relations and therefore also determine extrinsic properties. Then again, it can equally be said that external relations (or extrinsic properties) determine internal structures (or intrinsic properties). Here again the boundaries between what's intrinsic and what's extrinsic seem to be somewhat blurred.

Finally, there may not be a philosophical - or even a scientific - clash between the existence of intrinsic properties and the accounts of phenomena in terms of structure. This would work well for Chalmers' intrinsic phenomenal properties.

Thus what about the category of “intrinsic relations”?

Intrinsic relations are said to determine - or even constitute - objects. In other words, they're fundamental to the objects which have them. Thus, in terms of consciousness or experience, it's the brain which “has” or contains these phenomenal and intrinsic relations. Experience/consciousness may therefore be a fundamental property; rather than an emergent (or a non-physical) property.

Wednesday 9 August 2017

Do philosophy students use more drugs than other students?



The Guardian journalist, Stuart Jeffries, opened his piece ('Why philosophy students do the most drugs') with these words:

“The Tab's [“... a university news network run by students who like being first”] survey of more than 5,000 students at 21 British universities reveals that 87% of philosophers polled had taken drugs, compared with 57% of medical students.”

Isn't this just a case of philosophy students compared to a single other group of students – medical students? So what about, say, drama, politics or sociology students? Who knows, perhaps this journalist chose medical students precisely because their figure for drug-use is so low. Nonetheless, the article's sub-heading does state the following:

“Nearly 90% of them [philosophy students] have taken drugs, a higher proportion than in any other discipline, according to a poll of 21 UK universities.”

However, these figures aren't actually backed up in the article itself. This isn't helped by the fact that the Guardian link to The Tab article/survey is dead. And the only relevant article (from The Tab) I could find is called 'Revealed: Which uni takes the most drugs'.

Take this jump from the word “philosophers” (presumably he means philosophy students) to the word “students”:

“Perhaps James's drug experimenting is inspiring today's philosophers: 45% of students polled claimed to have taken laughing gas. Or perhaps not – 68% had taken cannabis.”

Mr Jeffries also tells us that an Oxford maths student took MDMA, ketamine and laughing gas, and said: "I thought I was Godzilla." So what? That could have been true of any student and indeed of any young person.

Jeffries himself says that the research on this is rubbish (not his own word). For example, he writes:

“Until a cross-referencing of which types of students favour what kind of drugs, we are lost in a world of diverting speculation.”

The writer also comments on the “discrepancy” of the research in that

“The Tab's editors, sensibly, say the survey should be taken with a pinch of salt since respondents are self-selecting”.

Though he doesn't tell us why he personally chose to compare philosophy students to medical students. He writes:

“Why this discrepancy? Is it because philosophy is easier than medicine and thus offers more recreational downtime? Really? Is grasping the Kantian noumenon less demanding than dissecting corpses?”

He does compare students with each other; though not with philosophy students (with the exception, as I said, of medical students). In addition, no figures are given. For example, he asks: “Why would a higher proportion of business administration students than lawyers claim to be drug users?”

So why did Jeffries single out philosophy students at all? There's no clear evidence in this piece why he should have done so. (Other than the possible fact that he may be vaguely interested in philosophy.)

Being a journalist, Stuart Jeffries also tries too hard to be hip and funny. For example, what the hell does the following mean? -

“Another theory is that philosophy – more than any other intellectual discipline (with the possible exception of a level three plumbing NVQ) – requires one to recalibrate the portals of one's consciousness in order to get one's intellectual freak on.”

Sorry for being a straight; though I can make neither head nor tail of that.

This is bad journalism and even worse philosophy. He also writes:

“Perhaps James's drug experimenting is inspiring today's philosophers: 45% of students polled claimed to have taken laughing gas.”

Really? I never knew that William James took nitrous oxide [1]. I doubt that many students know that either – except for William James aficionados. However, James did report this - according to Stuart Jeffries - in his The Varieties of Religious Experience. Indeed this drug helped "stimulate the mystical consciousness to an extraordinary degree".

And why would a single philosopher taking drugs inspire any student of philosophy anyway? If William James had eaten shit, would that inspire philosophy students to also eat shit? (I have to admit, however, that shit isn't "psychoactive" - at least as far as I know.)

Of course, because Jeffries is talking about philosophy, he simply must mention ancient Greece. And, yes, of course, there's a drug link here. He tells us that the

“Hellenistic philosopher Epicurus used tetrapharmakos to designate the four-part means of leading the happiest possible life”.

Perhaps many other educated (or well-off) Greeks also took tetrapharmakos or other mind-stimulating drugs. Nonetheless, this writer ties taking tetrapharmakos with ataraxia (initially a largely Epicurean term) - which is “freedom from worry and distress”.

Again, Stuart Jeffries can't have been serious when he wrote

“if you want to understand Hegel or know what it's like to be a bat or Godzilla, try laughing gas”.

Was that statement just a joke without philosophical or even psychological content? If yes, then I may be wrong to get on my high horse. Perhaps the entire article is a joke. Indeed Stuart Jeffries might have written it while under the influence of nitrous oxide.

******************************************

I myself have taken (psychoactive) drugs. I came to realise that what I produced under the influence of such drugs seemed to be good – or sometimes even great/original – at the time; though rarely the morning after!

With cannabis and LSD/mushrooms, I found I could improvise easily and write poems and philosophise. However, I couldn't be bothered with form and construction. It was quite literally all “free improvisation” (i.e., not just when it came to music). Sometimes it was okay. Often it wasn't. Under the influence I didn't have the discipline to compose a musical piece on paper. I never wrote a poem or a philosophy piece which was both structured and detailed either. Nonetheless, I'm still sure - even today - that I had some insights. Drugs also helped me “look at things in a different way”. Again, sometimes these insights were exaggerated while under the influence; though that wasn't always the case.

As for amphetamines, they proved to be the most constructive or effective for me personally. That was in the simple sense that when I was under the influence of amphetamines I could indeed be bothered with form and detail. In other words, I could concentrate on a single issue (or subject) and analyse it.

In terms of musical improvisation on LSD/mushrooms and cannabis, you can achieve a heightened state of both relaxation and concentration which is rare in a sober state. Sometimes that resulted in decent work. Often it didn't. (That concentration is less analytical than it is under amphetamines and is thus more akin - I suppose! - to a “spiritual state”.)

As ever with drug-use, it depends on the person who's taking the drugs as well as on the kind of drug taken. If the drug-user is already musically literate, artistic, mathematically inclined or whatever, then he may well produce good stuff within these domains. If he isn't, then he'll probably come up with crap.


Note:

[1] I found these little gems in Bill Bryson's A Short History of Nearly Everything:

“In the early 1800s there arose in England a fashion for inhaling nitrous oxide, or laughing gas, after it was discovered that its use “was attended by a highly pleasurable thrilling.” For the next half century it would be the drug of choice for young people. One learned body, the Askesian Society, was for a time devoted to little else. Theaters put on “laughing gas evenings” where volunteers could refresh themselves with a robust inhalation and then entertain the audience with their comical staggerings.”

And later in the same book:

“Humphry Davy [the professor of chemistry] discovered a dozen elements, a fifth of the known total of his day. Davy might have done far more, but unfortunately as a young man he developed an abiding attachment to the buoyant pleasures of nitrous oxide. He grew so attached to the gas that he drew on it (literally) three or four times a day. Eventually, in 1829, it is thought to have killed him.”









Friday 4 August 2017

David Chalmers' Thermostat and its Experiences



[The words "experience" and "consciousness" are used interchangeably in this piece; even though they aren't synonyms.]


**************************

David Chalmers says that “information is everywhere”. Is that really the case?

As some linguists (or pedants?) have said: “If everyone is brave, then no one is brave.” The point being made here is that a term only makes sense if it can be distinguished from non-examples. However, that example involves an adjective (“brave”) applied to human persons. The word “information” is a noun. So saying “information is everywhere” is roughly equivalent to saying “dust is everywhere”.

Information is surely a characteristic of things, events, conditions, etc: rather than a thing in itself. However, none of this may matter. A prima facie problem with the omnipresence of information may fade away on seeing what David Chalmers - and other “information theorists” - have to say about information.

In addition: if, as Chalmers argues, we should take

“experience itself as a fundamental feature of the world, alongside mass, charge, and space-time”

then, by definition, experience can't be exclusive to humans or animals generally. Something that's a “fundamental feature of the world” must literally be everywhere; just as Chalmers says about “mass, charge, and space-time”.

This means that Chalmers' linkage of experience to information is thoroughly non-biological.

Chalmers also links experience - therefore information - to thermostats. A thermostat isn't alive; yet it can still be seen as a (to use Chalmers' words) “maximally-simple” information system.

Scott Aaronson (referring to Integrated Information Theory), for one, states one problem with the experience-is-everywhere idea in the following passage:

“[IIT] unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly ‘conscious’ at all: indeed, systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data. Moreover, IIT predicts not merely that these systems are ‘slightly’ conscious (which would be fine), but that they can be unboundedly more conscious than humans are.”

Here again it probably needs to be stated that if experience/consciousness = information (or if information – sometimes? - equals experience/consciousness), then experience/consciousness must indeed be everywhere.

However, there's the remaining question: 


Is it the case that information actually is experience or is it that information brings about experience? 

If it's the latter, then we'll simply repeat all the problems we have with both the emergence of one thing from another thing and the reduction of one thing to another thing.

There's also a hint of this problem when Chalmers asks “[w]hy should this sort of processing be responsible for experience?” Here Chalmers uses the word “responsible” (as in “responsible for experience”). In other words, first we have processing: then we have experience. So it seems - in this context at least - that processing isn't the same thing as experience. (It is responsible for experience.) And if processing is responsible for experience, so is information. Thus information and experience can't be the same thing.

This may simply be, however, a grammatical fact in that even if information is experience, it can still be grammatically correct to say that “information is responsible for experience”.

What is Information?

The word 'information' has many different uses; some of which differ strongly from its everyday uses. Indeed we can use the words of Claude E. Shannon to back this up. He wrote:

"It is hardly to be expected that a single concept of information would satisfactorily account for the numerous possible applications of this general field."

The most important point to realise is that minds (or observers) are usually thought to be required to make information information. However, information is also said to exist without minds/observers. Some philosophers and physicists argue that information existed before human minds; and it will also exist after human minds disappear from the universe. This, of course, raises lots of philosophical and semantic questions.

It may help to compare information with knowledge. The latter requires a person, mind or observer. The former (as just stated) may not.

If we move away from David Chalmers, we can cite Giulio Tononi as another example of someone who believes that consciousness/experience simply is information. Thus, if that's an identity statement, then we can invert it and say that


information is (=) consciousness.

Consciousness doesn't equal just any kind of information; though any kind of information (embodied in a system) may be conscious (at least to some extent).

Indeed, according to Tononi, the mathematical measure of that information (in an informational system) is φ (phi). Not only are systems more than their parts: those systems have various degrees of "informational integration". The higher the informational integration, the more likely that informational system will be conscious. Or, alternatively, the higher the degree of integration, the higher the degree of consciousness.
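
Tononi's φ is defined over a system's cause-effect structure and all of its partitions, and nothing here reproduces that machinery. But the underlying intuition - a system is “integrated” to the extent that its parts constrain one another - can be given a crude feel with mutual information between two halves of a toy system. The joint distributions in this Python sketch are assumptions for illustration only, and the proxy is emphatically not IIT's φ:

import math
from collections import defaultdict

def mutual_information(joint):
    """I(X;Y) in bits, from a dict mapping (x, y) state pairs to probabilities."""
    px, py = defaultdict(float), defaultdict(float)
    for (x, y), p in joint.items():
        px[x] += p
        py[y] += p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Two parts that always agree are maximally "integrated" in this crude sense;
# two statistically independent parts are not integrated at all.
coupled = {(0, 0): 0.5, (1, 1): 0.5}
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
print(mutual_information(coupled))      # 1.0
print(mutual_information(independent))  # 0.0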

Integrated Information Theory (IIT) isn't only close to Chalmers' view when it comes to information equaling experience: Tononi is also committed to a form (there are many forms) of panpsychism.

The problem (if it is a problem) with arguing that consciousness/experience is information, and that information is everywhere, is that (as has just been said) even basic objects (or systems) have a degree of information. Therefore such basic things (or systems) must also have a degree of consciousness. Or, in IIT speak, all such things (systems) have a “φ value”; which is the measure of the degree of information (therefore consciousness) in the system. Thus Chalmers' thermostat may also have a degree of experience. (Or, for Chalmers, “proto-experience”.)

Clearly we've entered the territory of panpsychism here. Not surprisingly, Tononi is happy with panpsychism; even if his position isn't identical to Chalmers' panprotopsychism.

Interestingly enough, David Chalmers - in one paper at least - doesn't really tell us what information is or what he means by the word “information”. He does tell us, however, that “information is everywhere”. He also tells us about “complex information processing” and “simpler information-processing”. I suppose that in the case of a thermostat, we can guess what information is. Basically, heat and cold are information. Though are heat and cold information for the thermostat? Indeed does that matter? Or is it the case that the actions which are carried out on the heat or cold (by the thermostat) constitute information? Or, perhaps more likely, is it the physical nature (its mechanical and physical innards) of a thermostat that constitutes its information?

To slightly change the subject for a second.

John Searle has a problem with the overuse of the word “computation”. He cites the example of a window as a (to use Chalmers' words again) “maximally-simple” computer. Searle writes:

“... the window in front of me is a very simple computer. Window open = 1, window closed = 0. That is, if we accept Turing’s definition according to which anything to which you can assign a 0 and a 1 is a computer, then the window is a simple and trivial computer.”

Searle's basic point is that just about anything can be seen as a computer. Indeed computers are everywhere – just like Chalmers' experience. Does this tie in with Chalmers' position on information and maximally-simple information-processing?

In other words, does a window contain information? By that I don't mean the information that may exist in a window's material and mechanical structure. (According to many, a window - being a physical thing - must contain information.) I mean to ask whether or not a window - like a thermostat - has information qua a technological device which is designed to be both opened and shut.

Searle will of course conclude that this is an example of information-for-us.

Searle also has something to say about information (not just computers). He writes:

“[Koch] is not saying that information causes consciousness; he is saying that certain information just is consciousness, and because information is everywhere, consciousness is everywhere.”

This appears to be the same as Chalmers' position. Needless to say, Searle has a problem. He concludes:

"I think that if you analyze this carefully, you will see that the view is incoherent. Consciousness is independent of an observer. I am conscious no matter what anybody thinks. But information is typically relative to observers...

...These sentences, for example, make sense only relative to our capacity to interpret them. So you can’t explain consciousness by saying it consists of information, because information exists only relative to consciousness.”

As for thermostats, Searle has something to say on them too. He writes:

"I say about my thermostat that it perceives changes in the temperature; I say of my carburettor that it knows when to enrich the mixture; and I say of my computer that its memory is bigger than the memory of the computer I had last year."

This is a Searlian way (as with Dennett) of taking an “intentional stance” towards thermostats. We can treat them - or take them - as intentional (though inanimate) objects. Or we can take them as as-if intentional objects.

The as-if-ness of windows and thermostats is derived from the fact that these inanimate objects have been designed to perceive, know and act. Though this is only as-if perception, as-if knowledge, and as-if action. (Indeed it's only as-if information.) Such things are dependent on human perception, human knowledge, and human action. Perception, knowledge and action require real - or intrinsic - intentionality: not as-if intentionality. Thermostats and windows have a degree of as-if intentionality, derived from (our) intrinsic intentionality. However, according to Searle, despite all these qualifications, as-if intentionality is still ‘real’ intentionality; though it's derived from actual/real intentionality.

To get back to Searle's position on information.

For one, it's certainly the case that some – or even many – physicists and mathematicians don't see information in Searle's strictly philosophical or semantic way. In addition, Integrated Information Theory's use of the word 'information' also receives much support in contemporary physics. This support includes how such things as particles and fields are seen in informational terms. As for thermodynamics: if there's an event which affects a dynamic system, then that too can be read as informational input into the system.

Indeed in the field called pancomputationalism, (just about) anything can be deemed to be information. In these cases, that information could be represented or modeled as also being a computational system.

Information may well become information-for-us to such physicists. However, it's still information before it becomes information-for-us.

Perhaps all this boils down to the definition of the word 'information'. The way that some physicists define the word will make it the case that, in Searle's terms, information need not be "observer-relative". On Searle's definition, on the other hand, the word 'information' is defined to make it the case that information must be – or always is – relative to persons (or minds).

Is there anything more to this dispute than rival definitions? Perhaps not. However, in one sense there must be one vital distinction to be made. If information also equals experience, then information not being dependent on human beings makes a big difference. It means that such information is information - and therefore experience - regardless of what we observe or think. However, this is the panpsychist's view; and the physicists just mentioned (those who accept that information need not be observer-relative) don't necessarily also accept that information is the same as experience. Indeed I suspect that most physicists don't believe that.

Thus we now have three positions:

i) Information is relative to observers. (Searle's position.)
ii) Information exists regardless of observers; though it isn't equal to experience. (The position of some physicists and philosophers.)
iii) Information exists regardless of observers and it is also equal to experience. (Chalmers' position.)

A Thermostat and its Experiences

Firstly, let me offer Wikipedia's definition of a thermostat:

“A thermostat is a component which senses the temperature of a system so that the system's temperature is maintained near a desired setpoint...

A thermostat exerts control by switching heating or cooling devices on or off, or by regulating the flow of a heat transfer fluid as needed, to maintain the correct temperature...”

What does Chalmers himself mean by the word 'information' when it comes - specifically - to a thermostat? He writes:

“Both [thermostats and connectionist models] take an input, perform a quick and easy nonlinear transformation on it, and produce an output.”

As previously stated, in terms of the thermostat at least, information is information-for-us; not information for the thermostat itself. After all, thermostats respond to temperature because we've designed them to do so. Nonetheless, whatever a thermostat is doing (even if by design), it's still doing it. That is, the thermostat is acting on information. When it's hot, it does one thing. And when it's cold, it does another thing.
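
Read this way, the whole informational life of a thermostat can be written down in a few lines of Python: a continuous input, a simple nonlinear (threshold) transformation, a binary output. The setpoint and hysteresis band below are arbitrary values assumed purely for illustration:

# Toy sketch of a "maximally simple" information processor:
# one continuous input, a nonlinear threshold, one binary output.

def thermostat(temperature_c, heating_is_on, setpoint_c=20.0, band_c=0.5):
    """Return True if the heating should be on, False otherwise."""
    if temperature_c < setpoint_c - band_c:
        return True                  # too cold: switch (or keep) the heating on
    if temperature_c > setpoint_c + band_c:
        return False                 # too warm: switch (or keep) the heating off
    return heating_is_on             # inside the band: leave things as they are

state = False
for reading in [18.0, 19.8, 20.2, 21.0, 19.9]:
    state = thermostat(reading, state)
    print(reading, "->", "on" if state else "off")

Whether anything in that loop deserves the word “experience” is, of course, precisely what's at issue in what follows.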

Thus does a thermostat have as-if information (to use Searle's term, which is usually applied to intentionality)? Or does it have real (first-order) information? In other words, does the fact that a thermostat is designed by human beings automatically stop it from having experiences which are themselves determined by its informational innards?

After all, humans are also - in a strong sense - designed by their DNA and we certainly have experiences. Thermostats are designed by humans: do they have experiences?

Finally, in one piece Chalmers tackles the case of NETtalk and asks us whether or not it does (or could) instantiate experience. He writes:

“NETTALK, then, is not an instantiation of conscious experience; it is only a model of it.”

Of course we can now rewrite that passage in the following way:

A thermostat, then, is not an instantiation of conscious experience; it is only a model of it.

The question is, then, whether or not Chalmers has mixed up models with realities (as it were). NETtalk is certainly more complex than a thermostat. However, Chalmers has often argued that complexity in itself (in this case at least) may not matter.

The Appeal of Simplicity & Complexity

Chalmers plays up simplicity. He also plays down complexity. For example, Chalmers writes that “one wonders how relevant this whiff of complexity will ultimately be to the arguments about consciousness”. He goes further when he says that

“[o]nce a model with five units, say, is to be regarded as a model of consciousness, surely a model with one unit will also yield some insight”.

I presume that a thermostat has more than “one unit”; though we'd need to know what exactly a unit is.

Chalmers also makes what seems to be an obvious point – at least it seems obvious if one already accepts the information/experience link. He writes:

“Surely, somewhere on the continuum between systems with rich and complex conscious experience and systems with no experience at all, there are systems with simple conscious experience. A model with superposition of information seems to be more than we need - why, after all, should not the simplest cases involve information experienced discretely?”

Can we go simpler than a thermostat? Perhaps we can if this is all about information; though that would depend on our position on information. What about a dot on a piece of paper which is then made completely blank (i.e., at a later stage when the dot has been erased with a rubber)?

Chalmers also gives a biological (or “real life”) example of this phenomenon. He writes:

“We might imagine a traumatized creature that is blind to every other distinction to which humans are normally sensitive, but which can still experience hot and cold. Despite the lack of superposition, this experience would still qualify as a phenomenology.”

At a prima facie level, it does indeed seem obvious that complexity matters. After all, many theorists have made a strong link between the complexity of the brain and consciousness. Chalmers himself acknowledges the (intuitive) appeal of complexity. He writes:

“After all, does it not seem that this rich superposition of information is an inessential element of consciousness?”

Of course Chalmers then rejects this requirement for complexity.

Having said all that, we can also quickly consider Philip Goff's argument here. He argues that there may be “little minds” (or seats of experience) in the brain, and all of them, on their own, are very simple. Now, of course, we have the problem of the “composition” (or "combination") of all these little minds in order to make a big mind.

What is Simple Experience?

When Chalmers says that

“[w]here there is simple information processing, there is simple experience, and where there is complex information processing, there is complex experience”

what does he mean by “simple experience”? What is a simple experience? How simple can an experience be? Can we even imagine (or conceive) of such a thing?

I suppose I can imagine a very simple pain. (Pain can certainly be experienced.) Or would that only be a mild pain; rather than a simple pain? (Some philosophers have argued that there needs to be more than phenomenology for a pain to be pain.)

What about a simple visual experience? Well, a thermostat can't have such a thing. So what simple experiences can a thermostat have? A thermostat is designed to physically react to the temperature. However, does it feel the temperature? (We can think of feels which are either strongly dependent on sense organs or feels which are purely mental/experiential in nature.) Does a thermostat experience its innards working? That is, does it experience itself taking in information and then responding to that information? But what could that possibly mean? In order to experience itself taking in information, perhaps the thermostat would need to be both an “it” and also an it capable of experiencing itself as an it. That, surely, goes way beyond simple experience.

What has just been said may also apply to a single-celled organism. Does it experience taking in information and then responding to it? Can it feel that information? It can't see or touch it. So what is the experience of information (or the taking in and responding to it) when it comes to a single-celled organism? Sure, causal things happen within a cell. However, things happening within a cell don't - in and of themselves - tell us that it has an experience of things happening within it.

Now what about a mouse? A mouse has a brain and sensory organs. So, obviously, it's vastly different to a single-celled organism and a thermostat. Nonetheless, the idea of a very-simple experience is still problematic.