Tuesday 27 August 2019

An Ontology of an Electron



It is said that the electron “has the properties” of mass, charge and spin. That quirk of grammar (i.e., “has the properties”) makes it seem as if the following were the case:

i) Firstly, we have an electron,
ii) and then we also have its properties.

Put more clearly, this could mean that we have an electron, and only then does it acquire properties. (Much in the same way in which a person acquires the property of being sunburned or of being happy.)

It’s more accurate to say that an electron is equal to its properties. Thus:

an electron = charge (−1) + mass (9.109389 × 10⁻³¹ kg) + spin

This is the position known as the “bundle theory” and it’s usually applied to “classical” objects such as persons, cats, rocks, etc. In fact it seems that the bundle theory is more applicable to electrons than it is to, say, trees or persons.
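Put in programming terms (a loose analogy of my own, not anything drawn from the physics itself), the bundle theory treats an electron as a pure value type: its identity is exhausted by its properties, with no extra “bare particular” underneath. A minimal Python sketch:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Electron:
    """An electron as nothing over and above its property bundle."""
    charge: int = -1              # in units of elementary charge
    mass_kg: float = 9.109389e-31
    spin: float = 0.5

# Two electrons with the same properties are indistinguishable:
# equality is settled entirely by the bundle, with no hidden substratum.
e1, e2 = Electron(), Electron()
print(e1 == e2)  # True
```

On this picture, asking which of e1 and e2 is “really” which is an empty question — which is precisely the issue about individuality raised below.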

This is just one of literally dozens of “visualisations” of the electron.

It’s partly because of all this that, in 1940, the physicist John Archibald Wheeler (1911–2008) advanced the strange theory that all electrons may be one. That is, all electrons are literally the same electron. So this isn’t the claim that all electrons have the same properties: it’s the claim that all electrons are the same electron.

Wheeler’s reasons for advancing this theory involve details which aren’t touched upon in this piece. In simple terms, though, different electrons are actually the same electron moving backward and forward in time. Consequently, if this theory were true, then electrons (in the plural) moving backwards in time are actually positrons (the antimatter counterparts of electrons — hence anti-electrons).

Yet if there is only one electron, then that electron must be what philosophers call an individual (or particular). And that possibility moves us on to the 17th-century philosopher Leibniz.

Gottfried Leibniz

Gottfried Wilhelm (von) Leibniz argued that all the properties of an object are essential to that object. (There are, of course, arguments against this position.) So surely this claim is truer of an electron than it is of, say, a tree or a person. After all:

i) If an electron were to lose its properties of mass, charge and spin,
ii) then it wouldn’t be an electron at all.

Or to say the same thing with a little more detail:

i) If an electron didn’t have a charge of −1, a mass of 9.109389 × 10⁻³¹ kg and spin,
ii) then it wouldn’t be an electron.

After all, these three properties are equally essential to an electron being an electron. Indeed even if it lost just one property (say spin), then it would no longer be an electron.

The problem with Leibniz’s position is that it’s only applicable to individuals (or particulars). This is how Leibniz expressed his position:

“The nature of an individual substance or of a complete being is to have a notion so complete that it is sufficient to contain and to allow us to deduce from it all the predicates of the subject to which this notion is attributed.”

That is to say that each “individual substance” has a complete “individual concept”. That complete individual concept contains all the properties of that subject. Or, in a more up-to-date jargon, all the “predicates” which are true of x are also essential to x.

The problem here is that electrons may not be individuals. That is, every electron is identical (though not numerically identical) to every other electron — save for its spatial (or locational) properties. (This seems to go against Wheeler’s position mentioned a moment ago.) So, on a Leibnizian reading, every electron has the same essence. Therefore no electron can be an individual. Boris Johnson, on the other hand, is an individual. That’s because he doesn’t share all his properties with every other person. That is, Boris Johnson’s “individual essence” (or haecceity) is not identical to, say, Swiss Tony’s individual essence.

There is another Leibnizian issue here.

According to the Principle of the Identity of Indiscernibles (PII), no two substances can be qualitatively identical yet still be (numerically) different objects. As you can see, this doesn’t work for electrons. And that’s another reason why they can’t be classed as individuals. That is, unless, as already mentioned, relational (or extrinsic) properties are included: clearly, no two electrons can have the same relational (or extrinsic) properties. So, on a Leibnizian reading, how are we to treat the properties of location, etc. when it comes to electrons? Are they “pseudo-properties” which can simply be ignored? Leibniz himself believed that spatial and temporal properties are indeed genuine properties of the individual itself.
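For reference, PII and its (uncontroversial) converse, the Indiscernibility of Identicals, are standardly formalised in second-order notation as follows:

\[
\textit{PII:} \quad \forall F\,(Fx \leftrightarrow Fy) \rightarrow x = y
\]

\[
\textit{Indiscernibility of Identicals:} \quad x = y \rightarrow \forall F\,(Fx \leftrightarrow Fy)
\]

The dispute over electrons is then a dispute over what the quantifier ∀F ranges over: only intrinsic properties (charge, mass, spin), or relational properties (such as location) too.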

Finally, it’s partly because many laypersons (if they think about these things at all) still see electrons as not being equal (or identical) to their charge, mass, spin, etc. that they also see them as Newtonian “hard” particles.

Electrons as Package-Deals

In quantum mechanics — and physics generally — we have the notion of a field.

In the specific case of electrons, fields and electrons are intimately connected. Indeed they’re so strongly connected that a distinction between the two hardly seems warranted.

Firstly, there’s the problem of distinguishing electrons — and other particles — from the states they “belong” to. Thus, in an example given by the philosophers James Ladyman and Don Ross (in their book Every Thing Must Go: Metaphysics Naturalized), we can interpret a given field/electron state in two ways:

i) As a two-particle state.
ii) As a single state in which the “two particles [are] interchanged”.

Since it’s difficult to decipher whether it’s a two-particle state or a single state in which two particles are interchanged, Ladyman and Ross adopt the “alternative metaphysical picture” which “abandons the idea that quantum particles are individuals”. Thus all we have are states. That means that “positing individuals plus states that are forever inaccessible to them” is deemed to be “ontologically profligate” by Ladyman and Ross.

Ladyman and Ross then back up the idea that states are more important than individuals (or, what’s more, that there are no individuals) by referring to David Bohm’s theory. In that theory we have the following:

“The dynamics of the theory are such that the properties, like mass, charge, and so on, normally associated with particles are in fact inherent in the quantum field and not in the particles.”

In other words, mass, charge, etc. are properties of states, not of individual electrons. However, doesn’t this position (or reality) have the consequence that a field takes over the role of an electron (or of a collection of particles) in any metaphysics of the quantum world? Thus does this also mean that everything that’s said about electrons can now be said about fields?

On Bohm’s picture (if not Ladyman and Ross’s), “[i]t seems that the particles only have position”. Yes; surely it must be an electron (not a field) which has a position. Indeed electrons also have trajectories which account for their different positions.

To Bohm (at least according to Ladyman and Ross), “trajectories are enough to individuate particles”. Prima facie, it may seem strange that trajectories can individuate — unless that means that each type of particle has a specific type of trajectory. Thus the type of trajectory tells you the type of particle involved in that trajectory.

Nonetheless, Ladyman and Ross spot a problem with Bohm’s theory. That problem is summed up in this way:

If all we have is trajectory, then why not dispense with electrons (as individuals at least) altogether?

This is how Ladyman and Ross explain their stance on Bohm’s theory:

“We may be happy that trajectories are enough to individuate particles in Bohm theory, but what will distinguish an ‘empty’ trajectory from an ‘occupied’ one?”

Here again Ladyman and Ross are basically saying that if all we’ve got are trajectories, then let’s stick with them and eliminate electrons (as individuals) altogether.

Ladyman and Ross go into more detail on this by saying that

“[s]ince none of the physical properties ascribed to the particle will actually inhere in points of the trajectory, giving content to the claim that there is actually a ‘particle’ there would seem to require some notion of the raw stuff of the particle; in other words haecceities seem to be needed for the individuality of particles of Bohm theory too”.

If Ladyman and Ross’s physics is correct, then what they say makes sense. Positing electrons seems to fall foul of Occam’s razor. That is, Bohm is filling the universe’s already-existing (to mix two metaphors) ontological slums with yet more (superfluous?) entities.

One way of interpreting all this is by citing two different positions. Thus:

i) The positing of electrons as individuals which exist in and of themselves.
ii) The positing of electrons as part of a package-deal which includes fields, states, trajectories, etc. (In other words, there’s no reason to get rid of electrons completely.)

Then there’s Ladyman and Ross’s position.

iii) If there are never any electrons in “splendid isolation” (apart from fields, etc.), then why see electrons as being individuals in the first place?

Ladyman and Ross are a little more precise as to why they endorse iii) above. They make the metaphysical point that “haecceities seem to be needed for the individuality of particles of Bohm theory too”. In other words, in order for electrons to exist as individuals (as well as to be taken as existing as individuals), they’ll require “individual essences” in order to be individuated. However, if the nature of an electron necessarily involves fields, states, other particles, trajectories, etc., then it’s very hard (or even impossible) to make sense of the idea that an electron could have an individual — or indeed any — essence.

In simple terms, a specific electron — and any other particle — is part of a package-deal. Electrons simply can’t be individuated without reference to external, extrinsic or relational factors. Thus electrons simply aren’t individuals at all.


Monday 26 August 2019

Artificial Life and the Ultra-Functionalism of Christopher Langton


Word-count: 1,904

Contents:
i) Introduction
ii) John von Neumann
iii) The Creation of Artificial Life

The American computer scientist Christopher Gale Langton was born in 1948. He was a founder of the field of artificial life, and he coined the term “artificial life” in the late 1980s.

Langton joined the Santa Fe Institute in its early days and left in the late 1990s. He then gave up his work on artificial life and stopped publishing his research.

*************************************

When it came to Artificial Life (AL), Christopher G. Langton didn't hold back. In the following passage he puts the exciting case for AL:

“It's going to be hard for people to accept the idea that machines can be [as] alive as people, and that there's nothing special about our life that's not achievable by any other kind of stuff out there, if that stuff is put together in the right way. It's going to be as hard for people to accept that as it was for Galileo's contemporaries to accept the fact that Earth was not at the center of the universe.”

The important and relevant part of the passage above is:

“[T]here's nothing special about our life that's not achievable by any other kind of stuff out there...”

Although the above isn't a definition of functionalism, it nonetheless has obvious and important functionalist implications.

So when it comes to both Artificial Life and Artificial Intelligence, the computer scientist Christopher Langton seems to have explicitly stated that biology doesn't matter. Yes; it of course matters to biological life and biological intelligence; though not to life and intelligence generically interpreted.

The biologist and cognitive scientist Francisco Varela put the opposite position (as it were) to Langton's when he told us that he “disagree[s] with [Langton's] reading of artificial life as being functionalist”. Varela continues:

“By this I refer to his idea that the pattern is the thing. In contrast, there's the kind of biology in which there's an irreducible side to the situatedness of the organism and its history...”

What's more:

“Functionalism was a great tradition in artificial intelligence; it's what early AI was all about.”

So we have specific biologies. Those specific biologies are situated in specific environments. And then we must also consider the specific histories of those specific biological organisms. So, if “early AI” was “all about” functions and nothing else, then that was surely to leave out a lot. (From a philosophical angle, we must also include externalist arguments, as well as embodiment and embeddedness – i.e., not only Varela's “situatedness”.)

The physicist J. Doyne Farmer also attempted to sum up the problematic stance which Langton held. He writes:

“The demonstration of a purely logical system, existing only in an abstract mathematical world, is the goal that [Christopher Langton] and others are working towards.”

Yet we mustn't characterise Langton's position as mere chauvinism against biology and even against biological evolution. After all, despite what Varela says about situatedness, Langton was fully aware that

“[a]nything that existed in nature had to behave in the context of a zillion other things out there behaving and interacting with”.

Langton also appeared to criticise (early?) AI for “effectively ignor[ing] the architecture of the brain”. That's not a good thing to do because Langton went on to say that he “think[s] the difference in architecture is crucial”. Nonetheless, the sophistication of this view is that just as functions and algorithms can be instantiated/realised in many materials, so too can different architectures.

The aspect of the brain's architecture that specifically interested Langton is that it is “dynamical” and also involves “parallel distributed systems” (which are “nonlinear”). Indeed he appears to have complimented what he calls “nature” for “tak[ing] advantage of” such things. And, by “nature”, Langton surely must have meant “biology”. (Though there are dynamical and non-linear natural systems which aren't biological.)

So the early AI workers ignored the brain's architecture; whereas Langton appeared to be arguing that artificial architectures (alongside functions and algorithms) must also be created. This, then, may be a mid-way position between the purely “abstract mathematical world” of early AI and the blind simulation of biological brains and organisms.

Having said all the above, Langton shifts his middle-ground position again when he says that Artificial Life

“isn't the same thing as computational biology, which primarily restricts itself to computational problems arising in the attempt to analyse biological data, such as algorithms for matching protein sequences to gene sequences”.

Langton continues by saying that

“[a]rtificial life reaches far beyond computational biology. For example, AL investigates evolution by studying evolving populations of computer programs – entities that aren't even attempting to be anything like 'natural' organisms”.

So Langton believes that AL theorists shouldn't “restrict[]” themselves to “biological data”, despite his earlier comments about noting the architecture of the biological brain (specifically, its parallel distributed processes, etc.). Yet again, Langton appears either to be standing in a mid-way position or, less likely, to be contradicting himself on the precise relation between Artificial Life and biology.

John von Neumann

Langton cited the case of John von Neumann, who, some 50 years before Langton's own work, also attempted to create artificial life. Von Neumann's fundamental idea (at least according to Langton) is that “we could learn a lot even if we didn't try to model some specific existing thing”.

Now it can be said that when theorists and technologists create life (or attempt to create life), then they're only creating a replication/simulation of biological life. Von Neumann wanted to go further than this... and so too did Langton.

To sum up the opposition in clear and simple words, J. Doyne Farmer says that

“Von Neumann's automaton has some of the properties of a living system, but it is still not alive”.

So if von Neumann wasn't concerned with the specific biologies of specific creatures, then what was he concerned with? According to Langton again:

“Von Neumann went after the logical basis, rather than the material basis, of a biological process.”

Even though it was said (a moment ago) that von Neumann and Langton weren't interested in replication, they still, nonetheless, studied “biological processes”. And functionalists are keen to say that the “material basis” simply doesn't matter. Yet if biological processes are still studied, then perhaps the philosopher Patricia Churchland's warnings to functionalists (i.e., about brain and mind) may not always be completely apt. After all, she writes:

“[T]he less known about the actual pumps and pulleys of the embodiment of mental life [by functionalists], the better, for the less there is to clutter up one's functionally oriented research.”

Indeed that position can be seen as the very essence of most (or all) functionalist positions. It's most technically and graphically shown in the “multiple realizability” argument in which it is said that function x can have any number of material bases and still function as function x. (The multiple realizability argument is found most often in the philosophy of mind.)
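As a loose programming analogy (my own, not Churchland's or Langton's), multiple realizability amounts to a single functional role with interchangeable implementations. A minimal Python sketch, using a structural Protocol for the role:

```python
from typing import Protocol

class Adder(Protocol):
    """The functional role: anything that maps two ints to their sum."""
    def add(self, a: int, b: int) -> int: ...

class ArithmeticAdder:
    def add(self, a: int, b: int) -> int:
        return a + b  # realised directly by the machine's arithmetic

class BitwiseAdder:
    def add(self, a: int, b: int) -> int:
        # the same function realised by a different 'material basis':
        # carry-and-xor logic (for non-negative ints)
        while b != 0:
            carry = (a & b) << 1
            a = a ^ b
            b = carry
        return a

def client(adder: Adder) -> int:
    # the caller cares only about the function, not the realiser
    return adder.add(2, 3)

print(client(ArithmeticAdder()))  # 5
print(client(BitwiseAdder()))     # 5
```

The client code is blind to the realiser – just as, on the functionalist picture, a characterisation of mental function is blind to the “pumps and pulleys” underneath.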

Von Neumann provided a specific example of his search for the logical bases of biological processes. Not surprisingly, since he was concerned with artificial life, he

“attempt[ed] to abstract the logic of self-reproduction without trying to capture the mechanics of self-reproduction (which were not known in the late 1940s, when he started his investigations)”.

Prima facie, it's difficult to have any intuitive idea of how the word “logic” (or “logical”) is being used here. Isn't the logic of self-reproduction... well, self-reproduction? After all, without the “mechanics”, what else have we got?

It seemed, then, that the logic of self-reproduction (as well as self-replication, etc.) could be captured by an algorithm. In this case, “one could have a machine, in the sense of an algorithm, that would reproduce itself”. (Is the machine something that carries out the algorithm, or is it actually the algorithm itself?) In more detail, the logic of the biological process of self-reproduction is captured in terms of genes and what the genes do. Thus genetic information has to do the following two things (a code sketch of this dual role follows the list):

(1) it had to be interpreted as instructions for constructing itself or its offspring, and
(2) it had to be copied passively, without being interpreted.
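Von Neumann's two requirements are neatly mirrored by the classic 'quine' construction (my own illustration – von Neumann himself worked with cellular automata, not program text). The same piece of information is both interpreted as instructions and copied passively as uninterpreted data:

```python
# A minimal self-reproducing program (a 'quine').
# The string s plays von Neumann's dual role:
# (1) interpreted: s supplies the instructions for rebuilding the source;
# (2) copied passively: s is also inserted into the output as a raw
#     literal, without being interpreted - as DNA is copied unread.
s = 's = {!r}\nprint(s.format(s))'
print(s.format(s))
```

The two-line program at the bottom prints itself exactly (the comments are annotation only), so its 'offspring' can do the same in turn.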

Now this is von Neumann's logic of self-reproduction – and no (technical) biological knowledge was required to decipher those two very simple points. And, by “no biological knowledge”, I mean no knowledge of how information is stored in DNA. (That came later – in 1953.) Langton concluded something very fundamental from this. He wrote:

“It was a far-reaching and very prescient thing to realise that one could learn something about 'real biology' by studying something that was not real biology – by trying to get at the underlying 'bio-logic' of life.”

As mentioned earlier, it may be the case that Langton over-stated his case here. After all, even he said that “biological processes” are studied – and indeed that's obviously the case. So we may have the “logical basis” of biological processes; though, evidently, biological processes aren't completely ignored. To put all that in a question:

Did von Neumann ever discover that this logic was instantiated/realised in any non-biological processes?

Earlier Francisco Varela was quoted citing the importance of “situatedness” and “history”. These two factors are obliquely mentioned by J. Doyne Farmer in the specific case of organisms and reproduction. He says:

“Real organisms do more than just reproduce themselves; they also repair themselves. Real organisms survive in the noisy environment of the real world. Real organisms were not set in place, fully formed, hand-engineered down to the smallest detail, by a conscious God; they arose spontaneously through a process of self-organisation.”

At first sight, it seems odd that, when von Neumann attempted to create artificial life and artificial evolution (or at least simulate artificial life and artificial evolution), he seemed to ignore “real organisms” and their surviving “in the noisy environment of the real world”. That is, von Neumann's cellular automata were indeed “hand-engineered down to the smallest detail” and then “set in place”. In other words, von Neumann was the god of his own cellular automata. So no wonder Farmer sees such things in the exclusive terms of an “abstract mathematical world”.

The Creation of Artificial Life

On the one hand, there's the study of the “logic” of biological processes. On the other hand, there's the actual creation of artificial life.

The first step is to realise that logic in non-biological material. Will that automatically bring forth artificial life? Langton believed that it does (not will) – at least in some cases. That is, the simulation of life is actually the realisation (or instantiation) of life. Yet, according to Langton himself, “[m]any biologists wouldn't agree” with all this. They argue that “we're only simulating evolution”. However, Langton had an extremely radical position on this simulation-realisation binary opposition. He wrote:

“[W]hat's the difference between the process of evolution in a computer and the process of evolution outside the computer?”

Then Langton explained why there's no fundamental or relevant difference. He continued:

“The entities that [are] being evolved [inside the computer] are made of different stuff, but the process is identical.”

So, again, it's the process (or the logic of the process) that's important, not the nature of the “stuff” that realises that (abstracted) process. Thus process (or function) is everything. Conversely, the material (or stuff) is irrelevant.
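As a toy illustration of what 'evolution in a computer' can look like (a minimal sketch of my own, not Langton's or anyone else's actual system), here is a population of bit-strings evolving towards an arbitrary target under mutation and selection:

```python
import random

TARGET = [1] * 20  # an arbitrary 'environment': fitness = bits matching it

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # each bit flips with a small probability
    return [1 - g if random.random() < rate else g for g in genome]

# a random initial population of 50 genomes
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]

for generation in range(200):
    # selection: keep the fitter half, refill with mutated copies
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(25)]
    if fitness(population[0]) == len(TARGET):
        print(f"target reached at generation {generation}")
        break
```

The process here – variation, selection, inheritance – is the very process Langton points to; only the 'stuff' (bits rather than biochemistry) differs.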

****************************



Tuesday 13 August 2019

Neuropsychologist Nicholas Humphrey vs. Philosopher Daniel Dennett


Word Count: 670

It can be argued that the exact antithesis of Daniel Dennett's position (on consciousness) is put forward by the neuropsychologist Nicholas Humphrey. This is graphically shown in Humphrey's phrase “the experience of raw being”. That is, a level of consciousness that's worlds away from Dennett's propositional or functional consciousness. In addition, it's a “lower-level” of consciousness that's “unreflected on”. In more detail:

“[P]rimitive sensations of light, cold, smell, taste, touch, pain; the is-ness, the present tense of sensory experience, which doesn't require any further analysis or introspective awareness to be there for us but is just [...] of existence.”

Faced with such a state, what would Dennett have to hold onto? Where's the third-person heterophenomenology in all of this? Where are the judgments or the “verbal reports”?

It doesn't help that Humphrey uses terms like “the is-ness” and the like. The problem, then, is that Humphrey's account may be seen as a position on some kind of “spiritual” or Zen-like state. However, I believe that Humphrey doesn't have all this in mind at all. These aren't Zen-like or spiritual states of consciousness he's referring to. They're states that most of us experience at certain points each and every day. And neither are these “raw” states the end result of mental preparations and will power, as spiritual states often are. I would argue that they are the givens of everyday experience or consciousness.

In any case, Dennett would surely reject these conscious states. He'd be an eliminativist and a verificationist about them. They are, after all, private (if in a very rudimentary sense). Still, these raw states don't necessarily fall foul of the strictures of behaviourists, Wittgenstein, Quine, functionalists and the rest. That's because nothing much is being claimed for them. They aren't the “source of meaning” (which Wittgenstein and Quine rejected). They don't constitute a "private language" of any kind. And neither do they constitute first-person “infallible knowledge”. They're simply basic experiences. Therefore, almost by definition, they're outside of science and Dennett's own philosophical heterophenomenology and verificationism.

Basically, one can admit that they serve no purpose... as such. Nonetheless, they do serve a purpose for Humphrey himself in that they show him

“what it's like to be me, or what it's like to be a dog, or what it's like to be a baby”.

So we don't need to build a philosophical edifice on top of Humphrey's “raw being” or “is-ness”; as Descartes, the phenomenologists and others did. We simply need to accept that these states are part of consciousness. And they're essentially above and beyond all that's functional, cognitive, judgmental, propositional and the like.

Earlier the lack of purpose for these basic conscious states was mentioned. And that's why Dennett said the following to Humphrey:

“Look, I hear what you're saying, but I simply don't have any reference point for it. Your raw sensations, if they exist, leave nothing behind. They might as well never have occurred.”

So as was said earlier:

i) If the conscious states Humphrey refers to have no “reference point”, then they serve no purpose.
ii) If they “leave nothing behind”, then they serve no purpose.
iii) And if they “might as well never have occurred”, then they serve no purpose.

In other words, to Dennett, all this basically means (surely) that if these conscious states serve no purpose, then they don't (or mustn't) exist. That really does seem to be Dennett's position. And it is pure verificationism and indeed eliminativism at one and the same time.

This conclusion, therefore, is completely at one with Humphrey when he responds to Dennett by saying that

“[f]or Dan, the basic constituents of consciousness are ideas, judgements, propositions, and so on”.


“For Dan, if there's nothing left after the sensation has passed – nothing in the way of a text, which says something like, 'Memo to self: have just had a sensation' – then it didn't happen.”

Dennett's position is extreme. (At least it is to me.) What's more, I can't help believing that it's obviously false.




Monday 12 August 2019

John Searle on Brain-Consciousness Causation

Searle argues that it’s not that “brain processes cause consciousness”. It’s that “consciousness is itself a feature of the brain”. A loose analogy is a table’s weight causing an indented rug. The table’s weight on the rug and the indentation occur at one and the same time.

[Image: Daniel Dennett (left) and John Searle]

The American philosopher Daniel Dennett (1942–) once claimed that John Searle’s position is that the brain “secretes” consciousness. I suspect that Searle would argue that consciousness can’t be secreted out of anything — even out of the brain. That’s because Searle sees consciousness as a higher-level attribute of the brain — yes, of the brain. This means that the idea of any secretion of consciousness out of a brain of which it is already a part doesn’t really make sense.

[It seems that someone — perhaps Dennett? — once said to Searle: “It sounds like you’re saying the brain ‘secretes’ consciousness the way the stomach secretes acid or the liver.” Searle replied: “I have never claimed that [].”]

So what about the word “cause”?

If it can be stressed (as Searle does) that consciousness is a higher-level feature of the brain, then how can the brain cause consciousness?

The problem here, according to Searle, is that many people have a misconception about causation — at least when it comes to the brain causing consciousness.

The most important point which Searle makes (in this discussion of consciousness) is that causation needn’t be seen in the following temporal terms:

a cause C followed (in time) by effect E

Here we have two things: a cause followed by an effect. That is, the cause occurs at time t₁ and the effect occurs at a later time t₂. This surely implies a kind of dualism for the simple reason that the brain’s cause-events occur before the effect-event that is consciousness (or a conscious state). Thus these two things (i.e., brain processes and consciousness) must be separated in that the former causes the latter. However, what if we can have causal processes which don’t entail a cause followed by an effect — that is, processes in which consciousness occurs at the very same time as the brain processes?

Searle gives an example of a heavy table and the impression it makes on a rug.

This isn’t a question of the following:

the table-pressure-event (or state) causing the indented-rug-event (or state).

The weight of the table and the indented rug occur at one and the same time. Yet this is still a causal process. However, it’s not a case of a cause followed by an effect. It’s a causal process of the weight and the indentation occurring at one and the same time.

As Searle puts it: the gravitational force of the table shouldn’t be taken as an event which occurs before the indentation of the rug which is under it. Moreover, Searle states that “gravity is not an event” at all — it’s a force which is always there.

Searle’s next example is about tables and their solidity.

It can be said that the density (or nature) of the table’s molecules doesn’t cause the “solidity of the table”. That is, we don’t have a cause (i.e., the molecules and their behaviour) followed by an effect (i.e., the solidity of the table). Instead, whenever there are certain kinds of molecules of this specific density and structure, then we’ll also have a solid table. Such molecules don’t come first and then cause the solidity of the table. That last possibility would mean that there was a point at which we had that very same table (made up of the same molecules and the same structure), but at which the table wasn’t solid (say, it was fluid or floppy). Instead, as soon as we have that configuration and that set of molecules, we also have the table’s solidity. The one doesn’t come before the other.

That said, it’s still the causal processes of the molecules which are responsible for the solidity of the table. That is, we still have causation and causal processes. It’s just that we don’t have a cause followed by an effect.

One can see where Searle’s line of argument is going here.

It can now be argued that we shouldn’t see the brain’s processes (or states) as causes which bring about the processes (or states) of consciousness. Instead, we have brain processes (or states) and consciousness at one and the same time. That is, the brain’s processes (or states) don’t come before the processes (or states) of consciousness (or before consciousness itself). Both occur together.

Searle himself writes:

“Lower-level processes in the brain cause my present state of consciousness, but that state is not a separate entity from my brain; rather it is just a feature of my brain at the present time. [It’s not] that brain processes cause consciousness but that consciousness is itself a feature of the brain [].”

It can still be argued that the “lower-level processes in the brain cause my present state of consciousness”. So the word “cause” can still be used. However, we don’t have the following:

causes (or brain processes) which come before an effect (consciousness or a mental state)

Consciousness (or a conscious state) “is not a separate entity from my brain”. It is, instead, “a feature of my brain”.

So Searle’s position isn’t too unlike Donald Davidson’s notion of “conceptual pluralism” (as expressed in his anomalous monism) squared with his “substance monism”. That is, the brain and consciousness are seen as being part of the same substance (loosely speaking) — if with different features or properties which are characterised both from the subject’s inside and from the third-person outside. That is, we can apply different concepts to consciousness (or mind) which we wouldn’t apply to the brain itself. However, consciousness is still just a “feature of the brain”. It’s not something different. It’s not another substance.

So this argument works against mind-body dualism. However, it’s not a case of reductive physicalism either. Instead, Searle simply denies the duality of brain and consciousness (or matter and mind) in the first place.

And because Searle’s position on causation is (perhaps) odd when it comes to this notion of atemporal causation, it may be wise to finish by citing what another philosopher thinks of it.

Take Nick Fotion. He tells us that Searle’s theory

“shows that the biological mechanisms on the lower level of the diagram have their causal effects on the upper level not over a period of time”.

Fotion continues:

“The emergent changes on the upper level are simultaneous with respect to what happens (vertically) below. Such is not the case when the mind affects the body on any level.”

Of course the central problem here is one of making sense of atemporal causation. That’s primarily because it’s usually believed that a sequence in time is built into the very notion of causation (or causality) — even when it comes to “backward causality”. What’s more, it seems that Searle could argue just about everything he does argue without also employing the notion of causation (or using the word “cause”). That is, he could argue that brain processes occur at the very same time as their — parallel? — conscious states without bringing causation on board at all.

All that said, this particular issue must be tackled at another time.

Tuesday 6 August 2019

Philosophical Shorts (2)


The Function of Experience

It's always odd when philosophers ask about “the function of experience” (or consciousness). After all, isn't it blatantly obvious what the many functions of experience/consciousness are? Don't we experience experience functioning every day of our lives? Indeed every (waking) minute of our lives?

Though, the argument goes, we could be wrong about all this.

(The functions the philosopher David Chalmers always refers to are “perceptual discrimination, categorization, internal access, verbal report”.)

Questions about the function of experience/consciousness occur primarily because many cognitive and behavioural functions do — and could — occur “in the dark”; that is, without experience/consciousness.

However, the following argument seems invalid:

i) Many cognitive and behavioural functions occur without experience/consciousness and they could occur without experience/consciousness.
ii) Therefore experience/consciousness has no function.

But why not the following (which isn't an argument)?

i) Many cognitive and behavioural functions occur without experience/consciousness and they could occur without experience/consciousness.
ii) However, experience/consciousness still has a function.

Experience could (or does) add extra functions into the pot. So the argument above is not unlike the following:

i) It is a fact that people drink water without using cups and they could drink water without using cups. (They drink water straight from the tap, from old boots, out of streams, etc.)
ii) Therefore cups have no function.
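Schematically (my own rendering), both arguments share the same invalid form. From the premise that some F-ings occur without using c, it simply doesn't follow that c contributes to no F-ing:

\[
\exists x\,(Fx \land \neg Uxc) \;\nvdash\; \neg\exists x\,(Fx \land Uxc)
\]

Read Fx as 'x is a cognitive/behavioural functioning' (or 'x is a water-drinking') and Uxc as 'x makes use of c', where c is consciousness (or a cup).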

There are two other important reasons to question the function of experience/consciousness:

1) Experience/consciousness is epiphenomenal.
2) Although we believe that our experiences have a function; they don't. (A position advanced, I believe, by Daniel Dennett - though perhaps not as explicitly as this.)

The Why of the Big Bang

If one explains the Big Bang in terms of processes, forces, fields, particles, events, etc., then this is explaining how it came about. Yet someone may ask why it came about. What does the portentous “why?” mean in this context?

If there is such a why to the Big Bang, then that may mean that it came about for some reason (or purpose). This may also mean that if the questioner doesn't allow the reason (or purpose) to be contained within such processes and interactions, then the reason (or purpose) for the Big Bang must be outside the event itself. In order for something to exist outside the Big Bang, it must exist outside of time and space. It must also be non-material.

Must it be God? But who created God? And if God can be a self-creator, then why not the universe too?

(The possibility of a multiverse, an infinite universe, etc., of course, complicates this issue.)

An Infinite Universe?

There's a paradox inherent in the idea of an infinite universe. An infinite future is possible; though perhaps not an infinite past. The argument against an infinite past has nothing to do with the belief that the universe must have been created at some time. It has to do with the implication which is inherent in the possibility of an infinite past itself. That is, if the past were infinite, then everything that could or might have happened would have happened. This conclusion quite clearly doesn't make sense as far as our own universe is concerned; though it is made possible if there are other universes. (Or, I should say, other universes within a greater universe – i.e., the multiverse.)

There is another possible scenario: that everything has happened within our universe, but all was destroyed by a previous contraction of the universe, and we're now living in the very early stages of just one more expansion of a universe (ours) which has expanded and contracted many times before!

There is an obvious problem here too: literally everything couldn't have happened. There are two things left out here. One, technological developments in previous expansions of our universe — developments which might have stopped the universe from contracting. Two, and less feasibly, the destruction of the entire universe and multiverse.