Monday, 27 February 2023

Isaac Newton’s Empiricist Philosophy of Science

Some critics (or downplayers) of the philosophy of science (or philosophy generally) may argue that Isaac Newton didn’t actually have a philosophy of science at all. Instead, he simply did what he did. That is, no matter how complex and reasoned Newton’s maths, observations, experiments, etc. were, it was still “just science”.

“Empiricism is the epistemology which has tried to make sense of the role of observation in the certification of scientific knowledge.” — Alex Rosenberg (see source here)

It’s true that Isaac Newton never used the words “philosophy of science” about his own words and work. Indeed, he wasn’t overly self-conscious about the philosophical and methodological underpinnings of his science. But none of this means that Newton didn’t actually have a philosophy of science or that he didn’t uphold philosophical positions (i.e., as they were directly relevant to his work) on the nature of science itself.

In addition, the classification “empiricist science” may seem like a truism or even a tautology. At least it’s what many laypeople (though not really many scientists) take science to be anyway. That is, science is often deemed to be mainly (or even only) about observations and experiments… Or at least many laypeople believe that it should be!

All that said, it’s now worth saying that these statements have nothing directly to do with the commonplace fact that science was classed as natural philosophy in the 17th century. In addition, they have nothing to do with the general idea that philosophical ideas and theories underpinned much science in those days (as they continue to do today).

So, then, this essay is purely about Newton’s very own philosophy of science.

It’s also worth noting here (i.e., to set the scene) that David Hume’s more self-conscious and obvious general philosophy is in some ways like Isaac Newton’s philosophy of science. Indeed, often all one really needs to do is substitute Hume’s 18th-century term “impressions” (or “sense impressions”) for Newton’s earlier term “phenomena”. More relevantly, Hume believed that only concepts “derived from” impressions were relevant and significant. One can even read Hume as arguing that sense impressions are the exclusive source of all the knowledge we have of what he called “matters of fact”.

In terms of concrete examples, then, Hume had a problem with such “hypothetical entities” as substance, vacuum, necessary connection, the self, etc. (All this can be found in Hume’s book An Enquiry Concerning Human Understanding.) And, similarly, Newton had a problem with such things as the aether, corpuscles and what he called “occult qualities”.

All that said, an empiricist tradition can be said to go all the way back to Aristotle and even before him. After all, the ancient Greek philosopher did believe (or state) that “there is nothing in the intellect which was not first in the senses”.

Phenomena First

Isaac Newton stressed that scientists should observe what he called “phenomena”. (It’s hard to imagine many scientists — or even philosophers — not doing so.)

For example, Newton argued (as did Aristotle before him) that a scientist (or a natural philosopher) should carefully examine the world around him. In Newton’s own words:

“[A]lthough the arguing from Experiments and Observations by Induction be no Demonstration of general Conclusions, yet it is the best way of arguing which the Nature of Things admits of.”

It can be seen that Newton wasn’t a naive (or absolute) empiricist. In other words, he never believed it was literally all about “experiments and observations”. Yet, arguably, very few scientists — or even philosophers — have ever thought that way in any case. In Newton’s own example, then, he did say that he was arguing “from” experiments and observations. That is, he didn’t say that experiments and observations were the beginning and the end of all science. So Newton’s science was certainly no mere (as the phrase has it) “cataloguing of observations” either.

Still, Newton not being an absolute empiricist doesn’t mean that he wasn’t an empiricist at all.

Technically, Newton’s “conclusions” came from his experiments and observations. Thus, his conclusions weren’t merely (or exclusively) statements about his actual experiments and observations. This also meant that any moves from experiments and observations to conclusions weren’t determinate or necessary. And that also had the consequence that Newton’s own experiments and observations (indeed, all experiments and observations) could have led to different conclusions. Indeed, Newton was well aware of that.

Induction and Deduction

Newton believed that scientific processes must be kept in check in two ways:

(1) Inductive evidence must provide the groundwork of scientific work. 
(2) The consequences derived from inductive evidence must themselves be experimentally confirmed.

Of course, Newton’s use of the word “induction” needs to be fleshed out a little.

Arguably, there’s no such thing as a purely inductive process. And that’s even before any general “conclusions” are formulated.

Yet all this entirely depends on what Newton — as well as others — meant by the word “induction” in the 17th century.

More specifically, it’s a little difficult to know (or simply accept) what Newton meant by the phrase “arguing from Experiments and Observations by Induction”. That’s primarily because even during any inductive process there’ll still be other processes being employed. In addition, there’ll be theories, biases, prior knowledge, etc. which explain why those particular experiments were carried out in the first place. What’s more, it can be asked why Newton (or anyone else) observed those particular phenomena. Here again, theories, biases, preferences, scientific (as well as other) traditions, etc. must have been lurking in the background all along.

Of course, Newton did note that science involved both induction and deduction (i.e., not only one at the exclusion of the other). He did, after all, stress the drawing out of what he called “consequences”. Yet, here again, this almost seems obvious. Indeed, this stress on both induction and deduction can previously be found in the work of Roger Bacon, Robert Grosseteste, and, later, in the work of Galileo and Francis Bacon.

All that said, Newton certainly did (as it were) come down on the side of induction (i.e., as against deduction).

For example, Newton once wrote that “particular propositions are inferred from the phenomena, and afterwards rendered general by induction”. This Newtonian and inductive methodology was applied to the following real cases. In Newton’s own words:

“Thus it was that the impenetrability, the mobility, and the impulsive forces of bodies, and the laws of motion and of gravitation, were discovered.”

But, here again, one can ask: Why these phenomena?

In this case, then, was Newton looking for the properties of impenetrability and impulsive force? And why did he assume any “laws” at all?

So it can even be argued that Newton’s own (as it were) grounding phenomena were already drenched in both theory and philosophy (or metaphysics) from the very beginning.

What’s more, even if Newton’s phenomena were pure, he still never discussed (let alone argued for) that transition from phenomena to the laws of motion or the laws of gravitation. As Rationalists may put it (to use a line of argument found in Laurence BonJour; see here), the phenomena may be as pure and empirical as you like. However, how did Newton (or anyone else) explain the links from the phenomena to any general laws or conclusions? After all, these links aren’t themselves phenomena, and neither are they (i.e., in themselves) examples of induction. (This is the case even if the whole process itself can be deemed to be inductive in nature.)

Hypotheses Non Fingo

To put it simply: Newton believed that “theories” were acceptable, and that “hypotheses” were (largely) unacceptable. Of course, Newton used these terms in his very own (17th-century) way.

Yet, predictably, Newton broke his own rules on this strong distinction between theories and hypotheses (as we shall now see).

Newton’s term “theory” is used for that which can be “deduced from” what is observed and/or experimentally produced (or noted). Thus, inductive evidence can lead to a theory about such evidence. This Newtonian account of a scientific theory isn’t really a million miles from how the term is used today by educated laypersons and even by some scientists. However, Newton’s use of the word “hypothesis” is very odd to 21st-century ears.

For example, Newton once claimed that hypotheses were (in relevant cases) about what he called “occult qualities”. And what are occult qualities? In basic terms, they’re qualities which can’t be observed or “measured”. (It’s odd, then, that some modern-day — well — “occultists”, “spiritualists”, etc. often state “That’s just a theory!” to those scientists they disagree with.)

Thus, Newton didn’t like his own theories being classed as hypotheses.

A good example of Newton’s inductivist and/or empiricist inclinations (i.e., as they relate to this theory-hypothesis distinction) was his aversion to (using Stephen Toulmin’s phrase) “going beyond the phenomena”.

Take Newton’s theory of light.

Newton distinguished what can be observed from what may (or may not) underlie what it is scientists observe. In this case, then, scientists can observe certain properties of refraction. Thus, they can form a theory about such properties because they can observe them. However, they can’t observe what may (or may not) underlie such properties. In Newton’s case, then, scientists couldn’t observe (invisible) “waves” or “corpuscles”.

Now take the important — and similar — case of gravitational attraction.

Newton didn’t (or simply claimed not to) hypothesise about the underlying causes of gravitational attraction. Indeed, Newton famously stated the following three words: Hypotheses non fingo (“I frame no hypotheses”). Instead, and at least according to his own self-image, Newton simply noted phenomena and (as it were) observed what they did. And clearly — in this case at least — Newton couldn’t observe the underlying causes of gravitational attraction. (Some contemporary analytic philosophers may say that Newton couldn’t observe what they call “intrinsic properties”.)

The French philosopher René Descartes (1596–1650), on the other hand, did believe that he could hypothesise (or even know) about such underlying causes! He believed that gravitational attraction could be (or is) explained by vortices of aether.

Yet, despite all that, Newton did indeed hypothesise.

For example, Newton accepted that the aforementioned corpuscles and the aether may well exist. However, he still noted that scientists simply couldn’t observe such things. Thus, at best, Newton concluded that what came to be called “hypothetical entities” (or “theoretical entities”) may indeed be of some use in scientific research. However, scientists should never play fast and loose with such entities.

My flickr account.


Thursday, 23 February 2023

Erwin Schrödinger on the Many Worlds of the Wave Function

Schrödinger once raised the possibility that the wave function’s “great many alternatives may not be alternatives [at all], but all really happen simultaneously”. He admitted that this idea may “seem lunatic” to “quantum theorists”. The physicist David Deutsch believes these words to be the earliest known reference to what came to be called “many worlds”.

Much that’s controversial and (as it’s often put) weird about quantum mechanics (or at least the wave function) is discussed in the following passage, which was once spoken by Erwin Schrödinger:

“Nearly every result [the quantum theorist] pronounces is about the probability of this or that or that … happening — with usually a great many alternatives. The idea that they may not be alternatives but all really happen simultaneously seems lunatic to him, just impossible. He thinks that if the laws of nature took this form for, let me say, a quarter of an hour, we should find our surroundings rapidly turning into a quagmire, or sort of a featureless jelly or plasma, all contours becoming blurred, we ourselves probably becoming jelly fish. It is strange that he should believe this. For I understand he grants that unobserved nature does behave this way — namely according to the wave equation. The aforesaid alternatives come into play only when we make an observation — which need, of course, not be a scientific observation. Still it would seem that, according to the quantum theorist, nature is prevented from rapid jellification only by our perceiving or observing it … it is a strange decision.”

The physicist Erwin Schrödinger (1887–1961) stated these words in 1952, in a lecture he gave in Dublin. At one point in that lecture, he said (to his audience) that his words may “seem lunatic”. One wonders, then, if he felt the same way at the height of the “first quantum revolution” (i.e., from the mid-1920s to the 1930s). Or was this a purely retrospective view?

More relevantly, the British physicist David Deutsch (for one) believes the passage above to be the earliest known reference to what came to be called the “many worlds” (see ‘Many-worlds interpretation’) of the wave function.

The Wave Function’s Contradictory Alternatives

Now let’s break the passage above down a little.

Schrödinger told us that

“[n]early every result [the quantum theorist] pronounces is about the probability of this or that or that […] happening — with usually a great many alternatives”.

Schrödinger then offers us his own interpretation of this. He continued by saying that all these probabilities

“may not be alternatives [,] but all really happen simultaneously”.

Well, according to the wave function, they actually do (or simply may) all happen simultaneously…

Or do they?

It depends.

The best way to characterise what Schrödinger meant by the words “great many alternatives” is by citing his own well-known cat experiment. Of course this is a thought experiment about a “classical” object — a cat! The two alternatives here are: cat alive/cat dead. In this thought experiment, then, both alternatives occur at one and the same time.
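In the later notation of quantum mechanics (a formalisation which postdates Schrödinger’s lecture, and not anything he himself wrote), the two alternatives can be expressed as a single superposed state:

```latex
|\psi\rangle \;=\; \tfrac{1}{\sqrt{2}}\,|\text{alive}\rangle \;+\; \tfrac{1}{\sqrt{2}}\,|\text{dead}\rangle
```

Both terms are present in the one state: neither alternative has been selected.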

Here’s the relevant story.

We can make sense of all this by bringing in another universe. That is, if we bring in another universe, then one cat is alive in that universe and another cat (its counterpart) is dead in our own universe (or vice versa). So when the box is opened, we find (say) only a dead cat. That box-opening is equivalent to an observation (or to the collapse) of the wave function.

And, of course, from the very beginning, Schrödinger believed that this collapse is (as it were) weirder than his own possibility of alternatives happening simultaneously.

Thus, on Schrödinger’s “lunatic” version, the wave function isn’t collapsed at all (or it’s characterised before any collapse). Instead, one cat is both dead and alive…

So why not argue that there’s a dead cat in our universe and a live cat (its counterpart) in another universe?

It must be stressed here that Schrödinger certainly didn’t put any of this in that way. He simply raised the possibility that all these alternatives can happen simultaneously (i.e., if seen in accordance with the wave function). And this, by inference, must be true of all the quantum (or micro) alternatives (i.e., not cats, but particles, etc.) which are part of each wave function.

The issue here can be summed up by saying that such probabilities (or ‘probability amplitudes’) are effectively concretised (or reified) by quantum theorists. Or perhaps it can be said that the wave function itself concretises all (as it were) its probabilities.
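As a rough modern sketch (an illustration in today’s terms, not anything from Schrödinger’s lecture), the wave function assigns each alternative a complex amplitude, and the Born rule converts each amplitude into a probability:

```python
import math

# Two hypothetical alternatives ("alive"/"dead") with complex amplitudes.
# The labels are illustrative; any two-outcome quantum system works the same way.
amplitudes = {"alive": complex(1 / math.sqrt(2), 0),
              "dead": complex(0, 1 / math.sqrt(2))}

# Born rule: the probability of an outcome is the squared modulus of its amplitude.
probabilities = {outcome: abs(a) ** 2 for outcome, a in amplitudes.items()}

print(probabilities["alive"])        # ~0.5 (within floating-point rounding)
print(sum(probabilities.values()))   # ~1.0: the alternatives exhaust the possibilities
```

The point, philosophically, is that both amplitudes sit in the one mathematical object until something outside the formalism selects a single outcome.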

So it can basically — as well as accurately — be said that with the (or a) wave function, there are actually many (to use Schrödinger’s words again) “alternatives happening simultaneously”. Thus, such weirdness is entirely a product of the wave function itself.

Do such alternatives simultaneously occur without the (mathematical) wave function?

Of course not.

Indeed, that question hardly makes sense.

We simply don’t know anything about these alternatives (or much else at the quantum level) without the wave function (as well as other mathematics). Thus, in a strong sense, such alternatives (whether happening separately or simultaneously) don’t so much as exist (or have any reality) without the wave function. Indeed, we have no right to speak of “reality” at all in separation from the mathematics.

[Wave functions were used well before the quantum wave function and Schrödinger’s equation.]

So it’s not a surprise that many quantum theorists — and others — have actually questioned the notion of what is called reality. More accurately, what is usually said is something like the following:

Without the mathematics, observations, tests, experiments, etc., there is no reality.

Yet without the wave function itself, there are no observations, experiments, tests, etc. in the first place (at least not ones which can be made scientific sense of).

Schrödinger was, of course, talking about the possibility of all these probabilities (or alternatives) happening simultaneously. Yet most quantum theorists don’t think in terms of all these alternatives existing together in physical reality. It’s the wave function itself which (as it were) makes it seem that way.

Yet in accordance with at least the Copenhagen interpretation, the quantum theorist has no right to say that these many alternatives do not all occur together. That is, if a theorist can’t say anything about an unobserved realm (or a realm beyond the wave equation), then what right has he to say that it can’t be the case that such alternatives all occur together? After all, the wave function is (as it were) telling him that they do all happen together. Thus, in order to demonstrate that they aren’t happening simultaneously, a quantum theorist would need to move beyond the wave function and the maths generally.

But how could he do that?

The Collapse of the Wave Function

Schrödinger pointed out something which has now become commonplace. He stated that there’s nothing in the wave function itself about collapse. Indeed, it was Niels Bohr (and then others) who used this idea to explain why we only find a single (to use Schrödinger’s word again) “alternative” at the end of an experiment, not a quantum superposition of alternatives (or states). What’s more, in a paper published in 1952, Schrödinger went even further when he stated that it’s “patently absurd” that the wave function should

“be controlled in two entirely different ways, at times by the wave equation, but occasionally by direct interference of the observer, not controlled by the wave equation”.
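In modern notation (again, a later formalisation rather than Schrödinger’s own symbols), the two “entirely different ways” can be set side by side. Between observations, the state evolves continuously under the wave equation; on observation, it jumps to a single alternative:

```latex
% (1) Evolution under the wave equation (continuous, deterministic):
i\hbar\,\frac{\partial\psi}{\partial t} = \hat{H}\,\psi

% (2) Collapse on observation (discontinuous, probabilistic, and not
%     derived from the wave equation):
|\psi\rangle = \sum_k c_k\,|k\rangle \;\longrightarrow\; |k\rangle
\quad \text{with probability } |c_k|^2
```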

To explain.

The option the quantum theorist has is to (as it’s often put) collapse the wave function. Only then is the possibility (or reality) of so many contradictory alternatives existing together no longer a problem. That is, the collapse (as it were) brings an end to all these alternatives existing together…

Except that this issue isn’t really solved, at least not philosophically.

That’s simply because we’ve moved from one situation (i.e., the wave function before its collapse) to another situation (i.e., the actual collapse of the wave function). Thus, one situation being non-problematic doesn’t render the prior situation non-problematic. In this case, then, there wouldn’t even be a collapse of the wave function if there wasn’t a previous wave function which (as it were) needed to be collapsed. Thus, you can’t have the non-problematic collapse without the prior problematic wave function (i.e., as it was before the collapse).

All this can be said to make the collapse of the wave function itself as problematic as Schrödinger’s many alternatives happening simultaneously. Indeed, on this reading, the collapse is even more problematic (or weird) than the prior wave function before it was collapsed!

Contradiction and Collapse

To recap.

Schrödinger appeared to suggest (or simply state!) that quantum theorists (at least the ones he was referring to) have wrapped themselves up in some kind of contradiction.

On the one hand, the quantum theorist Schrödinger referred to “grants that unobserved nature does behave this way”. On the other hand, the theorist also believes that “[t]he idea that they may not be alternatives but all really happen simultaneously seems lunatic to him”. Indeed, Schrödinger’s quantum theorist believed that this is “just impossible”!

Yet this is what the wave function tells us.

So does that mean that the wave function itself must be both lunatic and just impossible?

To continue with this story.

Firstly, we have multiple alternatives happening simultaneously. Then the

“aforesaid alternatives come into play only when we make an observation — which need, of course, not be a scientific observation”.

So one of those multiple alternatives comes into play. That is, the wave function is collapsed and what is left is a single one of the previous multiple alternatives. Indeed, it’s collapsed only when “we make an observation”. (Schrödinger states that the observation “need [] not be a scientific observation”.) Thus, what Schrödinger calls “nature” was behaving a certain way, and then it suddenly stops behaving in that certain way when an observation is made.

All this means that Schrödinger appears to have made a metaphysically realist claim (see ‘Metaphysical realism’). He claimed that nature was a certain way before any act of observation. Thus, on this reading at least, you couldn’t get any more realist (see ‘Philosophical realism’) than that. Granted, it’s essentially the wave function (or “what it says”) which is real. Yet the wave function is, after all, supposed to be telling us something about nature. And in that nature there are (or were) multiple (mutually contradictory) alternatives happening simultaneously! [See note at the end of this piece.]

Here it must be said (again) that the collapse of the wave function is as weird as (or even weirder than) the wave function (or what it tells us) itself — at least if we accept this account. After all, the wave function is a certain way, and then a mere observation stops it from being that certain way.

Thus, can we now conclude that a (mere) observation effectively changes reality?

Many Worlds and Jellification

This whole issue is rendered more complicated by Schrödinger’s use of the word “jellification”.

Is this Schrödinger’s hint that he also had a problem with the wave function as it was before any collapse? After all, Schrödinger did say that

“according to the quantum theorist, nature is prevented from rapid jellification only by our perceiving or observing it”.

What did Schrödinger mean by the word “jellification”?

Firstly, the multiple alternatives in the wave function instantiate (or will at least lead to) jellification. That is,

“if the laws of nature took this form for, let me say, a quarter of an hour, we should find our surroundings rapidly turning into a quagmire, or sort of a featureless jelly or plasma, all contours becoming blurred, we ourselves probably becoming jelly fish”.

In other words, the wave function’s alternatives would proliferate indefinitely if not collapsed (or observed). And because there would be so many alternatives jostling for the little space which the particular wave function captures, we’d end up with (after a “quarter of an hour”) a “quagmire” (or “featureless jelly”) of all the many alternatives coalescing together. And that would be (or is) largely because there’s no extra (abstract) room for them to do anything else.

And Schrödinger believed that all this is (to use his own word) “strange” (if not weird).

More importantly, Schrödinger appeared to accept the possible reality (or existence) of the multiple alternatives happening simultaneously. On the other hand, he was certainly unhappy with what’s supposed to happen due to an observation.

Does this mean that Schrödinger accepted the existence of what later came to be called “many worlds” (at least as they existed in this limited and restricted form)? In addition, did Schrödinger believe in these many worlds because, again, he also believed that the collapse (along with the emphasis on observation) is strange?

So is the existence of many worlds (as expressed by the wave function) actually less weird than the collapse of the wave function?

As Schrödinger argued, the wave function (as it were) allows a multitude of contradictory alternatives. What’s more, these alternatives could (or do) increase indefinitely — leading to jellification. And this is a problem certainly noted by many critics of many-worlds theory in more recent decades (see here).

This can also mean that (in a sense) the wave function must be collapsed in order to stop such jellification. Or, less strongly, the quantum theorist (or quantum experimenter) must collapse the wave function in order to make everything more amenable to scientific scrutiny. However, that wouldn’t mean that there was no prior jellification (or that multiple alternatives didn’t happen simultaneously). It simply means that the quantum theorist can’t do anything with such a strange reality. Thus, in a sense, the collapse of the wave function is simply a pragmatic act.

Thus, perhaps the collapse of the wave function isn’t faithful to reality at all. What’s more, many quantum theorists claim that the wave function isn’t even meant to be faithful to reality!

Conclusion

Schrödinger never mentioned many worlds.

However, if many contradictory alternatives really do happen simultaneously (such as a single particle being spin up and spin down at one and the same time, or being in place x and place y at one and the same time), then surely placing such things in different worlds will iron out such a contradictory (or “impossible”) reality. That is, even though the possibility of many worlds may itself seem bizarre to many people, it doesn’t (or at least it may not) actually involve any contradictions.

Finally, no one paid any attention to Schrödinger’s many-worlds possibility.

Perhaps that’s largely because he himself said it was “lunatic” and admitted that it seemed “impossible”. (At least it seemed that way to what Schrödinger called the “quantum theorist”.) Of course, Hugh Everett (1930–1982) took this idea up, though not necessarily because of anything Schrödinger said in 1952. Everett also introduced the idea of the Universe “splitting” into different versions of itself, which Schrödinger himself never referred to.

************************************

Note: Reality and Pythagorean Physics

The literature has it that there’s something more than — and prior to — the wave function: the quantum state. (More correctly, the quantum state of an isolated system.) The wave function is a mathematical description of that state.

In addition, the word “representation” is often used in quantum mechanics and for the wave function. That is, a given set of observables is represented. The wave function represents that quantum state.

All that, in a certain sense, must be obvious in that it can’t all be about the mathematics…

Or can it?

That’s unless one is a Pythagorean. A Pythagorean would say that even the observables are purely mathematical in nature. That’s primarily because observation in these quantum contexts is nothing like, say, observing the cat next door or even observing a neuron (or brain cell) under a microscope.

On the other hand, surely where there’s a representation, then there must also be something which is represented.

Some quantum theorists — including Erwin Schrödinger himself (along with David Bohm, Hugh Everett, etc.) — believed that the wave function must have a physical (or “objective”) existence. More famously, Albert Einstein believed that a complete description of reality should refer directly to a physical time and space. The wave function itself, on the other hand, is often said to refer only to an abstract mathematical (as well as Pythagorean?) space.

To come at this from a slightly different angle, one which specifically brings in Schrödinger’s equation:

Firstly, there’s the wave function, and only then do physicists use Schrödinger’s equation. More relevantly, this mathematical equation describes the wave function. Yet that wave function (or even the wave simpliciter) is itself mathematical (i.e., it’s a mathematical function). It’s also often seen as being exclusively about what physicists — and others — call “information”.

So, in a strong sense, maths is describing maths. Or, less strongly, one part of maths is effectively describing another part of maths.

In addition, the word “description” is often used to explain what both the wave function and Schrödinger’s wave equation do. So it’s not as if the wave function simply is what it is, and only then does Schrödinger’s wave equation describe and/or “solve” it. Instead, something that’s already mathematically descriptive (i.e., the wave function) is then solved by something which is also mathematically descriptive (i.e., Schrödinger’s wave equation).



Thursday, 16 February 2023

If Consciousness Is a Natural Phenomenon, Then…


If consciousness is a natural phenomenon, then it must also be a scientific phenomenon.

So isn’t it the case that all human beings — and probably many animals — instantiate (or “have”) what we call consciousness?

Yet it can be supposed that even the foregoing seemingly innocent set of statements and questions may be problematic. That’s primarily because it all largely depends on what we take consciousness to be in the very first place!

For example, on certain accounts, it is indeed the case that consciousness is a natural phenomenon. However, on other (supernatural, religious, etc.) accounts, consciousness is not deemed to be natural at all.

All this brings on board the parallel problem of defining the word “natural” (at least within this limited context).

Naturalisation

So is it the case that consciousness must be naturalised in order for it to be a fit subject for science?

However, can consciousness be a natural phenomenon and still be a tricky thing to (fully) naturalise?

Indeed, isn’t it conceivable that a given natural phenomenon can be recalcitrant to naturalisation?

If one is a philosophical naturalist, then literally everything is natural because there’s nothing else for it to be. Yet naturalising any given x may still be problematic or difficult.

Of course, all these statements and questions seem to be perfectly applicable to consciousness.

In any case, it will certainly be argued that consciousness isn’t like other natural phenomena (such as photosynthesis, combustion, cognition and even life itself). Indeed, even most naturalisers of consciousness will freely admit that consciousness isn’t really like other natural phenomena. However, and in many respects, no given natural phenomenon is like any other natural phenomenon. (Think here of an electron’s charge, and then compare that to the mating habits of a baboon.)

Consciousness most certainly does have distinct features…

Yet so too does every other natural phenomenon. (Now think of how high a flea can jump relative to its size, or consider superfluidity.)

So are the distinct characteristics of consciousness more distinct than all these other examples of (as it were) natural distinctness?

How on earth could a question like that be answered?

And isn’t it actually the case that we adult human beings take consciousness to be ultra-distinct and ultra-special simply because consciousness is very important to us? In addition, isn’t all this at least partly down to the fact that we have (at least on most accounts) first-person access to our own consciousness?

Consciousness and Science

It may not be that consciousness is unnatural, supernatural or even weird: it may simply be that it’s not amenable to the scientific methods scientists use for other natural phenomena.

Yet, here again, most natural phenomena aren’t analysed, tested or observed by the same scientific methods or in the same scientific ways either.

For example, what is done to discover the wave function of an electron is worlds away from what’s done to discover why birds flock together. Or, on a broader scale, the scientific methods and ways of biology and neuroscience are worlds away from the scientific methods and ways of quantum mechanics and sociology.

In any case, it’s probably the lack of observation that clinches it when it comes to consciousness.

A human subject can’t observe another subject’s consciousness. Yet he can observe his own consciousness. (There’s a problem here with the word “observe” when it comes to observing one’s own consciousness.)

Yet, here again, this human distinctness is deflated in the sense that physicists don’t actually observe electrons, quarks or fields either. Neither do scientists actually observe the Earth’s inner core or the most distant stars in the most distant galaxies. Less grandly, there are a whole host of natural phenomena in psychology, biology, chemistry, astrophysics, sociology, history, etc. which aren’t observed in any obvious or literal sense.

Moreover, without theory, such natural phenomena wouldn’t be the subject matter of these sciences at all. And, of course, the same may well be true of consciousness. Thus, the least we can say is that it’s observation + theory (or theory + observation) which accounts for nearly all the natural phenomena of the sciences — and that includes consciousness itself.

My flickr account.


Wednesday, 15 February 2023

Four A Priori Statements Taken To Be True Without Evidence, Observation, Etc.

(1) “Whatever is green is coloured.”
(2) “Whatever is square is rectangular.”
(3) “The sum of three and two is five.”
(4) “Necessarily, if ‘p or q’ is true and q is false, then p is true.”

Intuitively, the statements directly above do seem to be known to be true without evidence, observation, experience, data, etc. That is, they seem to be a priori in nature.

However, readers will now need to know what the word “intuitively” (in this context at least) means and whether intuition is reliable at all.

More technically and relevantly, the very idea of a priori knowledge will need to be tackled.

The first thing to say about statements (1) to (4) is that even if they can be known to be true a priori, then they’re still very different from each other.

For a start, (1) and (2) seem to be conceptual statements — even if (2) is geometrical and (1) isn’t. (4) is a logical statement in which the semantic content of the symbolic letters doesn’t need to be known. And (3) is an arithmetical (possibly metamathematical — see later section) statement.

What’s more, because we’ve brought on board conceptual, arithmetical, geometric and logical truths, can it still be argued that all these statements can be known to be true in exactly the same way?

(1) “Whatever is green is coloured.”

Two immediate things can be asked about this statement:

(1) Is it about the world? Or:
(2) Is it about the concepts [green] and [coloured]?

Two further statements (i.e., rather than questions) can now be made:

(1) Whatever is green (i.e., any green thing in the world) must also be coloured. 
(2) Contained in the concept [green] is the concept [coloured].

At first sight, the statement “Whatever is green is coloured” appears to be a purely conceptual truth. That is, if something is green, then it must be coloured. That’s simply because green is a colour.

If we take on board (2) above, then it can be said that if the concept [coloured] is contained in the concept [green], then we don’t need to (as it were) check the world to see if this statement is true. Thus, surely it’s an a priori truth.

That said, questions have been asked about the a priori status of this precise statement (i.e., “Whatever is green is coloured”), as well as about others like it (such as “Whatever is square is rectangular”, which will be discussed in a minute).

So what is it for one concept to be contained in another concept?

Surely the word “contained” is purely metaphorical. (Can it be anything else?)

The philosopher W.V.O. Quine famously denied that the statement “Whatever is green is coloured” (as well as others like it) is what is called analytic. Or, more correctly, Quine argued that the word “analytic” (or “analyticity”) was never satisfactorily explained (or defined) in ways which weren’t circular. (See Quine’s ‘Two Dogmas of Empiricism’.)

So do we require the notion of analyticity in order to claim that the statement “Whatever is green is coloured” is an a priori truth?

Well, now it must be explained what an analytic statement is.

It was the logical positivists who popularised the term analytic to explain a priori knowledge. So as the oft-used phrase has it:

Analytic statements are deemed to be (as Quine himself put it) “true by virtue of meanings and independently of fact”.

This position on analytic statements was also supposed to have the important consequence that even though an analytic statement is known to be true a priori, a special faculty of intuition isn’t actually required in order to know that. Instead, it is known to be true (at least partly) because the meanings of the words within the analytic statement are already understood.

Thus, there doesn’t seem to be anything intuitive about understanding the terms “green” and “coloured”. Consequently, there’s nothing intuitive about understanding the whole statement (i.e., “Whatever is green is coloured”) either. Such understanding is simply a consequence of understanding the meanings of the words and the nature of the grammatical (or syntactical) construction of the whole sentence.

Still, in our case, the statement “Whatever is green is coloured” does indeed seem to be true by virtue of meanings and independently of fact.

As already hinted at, it is evident that the person who knows that the statement “Whatever is green is coloured” is true must have already learned the English language and must therefore understand what the statement means. So he must also know the meanings of the words “green” and “coloured”, as well as the words “whatever” and “is”.

Of course, all apriorists happily accept all this.

Apriorists believe that a priori knowledge is about what occurs only after human beings have acquired a language, and then also understood the meanings of individual words and whole sentences.

This was (at least partly) summed up by Immanuel Kant (1724–1804). He wrote:

“Although all our knowledge begins with experience, it does not follow that it arises from experience.”

So, in simple terms, a priori knowledge is independent of current experience or any requirement for new (or further) knowledge. Yet a priori knowledge still isn’t independent of all experience.

So, having acquired the English language, etc., such a person doesn’t need to (as it were) go elsewhere to know that the statement “Whatever is green is coloured” is true.

On this account, then, acts of deduction, the recognition of tautologies, the truths of basic mathematical statements, etc. don’t require new knowledge or new experiences either.

More relevantly, all this also applies to statements (1) to (4) above.

(2) “Whatever is square is rectangular.”

This is similar to (1) above.

Just as something green must also be coloured, so something square must also be rectangular.

[The fact that all squares are rectangles, but not all rectangles are squares, doesn’t impact on this statement’s a priori status.]

It is impossible for x to be square and not be a rectangle.

So is this a truth about the world?

Yet even if it is a truth about the world, human beings have still created contingent natural-language terms to describe these aspects of the world. Humans have also made it a geometrical (even grammatical) rule that all squares must be classed as rectangular.

What’s more, and in a Platonist sense, there are no perfect squares in the world (or in nature). Thus, does that mean that such terms and rules are (as it were) abstractions and/or simplifications? And is it these factors which determine the a priori nature of the statement “Whatever is square is rectangular”?

These are complex metaphysical questions.

However, these questions don’t seem to impact on the a priori status of the statement, “Whatever is square is rectangular”. That seems to be because whatever the correct metaphysics of squares is, this statement can still be known to be true without consulting the world. That is, no reader of this statement (if he or she understands it) need actually check any squares (or what are taken to be squares) out there in the world. Thus, in that sense, the reader must assume (as it were) Platonic squares, as well as Platonic rectangles.

(3) “The sum of three and two is five.”

Interestingly, what’s being discussed here isn’t this example of arithmetic:

3 + 2 = 5

It’s this statement:

“The sum of three and two is five.”

This means that it’s not only the arithmetical equation embedded within the sentence (i.e., “three and two is five”) that’s known to be true a priori, but the entire sentence (i.e., “The sum of three and two is five”).

In any case, there are many philosophical takes on mathematical truth, and whichever one is adopted, it doesn’t seem to have any impact on the a priori status of this statement.

For example, take the mathematical constructionist (i.e., rather than the narrower and more precise mathematical constructivist) position. On this view, the statement “The sum of three and two is five” (or even the statement “3 + 2 = 5”) is taken to be true purely because of the meanings we give the words “two”, “three”, “sum” and “equals” (or the meanings we give the symbols “2”, “3”, “+” and “=”), and because of the rules we’ve established (via convention) which make it the case that 5 is the sum of 3 and 2. Even so, this statement is still known to be true a priori regardless of one’s metaphysics. So even within a constructionist philosophy, there may still be a place for a priori knowledge — or even a priori truth.
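The constructionist point can be illustrated with a toy Peano-style construction. This is a sketch of my own devising, not anything from the constructionist literature: the names ZERO, succ and add are purely illustrative. Numbers are defined as iterated successors of zero, and addition by two rules; “3 + 2 = 5” then falls out of the definitions alone, with no checking of the world required.

```python
# A toy Peano-style construction: numbers as iterated successors
# of zero, addition defined by two conventional rules.
ZERO = ()

def succ(n):
    # The successor of n simply wraps n in a tuple.
    return (n,)

def add(m, n):
    # Rule 1: m + 0 = m.
    # Rule 2: m + succ(n) = succ(m + n).
    return m if n == ZERO else succ(add(m, n[0]))

two = succ(succ(ZERO))
three = succ(succ(two))
five = succ(succ(three))

print(add(three, two) == five)  # prints True
```

Nothing here settles the metaphysics, of course; it merely shows that once the meanings and rules are fixed, the truth of the equation is a consequence of the definitions themselves.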

Thus, whenever anyone who knows a little arithmetic reads this statement, he will know that it’s true. It doesn’t seem to matter that he may also believe that it’s true-by-convention (or due to “historical accident”): he still doesn’t need to consult the world to establish its truth (or, perhaps, only its correctness).

(4) “Necessarily, if ‘p or q’ is true and q is false, then p is true.”

This can be seen to be true even without knowing what the letters p and q stand for (or what their semantic content is). In fact, it doesn’t even matter what p and q stand for, as long as they stand for propositions (or statements) which are taken to be either true or false.

Thus, the a priori reasoning here is purely logical.

That is, the statement “p or q” means that at least one of p and q is true. (In logic, “or” is standardly inclusive, so it allows that both p and q are true.) What the statement does rule out is that both p and q are false.

Thus, since we’re told that q is false, and that either p or q is true, p must be true.
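The validity of this inference (disjunctive syllogism) can also be checked mechanically, by enumerating every possible assignment of truth values to p and q. This is a minimal sketch of that check, not anything from the original discussion:

```python
from itertools import product

# Check the inference: from "p or q" and "not q", infer p.
# The inference is valid iff every truth assignment that makes
# both premises true also makes the conclusion true.
valid = all(
    p  # the conclusion
    for p, q in product([True, False], repeat=2)
    if (p or q) and not q  # keep only assignments where the premises hold
)
print(valid)  # prints True: there is no counterexample
```

Only one of the four assignments (p true, q false) satisfies both premises, and in it p is true, which is exactly what it means for the statement to hold at every possible world.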

So what about the opening word “necessarily”?

The word “necessarily” is modal in nature. It tells us that the statement which follows it must be true. That is, there’s no possibility of it being false. It is, therefore, a tautology or a logical truth. In other words, the statement is true simply because of its logical form or grammar.

In terms of a priori knowledge, then, this statement can be known to be true simply by reading and understanding it. That is, the semantic content of the symbols p and q needn’t be known. More mundanely, one needn’t check the world, do research, find evidence, observe anything, read a history book, etc. in order to establish the truth (or falsehood) of this statement. However, (as argued earlier) one must have previously learned a little logic, the English language, etc. in order to understand the sentence which expresses the statement under consideration.

Yet, in a sense, all this also applies to the statement “Snow is white”.

After all, once one learns that snow is [always?] white, then when one comes across the statement “Snow is white”, one can say it is “true” without further research, finding evidence of new snowfalls, “checking the facts”, etc.

In that case, then, how does the statement “Snow is white” differ from the statement “Necessarily, if ‘p or q’ is true and q is false, then p is true”?

Well, the statement “Snow is white” isn’t logical. It’s empirical. It’s also contingently true, rather than necessarily true. In other words, it could have been the case that snow isn’t white. (Or could it?)

On the other hand, there is no situation in which the statement “Necessarily, if ‘p or q’ is true and q is false, then p is true” could be false. This statement, then, is necessarily true in that its negation would be self-contradictory. Or, in a different jargon, the logical statement is true at every possible world.

So although the statement “Snow is white” can be assented to without further evidence, it still required evidence to be established in the first place. Yet that doesn’t apply to the statement with p’s and q’s in it. That statement isn’t empirical in nature, even if p and q themselves can be taken to have empirical content.
