Saturday 13 July 2019

David Chalmers' Fixation With Logical Possibility (1)




i) Introduction
ii) Six Logical Possibilities Before Breakfast
iii) René Descartes
iv) Logical Possibilities as Tools

There isn’t much technical argumentation in the first part of this piece about David Chalmers’ “arguments from logical possibility”. I suspect that many of Chalmers’ logical possibilities are indeed logical possibilities. I also suspect that many of his arguments are airtight. Thus, bearing that in mind, it’s not surprising that the philosopher David Lewis wrote the following about Chalmers’ book The Conscious Mind:

“Legions of materialists are no doubt busy writing their rejoinders; but there will be few points left for them to make that Chalmers hasn’t made already. We of the materialist opposition cannot go on about how he has overlooked this and misunderstood that — because he hasn’t.”

Nonetheless, it’s worth stating here that this piece has virtually nothing to do with my own position on materialism. Instead, it’s about the philosophical status of logical possibilities — a status which even an “anti-materialist” could (at least in theory) accept.

I’ve just mentioned Chalmers’ book The Conscious Mind. Chalmers introduces a logical possibility on almost every page of that book. In it he talks about “an angel world”, “flying telephones”, “ectoplasm” and a monkey who writes Hamlet. Admittedly, most of these logical possibilities are of little interest to Chalmers. That’s either because they’re hugely improbable or because they don’t help him philosophically. It’s primarily (philosophical) zombies which he specialises in — and it is these logically-possible creatures which help him philosophically. Indeed isn’t the position of panpsychism (which Chalmers also — at least in part — endorses) itself very reliant on logical possibility? (For example, it’s logically possible that an electron or stone has “phenomenal properties”.)

Chalmers’ keenness on logical possibilities isn’t only a philosophical position in its own right: logical possibilities also help him advance one of his substantive positions within philosophy. To quote Chalmers himself:

“I am not the first to use the argument from logical possibility against materialism. Indeed, I think that in one form or another it is the fundamental anti-materialist argument in the philosophy of mind.”

Chalmers and other “anti-materialists” require logical possibilities in order to advance anti-materialism. In other words, their arguments only work if one countenances all sorts of strange logical possibilities. This isn’t to say that materialists (as well as other kinds of philosopher) don’t themselves rely on logical possibilities in various ways — though they don’t do so nearly as often as philosophers like Chalmers.

Chalmers is explicit about his philosophical use of logical possibilities when he says that

“the question is not whether it is plausible that zombies could exist in our world, or even whether the idea of a zombie is a natural one; the question is whether the notion of a zombie is conceptually coherent”.

Thus if state of affairs A (or proposition P) is “conceptually coherent”, then A (or P) is also logically possible. All this is tied together by these three statements:

1) That which is conceivable is therefore also logically possible.
2) State of affairs A (or proposition P) is conceivable because it is conceptually coherent.
3) State of affairs A (or proposition P) is conceptually coherent because it is conceivable.
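
Put schematically (the notation is my gloss, not Chalmers’ own: read “C(P)” as “P is conceivable” and “◇P” as “P is logically possible”):

C(P) → ◇P
Coherent(P) ↔ C(P)

On this picture, then, conceptual coherence, conceivability and logical possibility stand or fall together.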

So when a philosopher cites various logical possibilities, all sorts of philosophical positions become available… or possible. That’s primarily because once a given philosopher sets a particular logical possibility among the pigeons, then it’s surely the duty of other philosophers to tackle that logical possibility. And if they don’t, then surely they’re philosophical philistines… or “verificationists”. Indeed Chalmers lays his cards on the table when he states this:

“In general, a certain burden of proof lies on those who claim that a given description is logically impossible.”

This means that a great deal of weight is attached to logical possibilities. Their importance is made clear when Chalmers cites one specific example:

“So even if a zombie world is conceivable only in the sense in which it is conceivable that water is not H2O, that is enough to establish that consciousness cannot be reductively explained.”

In other words, logical possibilities (as well as — as it were — conceivables) serve a larger philosophical purpose.
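
Spelled out in that shorthand (again my gloss rather than Chalmers’ own notation: let P be the conjunction of all physical facts and Q the facts about consciousness), the move runs roughly as follows:

C(P ∧ ¬Q) (a zombie world is conceivable)
C(P ∧ ¬Q) → ◇(P ∧ ¬Q) (the conceivability-possibility bridge)
∴ ◇(P ∧ ¬Q) (so a zombie world is logically possible)
∴ P does not entail Q (so the physical facts don’t fix the facts about consciousness)

And if the physical facts don’t entail the facts about consciousness, then consciousness can’t be reductively explained in physical terms. That is Chalmers’ conclusion.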

My points above may seem too strong. Nonetheless, Chalmers did say that

“the argument from logical possibility [is] the fundamental anti-materialist argument in the philosophy of mind”.

In other words, citing logical possibilities isn’t just another tool — it is the tool with which to fight materialism (and perhaps much else besides). Yet Chalmers suggests that we shouldn’t get “too worried about odd things that happen in logically possible worlds”. However, he then immediately takes that back by saying that “there is room to be perturbed by what is going on”.

Six Logical Possibilities Before Breakfast

David Chalmers says that there are innumerable logical possibilities. Bertrand Russell was well aware of that point long before Chalmers, as when he wrote the following:

“No logical absurdity results from the hypothesis that the world consists of nothing but myself… and that everything else is mere fancy.”

Yet, despite writing the above, Russell went on to say that

“[b]ut although this is not logically impossible, there is no reason whatever to suppose that it is true; and it is, in fact, a less simple hypothesis, viewed as a means of accounting for the facts of our own life, than the commonsense hypothesis that there really are objects independent of us, whose action on us causes our sensations”.

So we need to be given philosophical reasons as to why we should consider these (or other) logical possibilities — otherwise philosophers may end up spending their entire lives considering every logical possibility they can’t (as the philosopher David Lewis put it) “properly ignore”.

So if I carry on with Russell’s theme: 

1) When I wake up tomorrow morning, it’s logically possible that I will still be asleep.
2) If I do genuinely wake up tomorrow morning (how could I know?), then it will be logically possible that the world’s entire population will be dead.
3) And when I then move over to the tap, it will be logically possible that poison — not water — could come out of it.
4) If it were to be genuine water, then it will also be logically possible that I could choke on that water.
5) If I then look out of the window, it will be logically possible that the town I see is a simulation of what I saw the day before… And so on and so on and so on.

In other words, I could think of six logical possibilities before each breakfast.

As with Russell, should I say that “although [they are] not logically impossible, there is no reason whatever to suppose that [they are] true”?

Russell’s next point is that the logically-possible hypothesis is (often?) “less simple” than the everyday (or common sense) one. He told us that 

“it is, in fact, a less simple hypothesis, viewed as a means of accounting for the facts of our own life”. 

In my examples, this means that: 

1) It is a less simple hypothesis to believe that I’m currently dreaming. 
2) It is a less simple hypothesis to believe that the entire world population is now dead. 
3) It is a less simple hypothesis to believe that poison (rather than water) will come out of my tap. 
4) It is a less simple hypothesis to believe that the water I drink will choke me.
5) It is a less simple hypothesis to believe that my window-view is a mere simulation of the facts.

Indeed isn’t it also less simple to believe in Chalmers’ zombies? 

However, Chalmers doesn’t care about that because the logical possibility of zombies is a means to a philosophical end.

Despite that, Chalmers appears to question his own use of logical possibilities when he uses the word “plausible”. That is, an indefinite number of things are logically possible — but how plausible (as well as probable) are they? Thus all Chalmers’ own logical possibilities are “logically compatible with the data”. However, “this is not enough to make them plausible”. To be more precise, this talk of plausibility is set within the well-known context that

“[f]or any scientific theory one can easily construct an ad hoc hypothesis that is empirically equivalent”.

In other words, as with such empirically equivalent theories, many of Chalmers’ logical possibilities are such that there can be no empirical way of establishing either their truth or their falsity. They’re literally beyond the empirical or observational. That’s primarily because possibilities aren’t even meant to be either true or false. Possibilities are simply meant to be… possible. Then again, logical possibilities could become true or false — it’s just that, as mere possibilities, they aren’t yet either.

René Descartes

Much of this talk about logical possibilities goes back to Descartes and his own arguments from logical possibility; which have inspired Chalmers (even if he updates them) and many other contemporary philosophers. Chalmers writes:

“[S]ome will find the argument for dualism that I have given reminiscent of the argument given by Descartes. Descartes argued that he could imagine his mind existing separately from his body, so his mind could not be identical to his body.”

As with my questions about Chalmers’ conceivings (or conceivables): what, exactly, did Descartes imagine? Of course the question of the status of imaginability in philosophical reasoning is often asked. What isn’t often asked is what, exactly, is being imagined in the first place.

This may remind readers of Bishop Berkeley’s argument that those who imagine an event which occurs without observers are actually imagining what it would be like from the observer’s perspective (e.g. when a tree is falling down). That is, we imagine some kind of disembodied mind (if with sensory receptors) observing the event. (Of course the idea of a disembodied mind observing a tree falling down is ironic when set within the context of this discussion about Descartes’ focus on imaginability and his belief in mind-body dualism.)

So did Descartes really imagine a disembodied mind? If he did, then what form did it take? Can a disembodied mind experience anything or does it simply indulge in pure thought? What kind of pure thought would that be? (Descartes did, after all, exclude sensations and the senses from the mind.)

In any case, Chalmers then goes on to say that

“[t]his sort of argument [i.e., Descartes’] is generally regarded to be flawed: just because one can imagine that A and B are not identical, it does not follow that A and B are not identical (think of the Morning Star and the Evening Star, for example)”.
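
In the shorthand used earlier, the flaw is the inference from C(a ≠ b) to a ≠ b. One can conceive that the Morning Star is not the Evening Star; yet they are one and the same object (the planet Venus). So the conceivability of non-identity doesn’t establish non-identity.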

Again it can be asked whether or not Descartes did imagine a disembodied mind. If he didn’t, then this question of identity between A and B doesn’t even arise. Chalmers then makes the obvious point:

“Might not my argument make a similar mistake? The zombie world only shows that it is conceivable that one might have a physical state without consciousness; it does not show that a physical state and consciousness are not identical.”

So Chalmers offers his own Cartesian variant.

As it is, I simply can’t make sense of Chalmers’ riposte to (as it were) naïve Cartesianism. He writes:

“The form of the argument is not, ‘One can imagine physical P.’ The form of the argument is rather, ‘One can imagine all the physical facts holding without the facts about consciousness holding, so the physical facts do not exhaust all the facts.’…”

To state the obvious, Chalmers is still relying on imagination here. In other words, is there really a big difference between

“One can imagine a physical P”

and

“One can imagine all the physical facts holding without the facts about consciousness holding”?

It’s still imagination which holds the trump card in both cases. And it’s this reliance on imagination that’s being questioned in both cases. At first glance, the words “[o]ne can imagine all the physical facts holding without the facts about consciousness holding” hold no water. In fact I have a problem knowing what they even mean.

For example, if we invert the argument, what would we imagine if we imagined “all the physical facts holding” and consciousness holding too? How do you imagine the consciousness of another being or entity? Through its linguistic and physical behaviour? Well, Chalmers himself wouldn’t accept that because it would be what some people call “reductionist”. So, again, what would we be imagining?

This is also like the point made about monism and pluralism by A.J. Ayer. He said that if monism were true, then the world would be exactly the same as that experienced by a pluralist. Similarly, how would

imagining all the physical facts holding without consciousness

differ from

imagining all the physical facts holding with the addition of consciousness?

Logical Possibilities as Tools

Despite all the above, it’s almost as if (at least at times) Chalmers wants to get logical possibility out of the way in order to deal with what he calls “empirical possibility”. Indeed he is explicit about this when he writes the following:

“It is useful to think of a logically possible world as a world that it would have been in God’s power (hypothetically!) to create, had he so chosen.”

In that sense, logical possibilities play the role that possible worlds play for those who don’t believe in their actual existence as “concrete” entities. (David Lewis, by contrast, did believe in possible worlds as concrete entities: hence his phrases like “worlds like our own”.)
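
What do possible worlds standardly do? Their main job is semantic. The textbook clause for possibility runs as follows (a sketch of the standard Kripke semantics, not anything peculiar to Chalmers):

◇P is true at a world w if and only if P is true at some world w′ accessible from w.

Logical possibility is then usually treated as the limiting case in which every world is accessible from every other.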

The question, then, is what purpose these logical possibilities and possible worlds serve if they aren’t actually “actual”, “concrete” or instantiated. In the possible-worlds case, much has been written about their efficacy for creating a semantics of modal terms and statements (as the clause above illustrates). One would assume that something similar holds for Chalmers’ own logical possibilities (such as zombies, “dancing qualia”, etc.). So it’s useful that Chalmers himself discussed his position on possible worlds, since what he says may provide us with a way in to his position on logical possibility. Chalmers writes:

“I will not engage the vexed question of the ontological status of these worlds, but simply take them for granted as a tool…”

So I can rewrite that passage in the following manner:

I will not engage the vexed question of the ontological status of these logical possibilities, but simply take them for granted as a tool.

David Lewis (as hinted at a moment ago) asked cogent questions about all our talk of possibilities… and necessities. That is, rather than stress the fact that we can conceive of possibilities, Lewis was concerned with what logical and ontological status these possibilities have. As with “Plato’s beard”, the thought is that the very fact that we can conceive of them must surely mean that they have some kind of ontological status. According to Ted Sider’s David Lewis,

“we do have a reason to believe in his possible worlds: only by believing in them can we demystify necessity and possibility”.

Sider also offers us a pragmatic take on possible worlds which can also be applied to logical possibilities. He tells us that “it is sometimes reasonable to postulate things for theoretical reasons”. For example, “no one has ever directly perceived an electron”; though physicists “postulate electrons to explain the results of the experiments they perform”.

Sider then goes on to say that certain philosophers believe that possibilities (such as unicorns or Tom Hanks being a serial killer) have some kind of being even though they aren’t actual. Are these possibilities what Sider calls “ghostly things”? Not really. That’s because ghosts are believed to exist by some people. They simply have the ghostly property of, say, intangibility. (In other words, ghosts aren’t meant to be abstract entities.) Or as Sider puts it:

“Rather than making it the case that unicorns are possible, the existence of a ghostly unicorn would just mean that ghostly things are actual.”

So how should we see Chalmers’ logical possibilities within the context just discussed? In Sider’s words, “if possibilities are not ghostly entities, then what are they?”. If possibilities, such as Tom Hanks being a serial killer, are not ghostlike or anything else, then what are we talking about when we talk about them? What is Tom Hanks the serial killer, Pegasus or even the round square? 

Indeed what philosophical and actual status do Chalmers’ zombies have?


***************************

To follow: 'David Chalmers' Fixation With Logical Possibility (2)'

Contents:

  1. Logical Possibility and Natural Possibility
  2. Zombies
  3. Saul Kripke
  4. Chalmers & Goff: Conceivability to Possibility
  5. Two More Conceivings




Sunday 9 June 2019

Murray Gell-Mann on Complexity (2)



i) Introduction
ii) Additional Information
iii) Simplicity and Complexity
iv) Complexity ≠ (Strong) Emergence
v) The Autonomy of Higher Levels
vi) Higher-level Laws

[The short biographical introduction which opens this piece is a copy-and-paste from Part (1) – 'Murray Gell-Mann on Reductionism'.]

Murray Gell-Mann died on the 24th of May, 2019.

In 1964 Gell-Mann postulated the existence of quarks. (The name was coined by Gell-Mann himself and it's a reference to the novel Finnegans Wake, by James Joyce.) Quarks, antiquarks and gluons were seen to be the underlying elementary constituents of neutrons and protons (as well as of other hadrons). Gell-Mann was then awarded the Nobel Prize in Physics in 1969 for his contributions and discoveries concerning the classification of elementary particles and their interactions.

More relevantly to this piece: in 1984 Gell-Mann was one of several co-founders of the Santa Fe Institute, a research institute in New Mexico. Its job is to study complex systems and to advance the interdisciplinary study of complexity theory.

Gell-Mann wrote a popular science book about physics and complexity science, The Quark and the Jaguar: Adventures in the Simple and the Complex, in 1994. Many of the quotes in this piece come from that book.

**************************************

The following words of Lee Smolin (an American theoretical physicist) sum up both Murray Gell-Mann's work and the man himself. (At least as they are relevant to this piece.) Firstly he explains Gell-Mann's work:

“[P]hysics needs a new direction, and the direction should have something to do with the study of complex systems rather than with the kind of physics [Murray Gell-Mann] did most of his life.”

Then Smolin continues with a few words on Gell-Mann himself:

“The fact that after spending a life focused on studying the most elementary things in nature Murray can turn around and say that now what's important is the study of complex systems is a great inspiration, and also a great tribute to him.”

Of course all the above is hardly a philosophical or scientific account of the need to move from the “elementary” to the “complex”. However, it does hint at the importance of gaining a broader picture of nature (or the universe). And that's what both Smolin himself and Gell-Mann realised. (In Smolin's own case, he moved from theoretical physics to adding cosmology and philosophy to his repertoire.)

Despite that, surely it can't be said that “what's important is the study of complex systems”. That's simply to reverse the “reductionist hierarchy”. Complex systems are simply part of the picture: not the most important part of the picture. Indeed it seems a little naïve to replace that previous ostensible hierarchy with a new one.

(Lee Smolin's take on Gell-Mann isn't surprising when one bears in mind the fact that he advances the philosophical position called relationalism.)

Murray Gell-Mann himself did appear to offer us a middle way between (strong) reductionism and the complete autonomy of the individual (“special”) sciences.

Gell-Mann believed that it's all about what he called the “staircases” between the sciences. As Gell-Mann put it (in the specific case of the relation between the levels of psychology and biology):

“Many people believe, as I do, that when staircases are constructed between psychology and biology, the best strategy is to work from the top down as well as from the bottom up.”

What's more:

“Where work does proceed on both biology and psychology and on building staircases from both ends, the emphasis at the biological end is on the brain (as well as the rest of the nervous system, the endocrine system, etc), while at the psychological end the emphasis is on the mind—that is, the phenomenological manifestations of what the brain and related organs are doing. Each staircase is a brain-mind bridge.”

Interestingly enough, a man who has often been accused of “reductionism” (the American biologist and naturalist E.O. Wilson) expressed a similar view in the following:

“Major science always deals with reduction and resynthesis of complex systems, across two or three levels of complexity at a step. For example, from quantum physics to the principles of atomic physics, thence reagent chemistry, macromolecular chemistry, molecular biology, and so on – comprising, in general, complexity and reduction, and reduction to resynthesis of complexity, in repeated sweeps.”

So instead of Gell-Mann's simplicity and complexity, in this case we have the “reduction” and “resynthesis” of complex systems in “repeated sweeps”.

In addition, the philosopher Patricia Churchland (who classes herself as a “reductionist”) also advances a position which is similar to Gell-Mann's. In her case, she confronts the neuroscience-versus-psychology debate. And, in so doing, she mollifies psychologists about that scareword “reductionism” by saying that the

“reductionist research strategy does not mean that there is something disreputable, unscientific or otherwise unsavoury about high-level descriptions or capacities per se”.

The words above can be summed up in this way:

i) Simply because a scientist (or philosopher) says that x can be reduced to y (not necessarily without remainder),
ii) it certainly doesn't follow that this scientist (or philosopher) also believes that x is (to use Churchland's words) “disreputable, unscientific or otherwise unsavoury”.

Then Patricia Churchland goes on to say something that may surprise some philosophers. She argues that reductionism can exist side-by-side with what she calls “high-level descriptions or capacities”. This too perfectly expresses Gell-Mann's own position (as we'll see).

To return to E.O. Wilson, he was also well aware that the “very word 'reductionism'” has a “sterile and invasive ring, like a scalpel or catheter”. He went on to say that the

“[c]ritics of science sometimes portray reductionism as an obsessional disorder, declining towards a terminal stage one writer recently dubbed ‘reductive megalomania’”.

Additional Information

A seemingly extreme reductionist position is actually put by Gell-Mann himself (in relation to the domain of biochemistry). He writes:

“The proponents of this view are saying in effect that going from the fundamental laws to the laws of biochemistry involves almost no new information, and thus contributes very little effective complexity.”

Here the case for complete reduction is expressed in terms of information content. Or, as Gell-Mann puts it, “going from the fundamental laws to the laws of biochemistry involves almost no new information”. Nonetheless, here we also have “complexity” alongside possible reduction. That is,

“a computer might have to do a great deal of calculating to derive the near-uniqueness of biochemistry as a theoretical proposition from the fundamental laws of physics”.

That biochemical complexity is compounded by the fact that it “also depends in an important way on history”. (Gell-Mann mentions “additional information” and “history” many times.) For example:

“The laws of biology do depend on the laws of physics and chemistry, but they also depend on a vast amount of additional information about how those accidents turned out.”

However, that seems like the problem of complete knowledge, rather than an argument against reduction… Unless, of course, a lack of complete knowledge rules out reduction. But that would still mean that a reduction is possible… “in principle”. After all, that additional information may be entirely peripheral or irrelevant. (In the sense that if a detective were investigating a killing, telling him about the colour of the neon signs above the dead body wouldn't help.)

So “accidents” and “additional information” have always been known about (or accepted) by physicists – even by “reductionists”. It's just that they factored out their relevance. Were they right to do so? (Lee Smolin – with his “physics in a box” – and the philosopher Nancy Cartwright question all of this.) How would any physicist or scientist ever have denied that they were factoring out additional or extraneous information? Of course they knew that such information existed. The problem is that taking on board everything in every experimental (or scientific) situation is impossible – and even complexity theorists and “holists” must accept this. (That's unless they hold a position like F.H. Bradley's Absolute Idealism, or one of extreme holism.)

Simplicity and Complexity

Gell-Mann gave two interesting examples of the opposite of complexity. Firstly, he wrote:

“[If] the environment in question is the center of the sun, at a temperature of tens of millions of degrees, there is almost total randomness, nearly maximal algorithmic information content, and no room for effective complexity or great depth—nothing like life can exist.”

It's interesting that Gell-Mann lumps simplicity and “randomness” together at “the center of the sun”. By “nearly maximal algorithmic information content” I take Gell-Mann to mean that in order to fully account for this “information content”, that content would simply need to be replicated in its entirety. Gell-Mann himself puts this case elsewhere. He talks about “bit strings” and says that “it can [be] shown mathematically that most bit strings of a given length are incompressible”. In more detail:

“In other words, the shortest program that will produce one of those strings (and then have the computer stop) is one that says PRINT followed by the string itself…. It is called a ‘random’ string precisely because it contains no regularity that will permit it to be compressed.”

In any case, Gell-Mann then jumps to the other extreme:

“Nor can there be such a thing as life if the environment is a perfect crystal at a temperature of absolute zero, with almost no algorithmic information content and again no room for much effective complexity or great depth.”

Here it is the lack of “algorithmic information” (rather than randomness) that rules out complexity.
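
Gell-Mann's two extremes can be illustrated with a toy calculation. Compression ratio is only a rough, computable stand-in for algorithmic information content (true algorithmic complexity is uncomputable), but it makes his point: a random string barely compresses, a “crystalline” string compresses almost completely, and anything with a mixture of regularity and randomness sits in between. (For Gell-Mann, effective complexity lives in that middle ground.) A minimal sketch in Python, using zlib as the compressor (the details here are mine, not Gell-Mann's):

import os
import random
import zlib

def compression_ratio(data: bytes) -> float:
    # Compressed size over original size: a rough, computable proxy for
    # algorithmic information content (which itself is uncomputable).
    return len(zlib.compress(data, 9)) / len(data)

n = 100_000
random.seed(0)

sun = os.urandom(n)                                       # the "center of the sun": near-total randomness
crystal = b"\x00" * n                                     # the "perfect crystal": total regularity
middle = bytes(random.choice(b"ACGT") for _ in range(n))  # random choices from a four-letter alphabet

for name, s in [("sun", sun), ("crystal", crystal), ("middle", middle)]:
    print(name, round(compression_ratio(s), 3))

# Typical output: sun ≈ 1.0 (incompressible: the shortest description is in effect
# "PRINT the string itself"), crystal ≈ 0.001, middle ≈ 0.25 (between the extremes).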

Complexity ≠ (Strong) Emergence

What stops the reduction of the whole of chemistry (for example) to physics can be summed up with one single word: complexity. This is how Gell-Mann puts it:

“In practice, even with the aid of the largest and fastest computers available today, only the simplest chemical problems are amenable to actual calculation from basic physical theory. The number of such amenable problems is growing, but most situations in chemistry are still described using concepts and formulae at the level of chemistry rather than that of physics.”

These concessions don't rule out reduction per se. That is, there's no strong emergence being hinted at here. All that's being admitted is that some chemical phenomena are so complex that it would be impossible to get all the information required to reduce a given chemical x to a physical y. That's not to say that the chemical x doesn't reduce to the physical y. It's simply to say that the reduction hasn't been done… Indeed perhaps it can't be done. But even here there's still no strong emergence. The only thing that's accepted is complexity.

And because of that complexity, Gell-Mann goes on to say that

“[i]n general, scientists are accustomed to developing theories that describe observational results in a particular field without deriving them from the theories of a more fundamental field”.

The very fact that observation is being stressed (elsewhere Gell-Mann also uses the word “phenomenological”) shows that the micro level is being automatically ruled out (as it were). It's also odd (bearing in mind traditional biases in physics - right up to the birth of quantum mechanics) that observation is being stressed at all.

In any case, here again reduction hasn't been rejected in principle, and that's because

“[s]uch a derivation, though possible in principle when the additional special information is supplied, is at any given time difficult or impossible in practice for most cases”.

So reduction is trumped by complexity and/or by the contingencies of scientific “practice”.

Gell-Mann then gives us a specific example of complexity trumping (strong) reduction. He writes:

“[C]hemists are concerned with different kinds of chemical bonds between atoms (including the bond between the two hydrogen atoms in a hydrogen molecule). In the course of their experience, they have developed numerous practical ideas about chemical bonds that enable them to predict the behavior of chemical reactions.”

Despite all that, physicists and “theoretical chemists” are still hanging around on the sidelines. That is, “theoretical chemists endeavor to derive those ideas, as much as they can, from approximations to QED”. However,

“[i]n all but the simplest cases they are only partially successful, but they don't doubt that in principle, given sufficiently powerful tools for calculation, they could succeed with high accuracy”.

Gell-Mann goes into more detail in the following:

“In very simple cases, an approximation to QED is used to predict directly the results at the chemical level. In most cases, however, laws are developed at the upper level (chemistry) to explain and predict phenomena at that level, and attempts are then made to derive those laws, as much as possible, from the lower level (QED). Science is pursued at both levels and in addition efforts are made to construct staircases (or bridges) between them.”

Here again complexity trumps (full/strong) reduction.

It's also worth stressing “causal” dependency here, rather than reduction. That is, one can stress causal dependency without demanding any kind of complete reduction: x can physically entail y (where x is a set of conditions, properties, etc.), and yet y may still not be entirely accounted for by x.

The Autonomy of Higher Levels?

Gell-Mann also puts the case for what E.O. Wilson called “consilience” (which doesn't rule out either reduction or reductionism) in the following words:

“One lesson to be learned from all this is that, while the various sciences do occupy different levels, they form part of a single connected structure. The unity of that structure is cemented by the relations among the parts. A science at a given level encompasses the laws of a less fundamental science at a level above.”

If we have “a single connected structure”, then it's difficult to see how we can also have autonomy when it comes to the special sciences or to higher-level descriptions/laws. (Isn't this why the philosopher Jerry Fodor advanced what he called “strong autonomy”?)

There also seems to be a commitment to at least some kind of reductionism here. How else can we interpret the following words?

“[A] science at a given level encompasses the laws of a less fundamental science at a level above.”

That said, it depends on how we interpret the words “reductionism” and “encompasses”. Nonetheless, Gell-Mann's words can be seen to be in favour of specific reductions, though not in favour of the philosophical standpoint of reductionism itself.

And then complexity does indeed raise its head:

“But the latter [e.g. chemistry], being more special, requires further information in addition to the laws of the former.”

Here we need to know what the words “further information” mean because, clearly, that further information may not block (or rule out) reduction or even reductionism itself.

Yet despite Gell-Mann's acceptance of the autonomy of different scientific disciplines, it may seem strange that he should also argue (of psychology) that it is “not yet sufficiently scientific”. What's more, he argues that his

“preference would be to take [them] up in order to participate in the form of making them more scientific”.

The words above can be read in two ways:

i) The “special sciences” aren't autonomous.
ii) In order to make the special sciences autonomous, we would need to “make them more scientific”.

Of course I'll now need to explain why I'm using the word “autonomy” here.

I do so because, for example, the theoretical physicist Sean Carroll often stresses the autonomy of the special sciences and higher-level descriptions. (The philosopher Jerry Fodor also stressed what he called “strong autonomy”.) Indeed Carroll advances the “autonomy” of what he calls “emergent theories”. (This is a vital part of his “poetic naturalism”.) Carroll writes:

“The emergent theory is autonomous… it works by itself, without reference to other theories…”

Elsewhere, Carroll says that with strong emergence “all stories are autonomous, even incompatible”. Yet, in other places, Carroll also stresses emergent theories and their compatibility with fundamental theories. Indeed Carroll hints at a lack of complete autonomy when he says that “we might learn a little bit about higher levels by studying lower ones”.

Carroll also emphasises the “mapping” of a fundamental theory onto an emergent theory in a process called “coarse-graining”. So how can we have mapping as well as autonomy?
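
Coarse-graining itself is easy to illustrate. In the toy sketch below (my own illustration, not Carroll's), the “fundamental” description is a fine-grained array of micro-values and the “emergent” description is the array of block averages. The mapping is many-to-one: many distinct micro-states coarse-grain to the same macro-state, which is precisely why an emergent description can do useful work while staying silent about micro-details.

import numpy as np

rng = np.random.default_rng(0)

micro = rng.normal(size=1000)   # a "fundamental" state: 1,000 micro-values
block = 100                     # the coarse-graining scale

# The emergent description: one average per block of 100 micro-values.
macro = micro.reshape(-1, block).mean(axis=1)

# The mapping is many-to-one: shuffling values *within* each block changes
# the micro-state but leaves the macro-state untouched.
shuffled = micro.reshape(-1, block).copy()
for row in shuffled:
    rng.shuffle(row)

print(np.allclose(macro, shuffled.mean(axis=1)))  # True: same macro-state, different micro-state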

In a seminar, Carroll also used the word “consistence” in reference to the fit between emergent and more basic theories. How can that consistency - between two very different autonomous theories - be established? Carroll also assumes compatibility (clearly related to consistency) between emergent theories and more fundamental (basic) theories. In addition, Carroll says that (some) emergent theories are accurate... Who says so? Does Carroll simply assume an accuracy that's tacitly and essentially guaranteed by a more fundamental (or basic) theory - thus limiting the emergent theory's supposed autonomy?

In opposition to Sean Carroll, it seems that Gell-Mann didn't believe in this (complete) autonomy. That's because he believed in both a “bottom-up method of building staircases between disciplines” and a “top-down approach” as well. Yet if the higher-level disciplines were autonomous, then why would they require either a “top-down” or a “bottom-up” method? Surely they could stand on their own feet. Indeed the fact that Gell-Mann even raises the question of both bottom-up and top-down approaches (or methods) means that he did indeed have a “bias in what invites the charge of 'reductionist'”. In other words, because Gell-Mann didn't believe in the (complete) autonomy of the special sciences, he could be classed as a “reductionist” - as he says himself. A non-reductionist, on the other hand, would say that the special sciences are autonomous and don't need to account for themselves (at least not via lower-level disciplines).

Higher-level Laws

Despite Gell-Mann's “reductionist” inclinations, he still believed that there are new scientific laws at higher levels. He wrote:

“At each level there are laws to be discovered, important in their own right. The enterprise of science involves investigating those laws at all levels, while also working, from the top down and from the bottom up, to build staircases between them.”

And what Gell-Mann said of chemistry, he also said of biology. That is,

“like chemistry, biology is very much worth studying on its own terms and at its own level, even as the work of staircase construction goes on.”

Gell-Mann also cited psychology and even the social sciences and history. In full, he wrote:

“[I]t's essential to study biology at its own level, and likewise psychology, the social sciences, history, and so forth, because at each level you identify appropriate laws that apply at that level.”

However, Gell-Mann did qualify all this by saying that higher-level laws are dependent on lower-level laws – “plus a lot of additional information”! That dependency doesn't in and of itself mean that completely new laws don't exist at higher levels. Nonetheless, all that additional information may not be lawlike – at least not as yet.

Gell-Mann continues by talking about “staircases” again. However, at a prima facie level, none of his talk about staircases rules out reduction; or, more specifically, it doesn't rule out the reduction of higher-level laws to lower-level laws. So it may be a little surprising that Gell-Mann finishes off by saying that

“[a]ll of these ideas belong to what I call the doctrine of ‘emergence’”.

Here all that can be said is that Gell-Mann is stressing weak (rather than strong) emergence. And, according to the philosopher Mark A. Bedau, “the notion of weak emergence is metaphysically benign”. Strong emergence, on the other hand, certainly is not.