Saturday 3 August 2019

Philosophical Shorts (03/08/2019)



Speed-of-Light Travel and Aging

The most basic measure of time is entropy. The processes of entropy would be much slower at just below the speed of light. Therefore if Mr X is travelling at near the speed of light, then he would age much less quickly than his friends on earth. However, all his biological and physical processes would be slower too. Indeed if we could get our biological processes to slow down without travelling at just below the speed of light, then perhaps we would age much less quickly here on earth too.
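To put rough numbers on “much less quickly”, here is a minimal sketch (my own illustration, using the standard special-relativistic time-dilation formula, which the paragraph above only gestures at) of how Mr X's clock compares with his friends' clocks on earth.

```python
import math

def gamma(v_over_c: float) -> float:
    """Lorentz factor 1/sqrt(1 - v^2/c^2), with speed given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

# One year of Mr X's own (proper) time, as measured by his friends on earth:
for v in (0.5, 0.9, 0.99, 0.9999):
    print(f"at {v}c: one traveller-year = {gamma(v):.2f} earth-years")
```

At 0.9999c the factor is roughly 71: all of Mr X's biological and neurological processes, as judged from earth, run about seventy times more slowly.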

Could bodies which have evolved with processes of a certain ‘speed’ cope with such changes? This would also involve the slowing down of our mental (neurological) processes. That is, we would be both mentally slower and physically slower – like human slugs! As a guess, I would say that the human body couldn't cope with such changes. That's unless technological processes which counteracted the negative side-effects (as it were) could also be created.

Is Consciousness Constituted by Physical Processes?

David Chalmers asks an interesting question:

“Is consciousness constituted by physical processes, or does it merely arise from physical processes?”

The consensus position is that consciousness “arises” from physical processes. At first blush, saying that “consciousness is constituted by physical processes” appears to be a reductive position. That is, if x is “constituted by” A, B and C, then that must surely mean the following:

If x = A, B & C
then x can also be reduced to A, B & C.

Nonetheless, if one tackles this from another angle, consciousness can be constituted by physical processes even though it isn't identical to those processes. Thus, x is constituted by A, B and C, though x is not identical to A, B and C.

Here is a simple example. A house is constituted by bricks and other material objects, though a house is not identical to the bricks and material objects which constitute it.

This doesn't work for consciousness because there are things true of consciousness which aren't true of brain/physical processes. Having said that, there are things true of bricks and other objects (as well as the sum of bricks and other objects) which aren't true of the house which was made out of them. Yet some would say that the house is nothing “over and above” the bricks and other objects. Yes, in physical terms that's correct. However, there are still things true of the house which aren't true of the objects which “constitute it” - and not even true of the sum of the objects which constitute it. So, even though the house is entirely made of bricks and other objects/materials, there are things true of the house which aren't true of the things which constitute it (whether taken individually or collectively).


Sunday 28 July 2019

David Chalmers on the Abstract-Concrete Interface in Artificial Intelligence


Word Count: 4218


i) Introduction
ii) Implementation
iii) Causal Structure
iv) The Computer's Innards
v) Chalmers' Biocentrism?
vi) The Chinese Room


It's a good thing that the abstract and the concrete (or abstract objects in "mathematical space" and the "real world") are brought together in David Chalmers' account of Strong AI. Often it's almost (or literally) as if AI theorists believe that (as it were) disembodied computations can themselves bring about mind or even consciousness. (The same can be said, though less strongly, about functions or functionalism.) This, as John Searle once stated [1], is a kind of contemporary dualism in which abstract objects (computations/algorithms – the contemporary version of Descartes' mind-substance) bring about mind and consciousness on their own.

To capture the essence of what Chalmers is attempting to do we can quote his own words when he says that it's all about “relat[ing] the abstract and concrete domains”. And he gives a very concrete example of this.

Take a recipe for a meal. To Chalmers, this recipe is a “syntactic object[]”. However, the meal itself (as well as the cooking process) is an “implementation” that occurs in the “real world”.

So, with regard to Chalmers' own examples, we need to tie "Turing machines, Pascal programs, finite-state automata", etc. to "[c]ognitive systems in the real world" which are

"concrete objects, physically embodied and interacting causally with other objects in the physical world".

In the above passage, we also have what may be called an "externalist" as well as "embodiment" argument against AI abstractionism. That is, the creation of mind and consciousness is not only about the computer/robot itself: Chalmers' "physical world" is undoubtedly part of the picture too.

Of course most adherents of Strong AI would never deny that such abstract objects need to be implemented in the "physical world". It's just that the manner of that implementation (as well as the nature of the physical material which does that job) seems to be seen as almost – or literally – irrelevant.

Implementation

It's hard to even comprehend how someone could believe that a programme (alone) could be a candidate for possession of (or capable of bringing about) a conscious mind. (Perhaps no one does believe that.) In fact it's hard to comprehend what that could even mean. Having said that, when you read the AI literature, that appears to be exactly what some people believe. However, instead of the single word “programme”, AI theorists will also talk about “computations”, “algorithms”, “rules” and whatnot. But these additions still amount to the same thing – “abstract objects” bringing forth consciousness and mind.

So we surely need the (correct) implementation of such abstract objects.

It's here that Chalmers himself talks about implementations. Thus:

“Implementations of programs, on the other hand, are concrete systems with causal dynamics, and are not purely syntactic. An implementation has causal heft in the real world, and it is in virtue of this causal heft that consciousness and intentionality arise.”

Then Chalmers delivers the clinching line:

“It is the program that is syntactic, it is the implementation that has semantic content.”

More clearly, a physical machine is deemed to belong to the semantic domain and a syntactic machine is deemed to be abstract. Thus a physical machine is said to provide a “semantic interpretation” of the abstract syntax.

Yet how can the semantic automatically arise from an implementation of that which is purely syntactic? Well, that depends. Firstly, it may not automatically arise. And, secondly, it may depend on the nature (as well as physical material) of the implementation.

So it's not only about implementation. It's also about the fact that any given implementation will have a certain “causal structure”. And only certain (physical) causal structures will (or could) bring forth mind.

Indeed, bearing all this in mind, the notion of implementation (at least until fleshed out) is either vague or all-encompassing. (For example, take the case of one language being translated into another language: that too is classed as an “implementation”.)

Thus the problem of implementation is engendered by this question:

How can something concrete implement something abstract?

Then we need to specify the precise tie between the abstract and the concrete. That, in turn, raises a further question:

Can't any arbitrary concrete/physical implementation of something abstract be seen as a satisfactory implementation?

In other words, does the physical implementation need to be an “isomorphic” (a word that Chalmers uses) mirroring (or a precise “mapping”) of the abstract? And even if it is, how does that (in itself) bring about the semantics?

And it's here that we arrive at causal structure.

Causal Structure

One can see how vitally important causation is to Chalmers when he says that

"both computation and content should be dependent on the common notion of causation".

In other words, a computation and a given implementation will share a causal structure. Indeed Chalmers cites the example of a Turing machine by saying that

"we need only ensure that this formal structure is reflected in the causal structure of the implementation".

He continues:

"Certainly, when computer designers ensure that their machines implement the programs that they are supposed to, they do this by ensuring that the mechanisms have the right causal organization."

In addition, Chalmers tells us what a physical implementation is in the simplest possible terms. And, in his definition, he refers to "causal structure":

"A physical system implements a given computation when the causal structure of the physical system mirrors the formal structure of the computation."

Then Chalmers goes into more detail:

"A physical system implements a given computation when there exists a grouping of physical states of the system into state-types and a one-to-one mapping from formal states of the computation to physical state-types, such that formal states related by an abstract state-transition relation are mapped onto physical state-types related by a corresponding causal state-transition relation."

Despite all that, Chalmers says that all the above is "still a little vague". He does so because we need to "specify the class of computations in question".

Chalmers also stresses the notion of (correct) "mapping". And what's relevant about mapping is that the

"causal relations between physical state-types will precisely mirror the abstract relations between formal states".

Moreover:

"What is important is the overall form of the definition: in particular, the way it ensures that the formal state-transitional structure of the computation mirrors the causal state-transitional structure of the physical system."

Thus Chalmers stresses causal structure. More specifically, he gives the example of “computation mirroring the causal organisation of the brain”. Chalmers also states:

“While it may be plausible that static sets of abstract symbols do not have intrinsic semantic properties, it is much less clear that formally specified causal processes cannot support a mind”.

In Chalmers' account, the concrete does appear to be primary in that the "computational descriptions are applied to physical systems" because "they effectively provide a formal description of the systems' causal organisation". In other words, the computations don't come first and only then is there work done to see how they can be implemented.

So what is it, exactly, that's being described?

According to Chalmers, it's physical/concrete "causal organisation". And when described, it becomes an "abstract causal organisation". (That's if the word "causal" can be used at all in conjunction with the word "abstract".) However, it is abstract in the sense that all peripheral non-causal and non-functional aspects of the physical system are simply factored out. Thus all we have left is an abstract remainder. Nonetheless, it's still a physical/concrete system that provides the input (as it were) and an abstract causal organisation (captured computationally) that effectively becomes the output.

Chalmers also adds the philosophical notion of multiple realisability into his position on AI. And here too he focuses on causal organisation. His position amounts to saying that the causal organisation instantiated by System A can be “mirrored” by System B, “no matter what the implementation is made out of”. So, clearly, whereas previous philosophers stressed that functions can be multiply realised, Chalmers is doing the same with causal organisation (which, in certain ways at least, amounts to the same thing). Or in more precise terms: System B “will replicate any organisational invariants of the original system, but other properties will be lost”. Nonetheless, here again we have the mirroring of one system by another, rather than having any discovery of the fundamental factors which will (or can) give rise to mind and consciousness in any possible system.

The Computer's Innards

As hinted at, the most important and original aspect of Chalmers' take on Strong AI is his emphasis on the "rich causal dynamics of a computer". This is something that most philosophers of AI ignore. And even AI technologists seem to ignore it – at least when at their most abstract or philosophical.

Chalmers firstly states the problem. He writes:

"It is easy to think of a computer as simply an input-output device, with nothing in between except for some formal mathematical manipulations."

However:

"This was of looking at things... leaves out the key fact that there are rich causal dynamics inside a computer, just as there are in the brain."

So what sort of causal dynamics? Chalmers continues:

"Indeed, in an ordinary computer that implements a neuron-by-neuron simulation of my brain, there will be real causation going on between voltages in various circuits, precisely mirroring patterns of causation between the neurons."

In more detail:

"For each neuron, there will be a memory location that represents the neuron, and each of these locations will be physically realised in a voltage at some physical location. It is the causal patterns among these circuits, just as it is the causal patterns among the neurons in the brain, that are responsible for any conscious experience that arises."

And when Chalmers compares silicon chips to neurons, the result is very much like a structuralist position in the philosophy of physics.

Basically, entities/things (in this case, neurons and silicon chips) don't matter. What matters is “patterns of interaction”. These things create “causal patterns”. Thus neurons in the biological brain display certain causal interactions and interrelations. Silicon chips (suitably connected to each other) also display causal interactions and interrelations. So perhaps both these sets of causal interactions and interrelations can be (or may be) the same. That is, they can be structurally the same; though the physical materials which bring them about are clearly different (i.e., one is biological, the other non-biological).

Though what if the material substrate does matter? If it does, then we'd need to know why it matters. And if it doesn't, then we'd also need to know why.

Biological matter is certainly very complex. Silicon chips (which have just been mentioned) aren't as complex. Remember here that we're matching individual silicon chips with individual neurons: not billions of silicon chips with the billions of neurons of the entire brain. However, neurons, when taken individually, are also highly complicated. Individual silicon chips are much less so. However, all this rests on the assumption that complexity - or even maximal complexity - matters to this issue. Clearly in the thermostat case (as cited by Chalmers himself), complexity isn't a fundamental issue or problem. Simple things exhibit causal structures and causal processes; which, in turn, determine both information and - according to Chalmers - very simple phenomenal experience.

Chalmers' Biocentrism?

Nobel laureate Gerald Edelman once said that mind and consciousness

“can only be understood from a biological standpoint, not through physics or computer science or other approaches that ignore the structure of the brain”.

Of course many workers in AI disagree with Edelman's position.

We also have John Searle's position, as expressed in the following:

"If mental operations consist of computational operations on formal symbols, it follows that they have no interesting connection with the brain, and the only connection would be that the brain just happens to be one of the indefinitely many types of machines capable of instantiating the program. This form of dualism is not the traditional Cartesian variety that claims that there are two sorts of substances, but it is Cartesian in the sense that it insists that what is specifically mental about the brain has no intrinsic connection with the actual properties of the brain. This underlying dualism is masked from us by the fact that AI literature contains frequent fulminations against 'dualism'.”

Searle is noting the radical disjunction created between the actual physical reality of biological brains and how these philosophers and scientists explain and account for mind, consciousness, cognition and understanding.

Despite all that, Searle doesn't believe that only biological brains can give rise to minds and consciousness. Searle's position is that only brains do give rise to minds and consciousness. He's emphasising an empirical fact; though he's not denying the logical and metaphysical possibility that other things can bring forth minds.

Thus wouldn't many AI philosophers and even workers in AI think that this replication of the causal patterns of the brain defeats the object of AI? After all, in a literal sense, if all we're doing is replicating the brain, then surely there's no strong AI in the first place. Yes, the perfect replication of the brain (to create an artificial brain) would be a mind-blowing achievement. However, it would still seem to go against the ethos of much AI in that many AI workers want to divorce themselves entirely from the biological brain. So if we simply replicate the biological brain, then we're still slaves to it. Nonetheless, what we would have here is something that's indeed thoroughly artificial; but which is not a true example of Strong AI.

Of course Chalmers himself is well aware of these kinds of rejoinder. He writes:

"Some may suppose that because my argument relies on duplicating the neural-level organisation of the brain, it establishes only a weak form of strong AI, one that is closely tied to biology. (In discussing what he calls the 'Brain Simulator' reply, Searle expresses surprise that a supporter of AI would give a reply that depends on the detailed simulation of human biology.) This would be to miss the force of the argument, however."

To be honest, I'm not sure if I understand Chalmers' reason for believing that other theorists have missed the force of his argument. He continues:

"The brain simulation program merely serves as the thin end of the wedge. Once we know that one program can give rise to a mind even when implemented Chinese-room style, the force of Searle's in-principle argument is entirely removed: we know that the demon and the paper in a Chinese room can indeed support an independent mind. The floodgates are then opened to a whole range of programs that might be candidates to generate conscious experience."

As I said, I'm not sure if I get Chalmers' point. He seems to be saying that even though one system or programme only "simulates" the biological brain, it's still, nonetheless, a successful simulation. And because it is a successful simulation, then other ways of creating conscious experience must/may be possible. That is, the computational program or system has only simulated the "abstract causal structure" of the brain (to use Chalmers' own terminology): it hasn't replicated biological brains in every respect or detail. And because it has gone beyond biological brains at least in this (limited) respect, then the "floodgates are [] opened to a whole range of programs" which may not be (mere) "Brain Simulators" or replicators.

Despite all the above, the very replication of the brain is problematic to start with. Take the words of Patricia Churchland:

“[I]t may be that if we had a complete cognitive neurobiology we would find that to build a computer with the same capacities as the human brain, we had to use as structural elements things that behaved very like neurons”.

Churchland continues by saying that

“the artificial units would have to have both action potentials and graded potentials, and a full repertoire of synaptic modifiability, dendritic growth, and so forth”.

It gets even less promising when Churchland adds:

“[F]or all we know now, to mimic nervous plasticity efficiently, we might have to mimic very closely even certain subcellular structures.”

Readers may also have noted that Churchland was only talking about the biology of neurons, not the biology of the brain as a whole. However, wouldn't the replication of the brain (as a whole) make this whole artificial-mind endeavour even more complex and difficult?

In any case, Churchland sums up this immense problem by saying that

“we simply do not know at what level of organisation one can assume that the physical implementation can vary but the capacities will remain the same”.

That's an argument which says that it's wrong to accept the programme-implementation “binary opposition” in the first place. Though that's not to say - and Churchland doesn't say - that it's wrong to concentrate on functions, computation or cognition generally. It's just wrong to completely ignore the “physical implementation” side of things. Or, as Churchland says at the beginning of one paper, it's wrong to “ignore neuroscience” and focus entirely on functions, computations, algorithms, etc.

The Chinese Room

In respect to the points just made, Chalmers also tackles the Chinese Room argument of John Searle.

So is the Chinese Room meant to be a replication or simulation (however abstract and minimalist) of the biological brain in the first place? If it isn't, then Chalmers' immediate move to the Chinese Room almost seems like a non sequitur... except, as stated, Chalmers has said that once the Brain Simulator program has been successful in bringing forth a mind, then the "floodgates are open" – and that must include the well-known Chinese Room.

Chalmers gives us an example of what he calls “causal organisation” which relates to the Chinese Room.

Firstly, Chalmers states that he "supports the Systems reply", according to which

"the entire system understands Chinese even if the homunculus doing the simulating does not".

As many people know, in the Chinese Room there is a “demon” who deals with “slips of paper” with Chinese (or “formal”) symbols on them. He therefore indulges in “symbol manipulation”. Nonetheless, all this also involves an element of causal organisation; which hardly any philosophers seem to mention. Chalmers says that the slips of paper (which contain Chinese symbols) actually

“constitute a concrete dynamical system with a causal organisation that corresponds directly to that of the original brain”.

At first blush, it's difficult to comprehend what sort of causal organisation is being instantiated by the demon dealing with slips of paper with formal/Chinese symbols written on them. Sure, there are causal events/actions going on – but so what? How do they bring about mind or consciousness? Here again the answer seems to be that they do so because they are “mirroring” what Chalmers calls the “original brain”. But even here it's not directly obvious how a demon manipulating formal symbols in a room can mirror the brain in any concrete sense.

So we need Chalmers to help us here. He tells us that the “interesting causal dynamics are those which take place among the pieces of paper”. Why? Because they “correspond to the neurons in the original case”. The demon (who is stressed in other accounts of the Chinese Room scenario) is completely underplayed by Chalmers. What matters to him is that the demon “acts as a kind of causal facilitator”. That is, the demon isn't a homunculus (or the possessor of a mind) within the Chinese Room – or at least not a mind that is relevant to this actual scenario. (So why use a demon in this scenario in the first place?)

Nonetheless, it's still difficult to comprehend how the actions of a demon playing around with slips of paper (with formal/Chinese symbols on them) can mirror the human brain - or anything else for that matter. I suppose this is precisely where abstraction comes in. That is, a computational formalism captures the essence (or causally invariant nature) of what the demon is doing with those slips of papers and formal symbols.

Having said all the above, Chalmers does say that in the Systems reply "there is a symbol for every neuron".

This "mapping", at first sight, seems gross. What can we draw from the fact that there's a "symbol for every neuron"? It sounds odd. But, of course, this is a causal story. Thus:

"[T]he patterns of interaction between slips of paper bearing those symbols will mirror patterns of interaction between neurons in the brain."

This must mean that the demon's interpretation of the symbols on these slips of paper instantiates a causal structure which mirrors the "interaction between neurons in the brain". This is clearly a very tangential (or circuitous) mapping (or mirroring). Still, it may be possible... in principle. That is, because the demon interprets the symbols in such-and-such a way, he also causally acts in such-and-such a way. And those demonic actions "mirror [the] patterns of interaction between neurons in the brain". Thus it doesn't matter if the demon "understands" the symbols: he still ties each symbol to other symbols on other slips of paper. And thus these symbols bring about the specific physical actions of the demon. To repeat: all this occurs despite the fact that the demon doesn't "understand what the symbols mean". However, he does know how to tie (or connect) certain symbols to certain other symbols.
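A toy sketch may help picture this (it is my own illustration, not anything Chalmers provides): give the demon a rulebook whose symbol-to-symbol entries are simply copied from a tiny network's transitions. The demon then reproduces the network's trajectory by pure symbol manipulation, understanding nothing.

```python
def neural_step(state):
    """Some arbitrary causal dynamics over two binary 'neurons'."""
    a, b = state
    return (b, a ^ b)

# The demon's rulebook: one symbol-string per neural state, with the
# transitions copied so that paper-slip dynamics mirror neural dynamics.
rulebook = {f"{a}{b}": "{}{}".format(*neural_step((a, b)))
            for a in (0, 1) for b in (0, 1)}

neural, paper = (1, 0), "10"
for _ in range(6):
    neural = neural_step(neural)   # causation among "neurons"
    paper = rulebook[paper]        # blind symbol manipulation
    assert paper == f"{neural[0]}{neural[1]}"  # the trajectories mirror

print("the paper-slip trajectory mirrors the 'neural' trajectory")
```

The demon (here, the lookup in the rulebook) grasps no semantics; the mirroring is carried entirely by the pattern of transitions.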

And, despite all that detail, one may still wonder how this brings about consciousness or mind. That is, how does such circuitous (or tangential) mirroring (or mapping) bring about consciousness or mind? After all, any x can be mapped to any y. Indeed any x & y can be mapped to any other x & y. And so on. Yet Chalmers states that

"[i]t is precisely in virtue of this causal organization that the system possesses its mental properties".

Don't we have a loose kind of self-reference here? That is, a computational formalism captures the Chinese Room scenario – yet that scenario itself also includes formal symbols on slips of paper. So we have a symbolic formalism capturing the actions of a demon who is himself making sense (if without a semantics) of abstract symbols.

Another way of putting this is to ask what has happened to the formal symbols written on the slips of paper. Chalmers stresses the causal role of moving the slips of paper around. But what of the formal symbols (on those slips of paper) themselves? What role (causal or otherwise) are they playing? His position seems to be that it's all about the demon's physical/causal actions in response to the formal symbols he reads or interprets.

In any case, Chalmers stresses that it's not the demon who matters in the Chinese Room – or, at least, the demon is not the primary factor. What matters is the “causal dynamics in the [Chinese] room”. Those causal dynamics “precisely reflect the causal dynamics in the skull”. And, because of this, “it no longer seems so implausible to suppose that the system gives rise to experience”. Having said that, Chalmers does say that "[e]ventually we arrive at a system where a single demon is responsible for maintaining the causal organization". So despite the demon being in charge (as it were) of causal events/actions in the Chinese Room, he still doesn't understand the formal symbols he's manipulating.

Another problem is the fact that although the demon may not understand the Chinese symbols, he still, nonetheless, has a mind and consciousness. And employing a conscious mind as part of this hypothetical Chinese Room scenario (in order to bring about an artificial conscious mind) seems a little suspect.

To repeat: if one is precisely mirroring the brain in the Chinese Room (or the individuals of the population of China in another thought experiment from Ned Block), then do we have AI at all? In a sense we do in that such a system would still be artificial. But, in another sense, if it's simply a case of mirroring/mapping the biological brain, then perhaps we haven't moved that far at all.

*******************************************

Note:

1) Thus a detour towards John Searle may be helpful here.

Searle actually accuses those who accuse him of being a “dualist” of being, well, dualists.

His basic position on this is that if computationalists or functionalists, for example, ignore the physical and causal biology of brains and exclusively focus on syntax, computations and functions (i.e., the form/role/structure rather than the physical embodiment/implementation), then that will surely lead to a kind of dualism. In other words, there's a radical disjunction created here between the actual physical and causal reality of brains and how these philosophers explain and account for intentionality, mind and consciousness.

Thus Searle's basic position on this is:

i) If Strong AI proponents, computationalists or functionalists, etc. ignore or play down the physical biology of brains; and, instead, focus exclusively on syntax, computations and functions (the form/role rather than the physical embodiment/implementation),
ii) then that will surely lead to some kind of dualism in which non-physical abstractions basically play the role of Descartes' non-physical and “non-extended” mind.

Monday 15 July 2019

David Chalmers' Fixation With Logical Possibility (2)



Contents:
  1. Logical Possibility and Natural Possibility
  2. Zombies
  3. Saul Kripke
  4. Chalmers & Goff: Conceivability to Possibility
  5. Two More Conceivings

*************************

conceive: to develop an idea; to form in the mind; to plan; to devise; to originate; to understand (someone).

conception: the act of conceiving.
The state of being conceived.
The power or faculty of apprehending or forming an idea in the mind; the power of recalling a past sensation or perception; the ability to form mental abstractions.
An image, idea, or notion formed in the mind; a concept, plan or design.

Logical Possibility and Natural Possibility

One may think that only natural (or empirical) possibility is of interest to most people – both laypersons and experts. Indeed David Chalmers himself (sort of) states that in the following:

“It is logically possible that a plate may fly upward when one lets go of it in a vacuum on a planetary surface, but it is nevertheless empirically impossible. The laws of nature forbid it.”

Yet logical (i.e., not natural) possibility still permeates Chalmers' entire work. And, as we shall see, so too does conceivability; which he strongly ties to logical possibility.

Thus, with a concrete (as it were) example, Chalmers says:

“The key question in this chapter is whether absent or inverted qualia are naturally or empirically possible.”

Indeed Chalmers goes further when he says that

“establishing the logical possibility of absent qualia and inverted qualia falls far short of establishing their empirical possibility”.

Here Chalmers opposes empirical possibility to logical possibility. However, as just stated, logical possibility also looms large in his work. Indeed his references to natural possibility hardly make sense when taken out of the context of logical possibility. Thus both logical and natural possibility gain much of their purchase by being opposed to one another.

Chalmers himself sums up one major problem with logical possibility (this was touched upon in Part One) when he says that

“[t]here are a vast number of logically possible situations that are not naturally possible”.

That means that there must be mightily good philosophical (or scientific) reasons to spend time on a given logical possibility. (It's easy to believe that there are good reasons when, for example, consciousness is being discussed.) That is, surely it must pay philosophical dividends to do so. Having said that, there's a very large number of uninstantiated natural possibilities too. Chalmers himself gives us an example when he tells us that

“[i]t is even naturally possible (although wildly improbable) that a monkey could type Hamlet”.

So what, philosophically, can we draw out of that natural (“although wildly improbable”) possibility? Well, we can draw one thing out: that it's naturally possible. And that's enough for some philosophers. But what else? Well, at a superficial level, it shows us that all sorts of bizarre things are naturally possible. So if it's naturally possible that a monkey could type Hamlet, then it's also naturally possible that an ant could take over the world. Why? Because all it takes for something to be naturally possible is that it “conforms to the laws of our world”. So, as far as I can see, a Nietzschean super-ant does appear to conform to the laws of our world. That is, this ant and its actions don't “violate[] the laws of nature of our world”.
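To dramatise how weak a constraint natural possibility is, here's a back-of-envelope calculation (the figures are my own assumptions, not Chalmers'): suppose Hamlet runs to about 130,000 characters and the monkey strikes one of 27 keys at random.

```python
import math

chars = 130_000   # assumed length of the play, in characters
keys = 27         # assumed keyboard: 26 letters plus a space

log10_p = chars * math.log10(1 / keys)
print(f"P(monkey types Hamlet in one go) ~ 10^{log10_p:.0f}")
# Roughly 10^-186000: naturally possible, yet "wildly improbable" indeed.
```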

So thank God there are limitations to logical possibility, at least according to Chalmers himself. For example, Chalmers writes:

“God could not have created a world with male vixens, but he could have created a world with flying telephones.”

Is the above much of a limitation? Not really. That's because male vixens are conceptually impossible. In that basic sense, then, we aren't being told anything about the world – we're being told about (as it were) conceptual exclusion or conceptual necessity. So, in that basic sense, flying telephones are far more interesting than male vixens.

Zombies

Chalmers believes that zombies are worth discussing because “there seems to be no a priori contradiction in the idea” of zombies. There's also no a priori contradiction in a human being having 10⁶ legs; though such a thing won't tell us much. So it's not just the bare possibility that zombies exist. It's that the possibility can tell us something about the world.

Thus we can conceive of a physical system that's note-for-note identical to us but which doesn't have consciousness. Such a system would therefore be a zombie.

Alternatively, it may be a "zombie-invert" in that some of its experiences are inversions of those of human beings. The invert-zombie has the same nuts and bolts as us; though nevertheless it has different experiences. So the inverted zombie is still allowed his/its experiences.

There's also the “partial zombie” who also has experiences; though not as many as those of human beings. (Perhaps he/it can only feel pain.)

The point is that all these zombies are physically identical to us from the third-person point of view - and their behaviour will also be indistinguishable from ours.

So what about their first-person point of view? “What is it like” to be a zombie of whatever kind? Well, there's nothing it's like to be a zombie! (Except in the partial and inverted zombie cases.)

On a larger scale: what about a physically identical universe which doesn't give rise to consciousness, though which does give rise to zombies? Can we say that such zombies are indeed "naturally possible"? According to our own laws of nature, they probably couldn't exist. That is, given identical physical and bodily facts, such a universe couldn't help but give rise to consciousness.

Let’s take this further.

There could be an identical universe that didn't give birth to consciousness. Chalmers concludes that if such a counterfactual scenario is possible, then consciousness must be something above and beyond the physical.

Chalmers himself argues that

if we can conceive of zombies in our world (or at other worlds),
then zombies are "metaphysically possible".

Chalmers supports his conceivability arguments by arguing thus:

“If P & -Q is conceivable, [then] P & -Q is metaphysically possible [as well as being] supported by general reasoning.”

Is there such a link between conceivability and possibility? If so, what kind of link is it? In other words, just as there are arguments about certain claims being conceivable and therefore possible, is that link itself grounded in conceivability or possibility (or both)? What is the nature of the link between conceivability and possibility?

Chalmers codifies all this with a logical argument:

i) It is conceivable that P & not-Q.
ii) If it is conceivable that P & not-Q, then it is metaphysically possible that P and not-Q.
iii) If it is metaphysically possible that P & not-Q, then materialism is false.
iv) So materialism is false.
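In standard modal notation (my own rendering, not Chalmers' formatting), with P the conjunction of all physical truths, Q a phenomenal truth, C(·) for ideal conceivability and ◇ for metaphysical possibility, the argument runs:

```latex
\begin{align*}
&(1)\quad C(P \land \lnot Q)\\
&(2)\quad C(P \land \lnot Q) \rightarrow \Diamond(P \land \lnot Q)\\
&(3)\quad \Diamond(P \land \lnot Q) \rightarrow \lnot\text{Materialism}\\
&(4)\quad \therefore\ \lnot\text{Materialism}
\end{align*}
```

Everything turns on premise (2): the conceivability-to-possibility link discussed below.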

We can see Chalmers’ slide here from conceivability to metaphysical possibility. The position above can be summed up this way.

i) Can we conceive a round square? No.
ib) Then a round square isn't metaphysically possible.
ii) Can we conceive of a man with five legs? Yes.
iib) Then a man with five legs is metaphysically possible.

Again, what are we supposed to gain or achieve by saying that mile-high unicycles are conceivable and therefore possible? Where does it take us? Here:

Zombies are logically possible because they are conceivable.

Or, contrariwise:

If zombies are conceivable, then they are logically possible.

Saul Kripke

Kripke said that he was working with his own “Cartesian intuitions” when he tackled the mind-body problem. And many of those intuitions were about what is and what isn't logically possible. It's also fairly clear that Chalmers has Kripkean intuitions on the same subject.

Kripke is an interesting philosopher to bring into this debate because, prima facie, he seems to hold two mutually-contradictory positions on conceivability (or on the philosophical use of the imagination).

In the first instance, Kripke tells us about an act of imagination which misleads us (metaphysically speaking). He writes:

“... we thought erroneously that we could imagine a situation in which heat was not the motion of molecules. Because although we can say that we pick out heat contingently by the contingent property that it affects us in such and such way...”

Thus the conceiver has conceived of the effects of the motion of molecules on bodies and the environment; though he hasn't conceived of heat actually being something other than the motion of molecules.

Let's say that heat is XYZ (i.e., something which isn't to do with molecular motion). What is it to conceive that heat is XYZ (i.e., something static) rather than molecular motion?

Imagination (or what we can conceive), on the other hand, can also tell us something important (as well as true) about the world. In Kripke's words:

“[J]ust as it seems that the brain state could have existed without any pain, so it seems that the pain could have existed without the corresponding brain state.”

Kripke stresses our ability to imagine a pain state without its correlated brain state (formerly characterised as the “firing of C-fibres”). Thus Kripke concludes:

If we can imagine mental states without their correlated brain states, then such states are possible.

Or, alternatively, Kripke is saying that there's no necessary identity between mental states and brain states.

Kripke, on the other hand, again claims that those who imagine heat being caused by something that's not “molecular motion” aren't really imagining heat at all. They just think that they are because they've based their act of imagination on the contingent properties of heat – its effects on persons and the environment.

Chalmers & Goff: Conceivability to Possibility

Asadullah Ali Al-Andalusi makes a distinction between the words 'conceive' and 'imagine'. He states:

“Let's not reduce my argument to only one of the terms I used: 'imagination'. I also used the word 'conceive'.”

The words 'conceive' and 'imagine' are not synonyms. However, everything that can be said about imagination (at least in Al-Andalusi's case) can also be applied to the word 'conceive'. Exactly the same problems arise.

Despite that, Al-Andalusi explains a distinction which can be made between conceiving and imagining. He continues:

“Imagination is the result of experiences and the mind's ability to mold them into different forms or to conclude connections between them. It takes two to tango in this regard. Conception is more abstract and doesn't require external experiences at all.”

Nonetheless, imagination may still be required to juxtapose (or 'tango' with) one's 'conceptions'. Even if conceptions (does Al-Andalusi mean concepts?) are abstract entities, the imagination will still be required to juxtapose or use them.

In any case, there are naturalist (as well as plain old empiricist) explanations as to why the mind is “capable of conceiving of possibilities that the external world does not offer through direct experience”. The thing is that the mind doesn't really move beyond experience in these instances (though it may in others). It simply plays with experiences and juxtaposes them to create something that doesn't itself exist in experience.

Chalmers himself says that

“a claim is conceivable when it is not ruled out a priori”.

Put simply, there'll be an indefinite (infinite?) number of scenarios (or claims) which can't be “ruled out a priori”. Even the existence of a shark with legs or of mushrooms with a sense of humour can't be ruled out a priori. In other words, the only things which can be ruled out a priori are claims/scenarios which break known logical laws or which contain contradictions. Thus the conceivable universe (as it were) could be highly populated with strange and bizarre entities, conditions, events, etc.

Chalmers offers his own example of these conceivable scenarios. He says that it's “conceivable that there are mile-high unicycles”.

Philip Goff (when discussing Chalmers' position) expresses his view about the importance of conceivability in this way:

“If P is conceivably true, then P is possibly true.”

This can also be expressed in possible-worlds terms thus:

“If P is conceivably true (upon ideal reflection), then there is a possible world W, such that P is true at W considered as actual.”

Or, less technically, Goff says that

“Chalmers holds that every conceivably true proposition corresponds in this way to some genuine possibility”.

Yet all the above seems to assume that there's a determinate and precise meaning of the words “conceivably” and “conceivably true”. There's also a problem with the phrase “upon ideal reflection”. Goff must be aware of all this because he also says that

“conceivability entails possibility when you completely understand what you’re conceiving of”.

Goff puts the case for conceivability-leading-to-possibility more explicitly (i.e., less technically) when he states the following:

“We could not coherently conceive of the seven bricks being piled on top of one another in the way that they are in the absence of the tower. In contrast, it is eminently possible to conceive of our seven subjects of experience experiencing the colours of the spectrum, existing in the absence of a subject of experience having an experience of white.”

This may very well mean that the conceivability-to-possibility argument may not get off the ground (at least in some cases) because nothing is really conceived of in the first place – even in the case of “ideal reflection”.

Goff goes much further than this logical principle. Not only does he argue that the conceiving of P is a reason for believing that P is metaphysically possible: he also argues that it may be the case that “metaphysical possibility is just a special kind of conceivability”. Note the use of the “is of identity” here. We're told that metaphysical possibility is conceivability. Thus it's not just that our conceiving of P may - or does - give us one reason to believe that P is possible. The very conceiving of P seems to bring about the metaphysical possibility of P.

Despite all the above, Goff himself expresses the position that conceivability may not always give us metaphysical possibility. That is, even if we do allow various moves from conceivability to metaphysical possibility, sometimes what we think is metaphysically possible still remains unbelievable. Or as Goff himself puts it:

“When metaphysical possibility is so radically divorced from conceptual coherence... I start to lose my grip on what metaphysical possibility is supposed to be.”

It also seems that metaphysical possibility has moved beyond conceivability here – or at least beyond “conceptual coherence”. Thus that may mean that the move from conceivability to metaphysical possibility is sometimes illegitimate in the first place (Chalmers talks of “misdescriptions”). That is, a specific conceiving may not warrant the metaphysical possibility which is derived from it. To stress that point, Goff also says that

“a radical separation between what is conceivable and what is possible has the potential to make our knowledge of possibility problematic”.

However, doesn't Chalmers himself provide a very tight link between conceivability and metaphysical possibility? If that's the case, then how can there ever be a “radical separation” between the two? (Compare Descartes on “human reason” when it's properly used.) Thus if that link were to be broken, would that be due to the fact that some conceivings aren't really genuine conceivings at all? Either that, or some links between conceivings and possibilities simply aren't tight enough. Alternatively, perhaps some moves from conceivings to possibilities (as already stated) are completely bogus from the very beginning.

Two More Conceivings


Chalmers offers us another example in which we conceive of water being XYZ.

What does it mean to “conceive” the statement “Water is XYZ”? Surely it's no use Chalmers going into detail if this isn't established in the first place.

Is the statement “XYZ is water” conceivable simply because we're imagining “watery stuff” (i.e., Chalmers' “primary intension”)? But are we conceiving of water actually being XYZ (i.e., rather than simply conceiving of watery stuff)? Isn't that something completely different? So here goes:

i) The first act of conceiving has to take on board what XYZ actually is. (Say, if it's meant to be some kind of fictional – though possible – molecule.)
ii) And then one needs to conceive of this XYZ molecule (or otherwise) actually being water.

But what, exactly, is being conceived here?

Here, as elsewhere, Chalmers doesn't make a distinction between imagining x (or P) and conceiving x (or P) - even though other philosophers have made such a distinction (as seen above). To put it basically, some philosophers argue that conceivings don't depend on the formation of any mental images. Therefore conceivings can be seen as being a more sophisticated form of (as it were)... imagining. That is, of imagination without imagery (if that isn't an oxymoron).

A Million-sided Object

Let's go into more detail about the nature of conceiving with Goff's own example of a million-sided object.

In one sense it can be said that we can indeed conceive of such a thing. Or, more helpfully, if I ask the conceiver this question:

What do you conceive of when you conceive of a million-sided object?

Then the conceiver can reply by saying:

I conceive of an object which has a million sides.

But what does that mean? What, exactly, is he conceiving of? Is he simply saying the following? -

i) A million-sided object has a million sides.
ii) Therefore I have just conceived of a million-sided object.

Doesn't he simply (analytically) know that if something has a million sides, then he's conceived of an object having a million sides? Though is that really a case of his conceiving of a million-sided object or is it a statement of some kind of tautology?

For a start, no one can picture (or imagine) a million-sided object. So what's left? Goff says that “the concept million-sided object is transparent”. That is,

“it is a priori (for someone possessing the concept, and in virtue of possessing the concept) what it is for something to have a million sides”.

Goff's quote above is simply a rerun of what's already been said. That is:

(Q) What is it to conceive of something which has a million sides?
(A) It is to conceive of a million-sided object.

Here again, one simply restates the description of a fictional/possible object.

Perhaps my position is still too dependent on our contingent mental states and their content (e.g., mental images); whereas Goff's position may be strictly logical. Alternatively, perhaps Goff's position is strictly mathematical/geometrical (therefore abstract) in nature.

Thus perhaps it's an entirely logical and/or metaphysical point to state the following:

The concept [million-sided object] is conceivable and transparent.

But what does that claim amount to? Indeed how different is conceiving of a million-sided object to conceiving of a round square?

Nonetheless, it's certainly the case that a round square isn't in the same logical space as a million-sided object.

What would be easier to say is that a million-sided object could – or even does – exist – if only as an abstract object; though it still can't be conceived of. In this case we can cite René Descartes' example of a chiliagon. (I suspect that Goff had this in mind when he cited his own example.) This is a thousand-sided polygon.

The chiliagon is classed as a “well-defined concept” which, nonetheless, can't be imagined or visualised. Indeed, even if massive in size, it would still be visually indistinguishable from a circle. Thus it may even be the case that a chiliagon can't be conceived of either - even if we have a concept of it. Though that, again, depends on what's meant by the words “conceived of”. In any case, I would call a thousand-sided polygon a mathematical/geometrical abstract object; not a concrete object. In other words, it can't be found or even made. Nonetheless, that doesn't stop it from being a well-defined concept.
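A quick computation (my own, not Goff's or Descartes') shows just how indistinguishable: for a regular n-gon inscribed in a unit circle, the midpoint of each edge falls inside the circle by 1 − cos(π/n).

```python
import math

# Maximum radial deviation of a regular n-gon from its circumscribed
# unit circle: the edge midpoints sit at distance cos(pi/n) from the centre.
for n in (1_000, 1_000_000):
    deviation = 1 - math.cos(math.pi / n)
    print(f"n = {n:>9}: deviation = {deviation:.2e} of the radius")
```

For the chiliagon the deviation is about 5 × 10⁻⁶ of the radius: on a figure a metre across, a few microns, far below anything the eye can resolve.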

And even if the words “having a well-defined concept” and “conceiving of” are seen as virtual synonyms, it's still the case that both the layperson and the expert would need to conceive of (or have a well-defined concept of) what it is that's being conceived of.

However, and as already stated, Goff believes that “the concept million-sided object is transparent”. Moreover,

“when one conceives of a million-sided object one completely understands, or is in principle able to reason one’s way to a complete understanding of, the situation being conceived of”.

Goff goes further when he says that

“it is a priori for the conceiver what it is for the state of affairs they are conceiving of [i.e., a million-sided object] to obtain”.

Thus we reach the important conclusion which Goff has been leading up to all along. Namely,

“that we can move from the conceivability (upon ideal reflection) of the states of affairs so conceived, to its genuine possibility”.

Despite all that, isn't the following definition and critical account (i.e., rather than a mere concept) of a chiliagon what must be conceived of? -

“A chiliagon is a polygon with 1,000 sides. A regular chiliagon is not a constructible polygon.”

The quote above says that the “regular chiliagon is not a constructible polygon”. Nonetheless, is it still conceivable?

***********************************