Tuesday 13 August 2019

Neuropsychologist Nicholas Humphrey vs. Philosopher Daniel Dennett


Word Count: 670

It can be argued that the exact antithesis of Daniel Dennett's position (on consciousness) is put forward by the neuropsychologist Nicholas Humphrey. This is graphically shown in Humphrey's phrase “the experience of raw being”. That is, a level of consciousness that's worlds away from Dennett's propositional or functional consciousness. In addition, it's a “lower-level” of consciousness that's “unreflected on”. In more detail:

“[P]rimitive sensations of light, cold, smell, taste, touch, pain; the is-ness, the present tense of sensory experience, which doesn't require any further analysis or introspective awareness to be there for us but is just as ? of existence.”

If there is such a state, what would Dennett have to hold onto? Where's the third-person heterophenomenology in all of this? Where are the judgments or the “verbal reports”?

It doesn't help that Humphrey uses terms like “the is-ness” and the like. The problem, then, is that Humphrey's account may be seen as a position on some kind of "spiritual" or Zen-like states. However, I believe that Humphrey doesn't have any of this in mind. These aren't Zen-like or spiritual states of consciousness he's referring to. They're states that most of us experience at certain points each and every day. And neither are these “raw” states the end result of mental preparation and will power, as spiritual states often are. I would argue that they are the givens of everyday experience or consciousness.

In any case, Dennett would surely reject these conscious states. He'd be an eliminativist and a verificationist about them. They are, after all, private (if in a very rudimentary sense). Still, these raw states don't necessarily fall foul of the strictures of behaviourists, Wittgenstein, Quine, functionalists and the rest. That's because nothing much is being claimed for them. They aren't the “source of meaning” (which Wittgenstein and Quine rejected). They don't constitute a "private language" of any kind. And neither do they constitute first-person “infallible knowledge”. They're simply basic experiences. Therefore, almost by definition, they're outside of science and Dennett's own philosophical heterophenomenology and verificationism.

Basically, one can admit that they serve no purpose... as such. Nonetheless, they do serve a purpose for Humphrey himself in that they show him

“that is what it's like to be me, or what it's like to be a dog, or what it's like to be a baby”.

So we don't need to build a philosophical edifice on top of Humphrey's “raw being” or “is-ness”; as Descartes, the phenomenologists and others did. We simply need to accept that these states are part of consciousness. And they're essentially above and beyond all that's functional, cognitive, judgmental, propositional and the like.

Earlier the lack of purpose for these basic conscious states was mentioned. And that's why Dennett said the following to Humphrey:

“Look, I hear what you're saying, but I simply don't have any reference point for it. Your raw sensations, if they exist, leave nothing behind. They might as well never have occurred.”

So as was said earlier:

i) If the conscious states Humphrey refers to have no “reference point”, then they serve no purpose.
ii) If they “leave nothing behind”, then they serve no purpose.
iii) And if they “might as well never have occurred”, then they serve no purpose.

In other words, to Dennett, all this basically means (surely) that if these conscious states serve no purpose, then they don't (indeed can't) exist. That really does seem to be Dennett's position. And it is pure verificationism and eliminativism at one and the same time.

This conclusion, therefore, is completely at one with Humphrey when he responds to Dennett by saying that

“[f]or Dan, the basic constituents of consciousness are ideas, judgements, propositions, and so on”.

Humphrey continues:

“For Dan, if there's nothing left after the sensation has passed – nothing in the way of a text, which says something like, 'Memo to self: have just had a sensation' – then it didn't happen.”

Dennett's position is extreme. (At least it is to me.) What's more, I can't help believing that it's obviously false.




Monday 12 August 2019

John Searle on Brain-Consciousness Causation

Searle argues that it’s not that “brain processes cause consciousness”. It’s that “consciousness is itself a feature of the brain”. A loose analogy is a table’s weight causing an indented rug. The table’s weight on the rug and the indentation occur at one and the same time.

Daniel Dennett (left) and John Searle

The American philosopher Daniel Dennett (1942-) once claimed that John Searle’s position is that the brain “secretes” consciousness. I suspect that Searle would argue that consciousness can’t be secreted out of anything — even out of the brain. That’s because Searle sees consciousness as a higher-level attribute of the brain — yes, of the brain. This means that the idea of consciousness being secreted out of a brain which it is already a feature of doesn’t really make sense.

[It seems that someone — perhaps Dennett? — once said to Searle: “It sounds like you’re saying the brain ‘secretes’ consciousness the way the stomach secretes acid or the liver secretes bile.” Searle replied: “I have never claimed that [].”]

So what about the word “cause”?

If it can be stressed (as Searle does) that consciousness is a higher-level feature of the brain, then how can the brain cause consciousness?

The problem here, according to Searle, is that many people have a misconception about causation — at least when it comes to the brain causing consciousness.

The most important point which Searle makes (in this discussion of consciousness) is that causation needn’t be seen in the following temporal terms:

a cause C followed (in time) by effect E

Here we have two things: a cause followed by an effect. That is, the cause occurs at time t₁ and the effect occurs at a later time t₂. This surely implies a kind of dualism for the simple reason that the brain’s cause-events occur before the effect-event that is consciousness (or a conscious state). Thus these two things (i.e., brain processes and consciousness) must be separate in that the former causes the latter. However, what if we can have causal processes which don’t involve a cause followed by an effect? That is, in which the processes of consciousness occur at the very same time as the brain processes.

Searle gives an example of a heavy table and the impression it makes on a rug.

This isn’t a question of the following:

the table-pressure-event (or state) causing the indented-rug-event (or state).

The weight of the table and the indented rug occur at one and the same time. Yet this is still a causal process. However, it’s not a case of a cause followed by an effect. It’s a causal process in which the weight and the indentation occur at one and the same time.

As Searle puts it: the gravitational force of the table shouldn’t be taken as an event which occurs before the indentation of the rug which is under it. Moreover, Searle states that “gravity is not an event” at all — it’s a force which is always there.

Searle’s next example is about tables and their solidity.

It can be said that the density (or nature) of the table’s molecules doesn’t cause the “solidity of the table”. That is, we don’t have a cause (i.e., the molecules and their behaviour) followed by an effect (i.e., the solidity of the table). Instead, whenever there are certain kinds of molecules of this specific density and structure, then we’ll also have a solid table. Such molecules don’t come first and then cause the solidity of the table. That last possibility would mean that there was a point at which we had that very same table (made up of the same molecules and the same structure), but at which the table wasn’t solid (say, it was fluid or floppy). Instead, as soon as we have that configuration and that set of molecules, we also have the table’s solidity. The one doesn’t come before the other.

That said, it’s still the causal processes of the molecules which are responsible for the solidity of the table. That is, we still have causation and causal processes. It’s just that we don’t have a cause followed by an effect.

One can see where Searle’s line of argument is going here.

It can now be argued that we shouldn’t see the brain’s processes (or states) as causes which bring about the processes (or states) of consciousness. Instead, we have brain processes (or states) and consciousness at one and the same time. That is, the brain’s processes (or states) don’t come before the processes (or states) of consciousness (or before consciousness itself). Both occur together.

Searle himself writes:

“Lower-level processes in the brain cause my present state of consciousness, but that state is not a separate entity from my brain; rather it is just a feature of my brain at the present time. [It’s not] that brain processes cause consciousness but that consciousness is itself a feature of the brain [].”

It can still be argued that the “lower-level processes in the brain cause my present state of consciousness”. So the word “cause” can still be used. However, we don’t have the following:

causes (or brain processes) which come before an effect (consciousness or a mental state)

Consciousness (or a conscious state) “is not a separate entity from my brain”. It is, instead, “a feature of my brain”.

So Searle’s position isn’t too unlike Donald Davidson’s notion of “conceptual pluralism” (as expressed in his anomalous monism) combined with his “substance monism”. That is, the brain and consciousness are seen as being part of (loosely) the same substance — if with different features or properties, which are characterised both from the subject’s inside and from the third-person outside. That is, we can apply different concepts to consciousness (or mind) which we wouldn’t apply to the brain itself. However, consciousness is still just a “feature of the brain”. It’s not something different. It’s not another substance.

So this argument works against mind-body dualism. However, it’s not a case of reductive physicalism either. Instead, Searle simply denies the duality of brain and consciousness (or matter and mind) in the first place.

And because Searle’s position on causation is (perhaps) odd when it comes to this notion of atemporal causation, it may be wise to finish by citing what another philosopher thinks of it.

Take Nick Fotion. He tells us that Searle’s theory

“shows that the biological mechanisms on the lower level of the diagram have their causal effects on the upper level not over a period of time”.

Fotion continues:

“The emergent changes on the upper level are simultaneous with respect to what happens (vertically) below. Such is not the case when the mind affects the body on any level.”

Of course the central problem here is one of making sense of atemporal causation. That’s primarily because it’s usually believed that a sequence in time is built into the very notion of causation (or causality) — even when it comes to “backward causality”. What’s more, it seems that Searle could argue just about everything he does argue without also employing the notion of causation (or using the word “cause”). That is, he could argue that brain processes occur at the very same time as their — parallel? — conscious states without bringing causation on board at all.

All that said, this particular issue must be tackled at another time.

Tuesday 6 August 2019

Philosophical Shorts (2)


The Function of Experience

It's always odd when philosophers ask about “the function of experience” (or consciousness). After all, isn't it blatantly obvious what the many functions of experience/consciousness are? Don't we experience experience functioning every day of our lives? Indeed every (waking) minute of our lives?

Though, the argument goes, we could be wrong about all this.

(The functions the philosopher David Chalmers always refers to are “perceptual discrimination, categorization, internal access, verbal report”.)

Questions about the function of experience/consciousness occur primarily because many cognitive and behavioural functions do - and could - occur “in the dark” - that is, without experience/consciousness.

However, the following argument seems invalid:

i) Many cognitive and behavioural functions occur without experience/consciousness and they could occur without experience/consciousness.
ii) Therefore experience/consciousness has no function.

But why not the following (which isn't an argument, simply two claims)? -

i) Many cognitive and behavioural functions occur without experience/consciousness and they could occur without experience/consciousness.
ii) However, experience/consciousness still has a function.

Experience could (or does) add extra functions into the pot. So the argument above is not unlike the following:

i) It is a fact that people drink water without using cups and they could drink water without using cups. (They drink water straight from the tap, from old boots, out of streams, etc.)
ii) Therefore cups have no function.

There are two other important reasons to question the function of experience/consciousness:

1) Experience/consciousness is epiphenomenal.
2) Although we believe that our experiences have a function; they don't. (A position advanced, I believe, by Daniel Dennett - though perhaps not as explicitly as this.)

The Why of the Big Bang

If one explains the Big Bang in terms of processes, forces, fields, particles, events, etc., then this is explaining how it came about. Yet someone may ask why it came about. What does this portentous ‘why?’ mean in this context?

If there is such a why to the Big Bang, then that may mean that it came about for some reason (or purpose). This may also mean that if the questioner doesn't allow the reason (or purpose) to be contained within such processes and interactions, then the reason (or purpose) for the Big Bang must be outside the event itself. In order for something to exist outside the Big Bang, it must exist outside of time and space. It must also be non-material.

Must it be God? But who created God? And if God can be a self-creator, then why not the universe too?

(The possibility of a multiverse, an infinite universe, etc., of course, complicates this issue.)

An Infinite Universe?

There's a paradox inherent in the idea of an infinite universe. An infinite future is possible; though perhaps not an infinite past. The argument against an infinite past has nothing to do with the belief that the universe must have been created at some time. It has to do with the implication which is inherent in the possibility of an infinite past itself. That is, if the past were infinite, then everything that could or might have happened would have happened. This conclusion quite clearly doesn't make sense as far as our own universe is concerned; though it is made possible if there are other universes. (Or, I should say, other universes within a greater universe – i.e., the multiverse.)

There is another possible scenario. That everything has happened within our universe, but all was destroyed by a previous contraction of the universe and we're now living in the very early stages of just one more expansion of a universe (ours) which has expanded and contracted many times before!

There is an obvious problem here too: not literally everything could have happened. There are two things left out here. One, technological developments in previous expansions of our universe – ones which might have stopped the universe from contracting. Two, and less feasibly, the destruction of the entire universe and multiverse.

Saturday 3 August 2019

Philosophical Shorts (03/08/2019)



Speed-of-Light Travel and Aging

The most basic measure of time is entropy. And, relative to observers on earth, entropic processes would run much more slowly at just below the speed of light. Therefore if Mr X is travelling at near the speed of light, then he would age much less quickly than his friends on earth. However, all his biological and physical processes would be slower too (again, as measured from earth). Indeed if we could get our biological processes to slow down without travelling at just below the speed of light, then perhaps we would age much less quickly here on earth too.
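The standard special-relativistic formula (not given in the text above, though it underwrites the claim) is the Lorentz factor γ = 1/√(1 − v²/c²): the closer the speed to c, the more slowly the traveller's clocks run as measured from earth. A minimal sketch:

```python
import math

def lorentz_factor(v_fraction):
    """Lorentz factor gamma for a speed given as a fraction of c.

    Earth observers measure the traveller's processes (entropic,
    biological, neurological) running gamma times more slowly.
    """
    return 1.0 / math.sqrt(1.0 - v_fraction ** 2)

# At 99% of the speed of light, the traveller ages roughly 7x
# more slowly as measured from earth.
print(round(lorentz_factor(0.99), 2))  # 7.09
```

So Mr X's slower ageing isn't a separate biological effect added on top of time dilation; it simply is time dilation, applied to his bodily processes.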

Could bodies which have evolved with processes of a certain ‘speed’ cope with such changes? This would also involve the slowing down of our mental (neurological) processes. That is, we would be both mentally slower and physically slower – like human slugs! As a guess, I would say that the human body couldn't cope with such changes. That's unless technological processes which counteracted the negative side-effects (as it were) could also be created.

Is Consciousness Constituted by Physical Processes?

David Chalmers asks an interesting question:

“Is consciousness constituted by physical processes, or does it merely arise from physical processes?”

The consensus position is that consciousness “arises” from physical processes. At first blush, saying that “consciousness is constituted by physical processes” appears to be a reductive position. That is, if x is “constituted by” A, B and C, then that must surely mean the following:

If x = A, B & C
then x can also be reduced to A, B & C.

Nonetheless, if one tackles this from another angle, consciousness can be constituted by physical processes even though it isn't identical to those processes. Thus, x is constituted by A, B and C, though x is not identical to A, B and C.

Here is a simple example. A house is constituted by bricks and other material objects. Though a house is not identical to the bricks and material objects which constitute it.

This doesn't work for consciousness because there are things true of consciousness which aren't true of brain/physical processes. Having said that, there are things true of bricks and other objects (as well as of the sum of bricks and other objects) which aren't true of the house which is made out of them. Yet some would say that the house is nothing “over and above” the bricks and other objects. Yes, in physical terms that's correct. However, there are still things true of the house which aren't true of the objects which “constitute it” - and not even true of the sum of the objects which constitute it. So, even though the house is entirely made of bricks and other objects/materials, there are things true of the house which aren't true of the things which constitute it (whether taken individually or collectively).


Sunday 28 July 2019

David Chalmers on the Abstract-Concrete Interface in Artificial Intelligence


Word Count: 4218


i) Introduction

ii) Implementation
iii) Causal Structure
iv) The Computer's Innards
v) Chalmers' Biocentrism?
vi) The Chinese Room


It's a good thing that the abstract and the concrete (or abstract objects in "mathematical space" and the "real world") are brought together in David Chalmers' account of Strong AI. Often it's almost (or literally) as if AI theorists believe that (as it were) disembodied computations can themselves bring about mind or even consciousness. (The same can be said, though less strongly, about functions or functionalism.) This, as John Searle once stated, is a kind of contemporary dualism in which abstract objects (computations/algorithms – the contemporary version of Descartes' mind-substance) bring about mind and consciousness on their own.

To capture the essence of what Chalmers is attempting to do we can quote his own words when he says that it's all about “relat[ing] the abstract and concrete domains”. And he gives a very concrete example of this.

Take a recipe for a meal. To Chalmers, this recipe is a “syntactic object[]”. However, the meal itself (as well as the cooking process) is an “implementation” that occurs in the “real world”.

So, with regard to Chalmers' own examples, we need to tie "Turing machines, Pascal programs, finite-state automata", etc. to "[c]ognitive systems in the real world" which are

"concrete objects, physically embodied and interacting causally with other objects in the physical world".

In the above passage, we also have what may be called an "externalist" as well as "embodiment" argument against AI abstractionism. That is, the creation of mind and consciousness is not only about the computer/robot itself: Chalmers' "physical world" is undoubtedly part of the picture too.

Of course most adherents of Strong AI would never deny that such abstract objects need to be implemented in the "physical world". It's just that the manner of that implementation (as well as the nature of the physical material which does that job) seems to be seen as almost – or literally – irrelevant.

Implementation

It's hard to even comprehend how someone could believe that a programme (alone) could be a candidate for possession of (or capable of bringing about) a conscious mind. (Perhaps no one does believe that.) In fact it's hard to comprehend what that could even mean. Having said that, when you read the AI literature, that appears to be exactly what some people believe. However, instead of the single word “programme”, AI theorists will also talk about “computations”, “algorithms”, “rules” and whatnot. But these additions still amount to the same thing – “abstract objects” bringing forth consciousness and mind.

So we surely need the (correct) implementation of such abstract objects.

It's here that Chalmers himself talks about implementations. Thus:

“Implementations of programs, on the other hand, are concrete systems with causal dynamics, and are not purely syntactic. An implementation has causal heft in the real world, and it is in virtue of this causal heft that consciousness and intentionality arise.”

Then Chalmers delivers the clinching line:

“It is the program that is syntactic, it is the implementation that has semantic content.”

More clearly, a physical machine is deemed to belong to the semantic domain and a syntactic machine is deemed to be abstract. Thus a physical machine is said to provide a “semantic interpretation” of the abstract syntax.

Yet how can the semantic automatically arise from an implementation of that which is purely syntactic? Well, that depends. Firstly, it may not automatically arise. And, secondly, it may depend on the nature (as well as physical material) of the implementation.

So it's not only about implementation. It's also about the fact that any given implementation will have a certain “causal structure”. And only certain (physical) causal structures will (or could) bring forth mind.

Indeed, bearing all this in mind, the notion of implementation (at least until fleshed out) is either vague or all-encompassing. (For example, take the case of one language being translated into another language: that too is classed as an “implementation”.)

Thus the problem of implementation is engendered by this question:

How can something concrete implement something abstract?

Then we need to specify the precise tie between the abstract and the concrete. That, in turn, raises a further question:

Can't any arbitrary concrete/physical implementation of something abstract be seen as a satisfactory implementation?

In other words, does the physical implementation need to be an “isomorphic” (a word that Chalmers uses) mirroring (or a precise “mapping”) of the abstract? And even if it is, how does that (in itself) bring about the semantics?

And it's here that we arrive at causal structure.

Causal Structure

One can see how vitally important causation is to Chalmers when he says that

"both computation and content should be dependent on the common notion of causation".

In other words, a computation and a given implementation will share a causal structure. Indeed Chalmers cites the example of a Turing machine by saying that

"we need only ensure that this formal structure is reflected in the causal structure of the implementation".

He continues:

"Certainly, when computer designers ensure that their machines implement the programs that they are supposed to, they do this by ensuring that the mechanisms have the right causal organization."

In addition, Chalmers tells us what a physical implementation is in the simplest possible terms. And, in his definition, he refers to "causal structure":

"A physical system implements a given computation when the causal structure of the physical system mirrors the formal structure of the computation."

Then Chalmers goes into more detail:

"A physical system implements a given computation when there exists a grouping of physical states of the system into state-types and a one-to-one mapping from formal states of the computation to physical state-types, such that formal states related by an abstract state-transition relation are mapped onto physical state-types related by a corresponding causal state-transition relation."
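Chalmers' quoted definition can be turned into a toy check (a hypothetical sketch with invented names – `implements`, the state labels – not anything from Chalmers himself): a mapping from formal states to physical state-types counts as an implementation only if it carries every formal state-transition onto a corresponding causal state-transition.

```python
# Toy sketch of the mapping condition (hypothetical names).
# Formal states are mapped onto physical state-types; the mapping
# counts as an implementation only if every formal transition is
# mirrored by a causal transition between the mapped state-types.

def implements(formal_transitions, physical_transitions, mapping):
    return all(
        (mapping[s1], mapping[s2]) in physical_transitions
        for (s1, s2) in formal_transitions
    )

# A two-state automaton that flips between A and B...
formal = {("A", "B"), ("B", "A")}
# ...and a circuit whose voltage causally alternates high/low.
physical = {("high", "low"), ("low", "high")}

print(implements(formal, physical, {"A": "high", "B": "low"}))   # True
print(implements(formal, physical, {"A": "high", "B": "high"}))  # False
```

On this picture the same formal structure could be mirrored by voltages, neurons or anything else with the right causal transitions – which is exactly the multiple-realisability point Chalmers goes on to make.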

Despite all that, Chalmers says that all the above is "still a little vague". He does so because we need to "specify the class of computations in question".

Chalmers also stresses the notion of (correct) "mapping". And what's relevant about mapping is that the

"causal relations between physical state-types will precisely mirror the abstract relations between formal states".

Moreover:

"What is important is the overall form of the definition: in particular, the way it ensures that the formal state-transitional structure of the computation mirrors the causal state-transitional structure of the physical system."

Thus Chalmers stresses causal structure. More specifically, he gives the example of “computation mirroring the causal organisation of the brain”. Chalmers also states:

“While it may be plausible that static sets of abstract symbols do not have intrinsic semantic properties, it is much less clear that formally specified causal processes cannot support a mind”.

In Chalmers' account, the concrete does appear to be primary in that the "computational descriptions are applied to physical systems" because "they effectively provide a formal description of the systems' causal organisation". In other words, the computations don't come first and only then is there work done to see how they can be implemented.

So what is it, exactly, that's being described?

According to Chalmers, it's physical/concrete "causal organisation". And when described, it becomes an "abstract causal organisation". (That's if the word "causal" can be used at all in conjunction with the word "abstract".) However, it is abstract in the sense that all peripheral non-causal and non-functional aspects of the physical system are simply factored out. Thus all we have left is an abstract remainder. Nonetheless, it's still a physical/concrete system that provides the input (as it were) and an abstract causal organisation (captured computationally) that effectively becomes the output.

Chalmers also adds the philosophical notion of multiple realisability into his position on AI. And here too he focuses on causal organisation. His position amounts to saying that the causal organisation instantiated by System A can be “mirrored” by System B, “no matter what the implementation is made out of”. So, clearly, whereas previous philosophers stressed that functions can be multiply realised, Chalmers is doing the same with causal organisation (which, in certain ways at least, amounts to the same thing). Or in more precise terms: System B “will replicate any organisational invariants of the original system, but other properties will be lost”. Nonetheless, here again we have the mirroring of one system by another, rather than having any discovery of the fundamental factors which will (or can) give rise to mind and consciousness in any possible system.

The Computer's Innards

As hinted at, the most important and original aspect of Chalmers' take on Strong AI is his emphasis on the "rich causal dynamics of a computer". This is something that most philosophers of AI ignore. And even AI technologists seem to ignore it – at least when at their most abstract or philosophical.

Chalmers firstly states the problem. He writes:

"It is easy to think of a computer as simply an input-output device, with nothing in between except for some formal mathematical manipulations."

However:

"This way of looking at things... leaves out the key fact that there are rich causal dynamics inside a computer, just as there are in the brain."

So what sort of causal dynamics? Chalmers continues:

"Indeed, in an ordinary computer that implements a neuron-by-neuron simulation of my brain, there will be real causation going on between voltages in various circuits, precisely mirroring patterns of causation between the neurons."

In more detail:

"For each neuron, there will be a memory location that represents the neuron, and each of these locations will be physically realised in a voltage at some physical location. It is the causal patterns among these circuits, just as it is the causal patterns among the neurons in the brain, that are responsible for any conscious experience that arises."

And when Chalmers compares silicon chips to neurons, the result is very much like a structuralist position in the philosophy of physics.

Basically, entities/things (in this case, neurons and silicon chips) don't matter. What matters is “patterns of interaction”. These things create “causal patterns”. Thus neurons in the biological brain display certain causal interactions and interrelations. Silicon chips (suitably connected to each other) also display causal interactions and interrelations. So perhaps both these sets of causal interactions and interrelations can be (or may be) the same. That is, they can be structurally the same; though the physical materials which bring them about are clearly different (i.e., one is biological, the other is non-biological).

Though what if the material substrate does matter? If it does, then we'd need to know why it matters. And if it doesn't, then we'd also need to know why.

Biological matter is certainly very complex. Silicon chips (which have just been mentioned) aren't as complex. Remember here that we're matching individual silicon chips with individual neurons: not billions of silicon chips with the billions of neurons of the entire brain. However, neurons, when taken individually, are also highly complicated. Individual silicon chips are much less so. However, all this rests on the assumption that complexity - or even maximal complexity - matters to this issue. Clearly in the thermostat case (as cited by Chalmers himself), complexity isn't a fundamental issue or problem. Simple things exhibit causal structures and causal processes; which, in turn, determine both information and - according to Chalmers - very simple phenomenal experience.

Chalmers' Biocentrism?

Nobel laureate Gerald Edelman once said that mind and consciousness

“can only be understood from a biological standpoint, not through physics or computer science or other approaches that ignore the structure of the brain”.

Of course many workers in AI disagree with Edelman's position.

We also have John Searle's position, as expressed in the following:

"If mental operations consist of computational operations on formal symbols, it follows that they have no interesting connection with the brain, and the only connection would be that the brain just happens to be one of the indefinitely many types of machines capable of instantiating the program. This form of dualism is not the traditional Cartesian variety that claims that there are two sorts of substances, but it is Cartesian in the sense that it insists that what is specifically mental about the brain has no intrinsic connection with the actual properties of the brain. This underlying dualism is masked from us by the fact that AI literature contains frequent fulminations against 'dualism'.”

Searle is noting the radical disjunction created between the actual physical reality of biological brains and how these philosophers and scientists explain and account for mind, consciousness, cognition and understanding.

Despite all that, Searle doesn't believe that only biological brains can give rise to minds and consciousness. Searle's position is that only brains do give rise to minds and consciousness. He's emphasising an empirical fact; though he's not denying the logical and metaphysical possibility that other things can bring forth minds.

Thus wouldn't many AI philosophers and even workers in AI think that this replication of the causal patterns of the brain defeats the object of AI? After all, in a literal sense, if all we're doing is replicating the brain, then surely there's no strong AI in the first place. Yes, the perfect replication of the brain (to create an artificial brain) would be a mind-blowing achievement. However, it would still seem to go against the ethos of much AI in that many AI workers want to divorce themselves entirely from the biological brain. So if we simply replicate the biological brain, then we're still slaves to it. Nonetheless, what we would have here is something that's indeed thoroughly artificial; but which is not a true example of Strong AI.

Of course Chalmers himself is well aware of these kinds of rejoinder. He writes:

"Some may suppose that because my argument relies on duplicating the neural-level organisation of the brain, it establishes only a weak form of strong AI, one that is closely tied to biology. (In discussing what he calls the 'Brain Simulator' reply, Searle expresses surprise that a supporter of AI would give a reply that depends on the detailed simulation of human biology.) This would be to miss the force of the argument, however."

To be honest, I'm not sure if I understand Chalmers' reason for believing that other theorists have missed the force of his argument. He continues:

"The brain simulation program merely serves as the thin end of the wedge. Once we know that one program can give rise to a mind even when implemented Chinese-room style, the force of Searle's in-principle argument is entirely removed: we know that the demon and the paper in a Chinese room can indeed support an independent mind. The floodgates are then opened to a whole range of programs that might be candidates to generate conscious experience."

As I said, I'm not sure if I get Chalmers' point. He seems to be saying that even though one system or program only "simulates" the biological brain, it's still, nonetheless, a successful simulation. And because it is a successful simulation, other ways of creating conscious experience may be possible. That is, the computational program or system has only simulated the "abstract causal structure" of the brain (to use Chalmers' own terminology): it hasn't replicated biological brains in every respect or detail. And because it has gone beyond biological brains at least in this (limited) respect, the "floodgates are then opened to a whole range of programs" which may not be (mere) "Brain Simulators" or replicators.

Despite all the above, the very replication of the brain is problematic to start with. Take the words of Patricia Churchland:

“[I]t may be that if we had a complete cognitive neurobiology we would find that to build a computer with the same capacities as the human brain, we had to use as structural elements things that behaved very like neurons”.

Churchland continues by saying that

“the artificial units would have to have both action potentials and graded potentials, and a full repertoire of synaptic modifiability, dendritic growth, and so forth”.

It gets even less promising when Churchland adds:

“[F]or all we know now, to mimic nervous plasticity efficiently, we might have to mimic very closely even certain subcellular structures.”

Readers may also have noted that Churchland was only talking about the biology of neurons, not the biology of the brain as a whole. However, wouldn't the replication of the brain (as a whole) make this whole artificial-mind endeavour even more complex and difficult?

In any case, Churchland sums up this immense problem by saying that

“we simply do not know at what level of organisation one can assume that the physical implementation can vary but the capacities will remain the same”.

That's an argument which says that it's wrong to accept the programme-implementation “binary opposition” in the first place. Though that's not to say - and Churchland doesn't say - that it's wrong to concentrate on functions, computation or cognition generally. It's just wrong to completely ignore the “physical implementation” side of things. Or, as Churchland says at the beginning of one paper, it's wrong to “ignore neuroscience” and focus entirely on functions, computations, algorithms, etc.

The Chinese Room

In respect to the points just made, Chalmers also tackles the Chinese Room argument of John Searle.

So is the Chinese Room meant to be a replication or simulation (however abstract and minimalist) of the biological brain in the first place? If it isn't, then Chalmers' immediate move to the Chinese Room almost seems like a non sequitur... except, as stated, Chalmers has said that once the Brain Simulator program has been successful in bringing forth a mind, the "floodgates are open" - and that must include the well-known Chinese Room.

Chalmers gives us an example of what he calls “causal organisation” which relates to the Chinese Room.

Firstly, Chalmers states that he "supports the Systems reply", according to which

"the entire system understands Chinese even if the homunculus doing the simulating does not".

As many people know, in the Chinese Room there is a “demon” who deals with “slips of paper” with Chinese (or “formal”) symbols on them. He therefore indulges in “symbol manipulation”. Nonetheless, all this also involves an element of causal organisation; which hardly any philosophers seem to mention. Chalmers says that the slips of paper (which contain Chinese symbols) actually

“constitute a concrete dynamical system with a causal organisation that corresponds directly to that of the original brain”.

At first blush, it's difficult to comprehend what sort of causal organisation is being instantiated by the demon dealing with slips of paper with formal/Chinese symbols written on them. Sure, there are causal events/actions going on - but so what? How do they bring about mind or consciousness? Here again the answer seems to be that they do so because they are “mirroring” what Chalmers calls the “original brain”. But even here it's not directly obvious how a demon manipulating formal symbols in a room can mirror the brain in any concrete sense.

So we need Chalmers to help us here. He tells us that the “interesting causal dynamics are those which take place among the pieces of paper”. Why? Because they “correspond to the neurons in the original case”. The demon (who is stressed in other accounts of the Chinese Room scenario) is completely underplayed by Chalmers. What matters to him is that the demon “acts as a kind of causal facilitator”. That is, the demon isn't a homunculus (or the possessor of a mind) within the Chinese Room – or at least not a mind that is relevant to this actual scenario. (So why use a demon in this scenario in the first place?)

Nonetheless, it's still difficult to comprehend how the actions of a demon playing around with slips of paper (with formal/Chinese symbols on them) can mirror the human brain - or anything else for that matter. I suppose this is precisely where abstraction comes in. That is, a computational formalism captures the essence (or causally invariant nature) of what the demon is doing with those slips of paper and formal symbols.

Having said all the above, Chalmers does say that in the Systems reply "there is a symbol for every neuron".

This "mapping", at first sight, seems crude. What can we draw from the fact that there's a "symbol for every neuron"? It sounds odd. But, of course, this is a causal story. Thus:

"[T]he patterns of interaction between slips of paper bearing those symbols will mirror patterns of interaction between neurons in the brain."

This must mean that the demon's interpretation of the symbols on these slips of paper instantiates a causal structure which mirrors the "interaction between neurons in the brain". This is clearly a very tangential (or circuitous) mapping (or mirroring). Still, it may be possible... in principle. That is, because the demon interprets the symbols in such-and-such a way, he also causally acts in such-and-such a way. And those demonic actions "mirror [the] patterns of interaction between neurons in the brain". Thus it doesn't matter that the demon doesn't "understand" the symbols: he still ties each symbol to other symbols on other slips of paper. And these symbols bring about the specific physical actions of the demon. To repeat: all this occurs despite the fact that the demon doesn't "understand what the symbols mean". However, he does know how to tie (or connect) certain symbols to certain other symbols.
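Chalmers' claim of "a symbol for every neuron" can be given a toy sketch. The following Python fragment is purely illustrative and isn't taken from Chalmers, Searle or any of the texts discussed: it runs a tiny three-unit boolean "network" directly, and then re-runs the very same update rules "Chinese-room style", as a demon syntactically rewriting symbol strings. The point is only that the two trajectories are isomorphic: same causal organisation, different implementation.

```python
# Hypothetical toy: the same causal organisation realised twice.
# All names and the update rule here are invented for illustration.

# Update rule: each unit's next state is a fixed function of the
# current global state (a crude stand-in for synaptic wiring).
def step(state):
    a, b, c = state
    return (b, a and c, not a)

def run_network(initial, n_steps):
    """Run the 'neural' implementation directly."""
    states = [initial]
    for _ in range(n_steps):
        states.append(step(states[-1]))
    return states

# The demon's rulebook: purely syntactic symbol rewriting. "1"/"0"
# are the symbols on the slips of paper; the demon needn't know what
# they mean, only which symbol-string follows which.
def demon_step(symbols):
    a, b, c = (s == "1" for s in symbols)
    nxt = (b, a and c, not a)
    return "".join("1" if x else "0" for x in nxt)

def run_room(initial_symbols, n_steps):
    """Run the same organisation via symbol manipulation."""
    slips = [initial_symbols]
    for _ in range(n_steps):
        slips.append(demon_step(slips[-1]))
    return slips

if __name__ == "__main__":
    neural = run_network((True, False, True), 5)
    room = run_room("101", 5)
    encoded = ["".join("1" if x else "0" for x in s) for s in neural]
    # The slip-of-paper trajectory mirrors the 'neural' trajectory
    # step for step.
    print(encoded == room)  # True
```

Of course, this settles nothing philosophically: it only shows what "mirroring the patterns of interaction" could mean in the most minimal case, not that such mirroring brings about experience.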

And, despite all that detail, one may still wonder how this brings about consciousness or mind. That is, how does such circuitous (or tangential) mirroring (or mapping) bring about consciousness or mind? After all, any x can be mapped to any y. Indeed any x & y can be mapped to any other x & y. And so on. Yet Chalmers states that

"[i]t is precisely in virtue of this causal organization that the system possesses its mental properties".

Don't we have a loose kind of self-reference here? That is, a computational formalism captures the Chinese Room scenario – yet that scenario itself also includes formal symbols on slips of paper. So we have a symbolic formalism capturing the actions of a demon who is himself making sense (if without a semantics) of abstract symbols.

Another way of putting this is to ask what has happened to the formal symbols written on the slips of paper. Chalmers stresses the causal role of moving the slips of paper around. But what of the formal symbols (on those slips of paper) themselves? What role (causal or otherwise) are they playing? His position seems to be that it's all about the demon's physical/causal actions in response to the formal symbols he reads or interprets.

In any case, Chalmers stresses that it's not the demon who matters in the Chinese Room - or, at least, the demon is not the primary factor. What matters is the “causal dynamics in the [Chinese] room”. Those causal dynamics “precisely reflect the causal dynamics in the skull”. And, because of this, “it no longer seems so implausible to suppose that the system gives rise to experience”. Having said that, Chalmers does say that "[e]ventually we arrive at a system where a single demon is responsible for maintaining the causal organization". So despite the demon being in charge (as it were) of causal events/actions in the Chinese Room, he still doesn't understand the formal symbols he's manipulating.

Another problem is the fact that although the demon may not understand the Chinese symbols, he still, nonetheless, has a mind and consciousness. And employing a conscious mind as part of this hypothetical Chinese Room scenario (in order to bring about an artificial conscious mind) seems a little suspect.

To repeat: if one is precisely mirroring the brain in the Chinese Room (or the individuals of the population of China in another thought experiment from Ned Block), then do we have AI at all? In a sense we do in that such a system would still be artificial. But, in another sense, if it's simply a case of mirroring/mapping the biological brain, then perhaps we haven't moved that far at all.

*******************************************

Note:

1) Thus a detour towards John Searle may be helpful here.

Searle actually accuses those who accuse him of being a “dualist” of being, well, dualists.

His basic position on this is that if computationalists or functionalists, for example, play down the physical and causal biology of brains and exclusively focus on syntax, computations and functions (i.e., the form/role/structure rather than the physical embodiment/implementation), then that will surely lead to a kind of dualism. In other words, a radical disjunction is created here between the actual physical and causal reality of brains and how these philosophers explain and account for intentionality, mind and consciousness.

Thus Searle's basic position on this is:

i) If Strong AI proponents, computationalists or functionalists, etc. ignore or play down the physical biology of brains; and, instead, focus exclusively on syntax, computations and functions (the form/role rather than the physical embodiment/implementation),
ii) then that will surely lead to some kind of dualism in which non-physical abstractions basically play the role of Descartes' non-physical and “non-extended” mind.