Monday 9 July 2018

Book Reviews (1): David Chalmers' *The Conscious Mind*


The Conscious Mind: In Search of a Fundamental Theory is the best book I’ve read on a single philosophical subject. Of course I may believe that simply because David Chalmers tackles subjects that I’m interested in and he does so in a way I appreciate. Nonetheless, there’s a fairly substantial consensus on this book — at least among those people who care about this issue and who’re part of the “analytic tradition”.

For example, in 1996 the well-known American philosopher David Lewis wrote (some five years before he died in 2001) that The Conscious Mind “is exceptionally ambitious and exceptionally successful — the best book in philosophy of mind for many years”. Similarly, the British philosopher Colin McGinn said that the book is “one of the best discussions in existence, both as an advanced text and as an introduction to the issues”. And then Steven Pinker said that “The Conscious Mind is an outstanding contribution to our understanding of consciousness”.

My view is that The Conscious Mind: In Search of a Fundamental Theory has insights on virtually every page, is dense with argumentation and is clearly written, despite sometimes being technical.

I just said that I believe it’s the best book on a philosophical subject for a long time. I say that even though I don’t agree with everything David Chalmers argues. Indeed I don’t even agree with most of what he argues. For example, I have serious problems with Chalmers’ very strong and frequent emphasis on logical possibility, zombies and intuition (which are all connected together by Chalmers). Still, Chalmers argues his case in a very strong manner; though obviously not strongly enough to convince me.

The Conscious Mind: In Search of a Fundamental Theory dates back to 1996. It was Chalmers’ first book; though he’d published academic papers before this (some of which date back to 1990).

*********************************
“I have advocated some counterintuitive views in this work. I resisted mind-body dualism for a long time, but I have now come to the point where I accept it… I can comfortably say that I think dualism is very likely true. I have raised the possibility of a kind of panpsychism. Like mind-body dualism, this is initially counterintuitive, but the counterintuitiveness disappears with time… on reflection it is not too crazy to be acceptable… If God forced me to bet my life on the truth or falsity of the doctrines I have advocated, I would bet fairly confidently that experience is fundamental, and weakly that experience is ubiquitous.” — David Chalmers

I decided not to tackle any of David Chalmers’ topics in this review simply because the book is so dense with arguments. It just didn’t make sense to single out anything specific. And even if I had done, it would probably have turned this review into something else entirely.

Broadly speaking, Chalmers still holds most of the positions he articulated in this book. 

The Conscious Mind (as stated) is dense with argumentation. And partly because of that, Chalmers’ book fluctuates between reading like a paper in a technical philosophical journal (even if he steers away from soulless academese) and being a “popular philosophy” book. However, to be honest, though Chalmers’ writing is very clear, he rarely pulls off stuff that could be sensibly classed as “popular philosophy”. Indeed in the introduction Chalmers says that his “notional audience at all times has been [his] undergraduate self of ten years ago”. That’s not to say that there are no simple parts (or even simple chapters) in this book — there are. However, on the whole, it’s more technical than most “introductory” or popular books on philosophical subjects.

For example, the chapter ‘Supervenience and Explanation’ (which itself includes five sections) is highly technical. Indeed one section seems like a convoluted detour into modal logic, possible-worlds theory and semantics. I suppose that Chalmers would see all this as being a necessary technical grounding for what comes later. Indeed in some of these sections there’s hardly any discussion of consciousness. This is especially true of the long and technical section called ‘A posteriori necessity’, which is ten pages long and doesn’t contain a single mention of consciousness or the mind. The following twenty-four pages hardly mention consciousness either.

The most interesting chapters in the book (at least from a 2020 perspective) are ‘Naturalistic Dualism’ and ‘Consciousness and Information: Some Speculations’ (which deals with panpsychism). That’s primarily because naturalistic dualism is peculiar to Chalmers himself and panpsychism has a lot of contemporary relevance. Many of the other chapters, on the other hand, have been done to death in analytic philosophy; specifically the stuff on qualia, phenomenal consciousness, the nature of reduction, etc. Having said that, since this book was written in 1996, perhaps these subjects hadn’t really been done to death at that precise moment in philosophical history.

The last chapter, ‘The Interpretation of Quantum Mechanics’, seems rather odd to me. It’s a strange add-on. It’s very difficult to see how Chalmers’ take on the various interpretations of quantum mechanics fits into the rest of the work. Here again consciousness is hardly mentioned. When it is mentioned, it’s in relation to how consciousness has been featured in the scientific tradition of quantum mechanics. Thus there’s stuff about observation and measurement. Consciousness also features more heavily when Chalmers covers Hugh Everett’s interpretation of quantum mechanics in which “superposition is extended all the way to the mind”. The idea of superposed minds is also tackled — and it’s all very strange!

I suppose that one reason that Chalmers writes twenty-five pages on the various interpretations of quantum mechanics is that the quantum mechanics-consciousness connection was becoming fashionable in the 1990s. However, it seems that Chalmers believed that most citations of quantum mechanics — when it came to consciousness — didn’t solve what he calls “the hard problem of consciousness”. And neither was he too sympathetic with the idea of “superposed minds” within the strict context of Hugh Everett’s “many-worlds interpretation” (which Chalmers believes is a misreading of the physicist’s theory).

The chapter ‘Consciousness and Information: Some Speculations’ is — obviously! — the most speculative. Especially the section on panpsychism. Indeed Chalmers happily admits that. He even says that “[t]he ideas in this chapter” are “most likely to be entirely wrong”. Whether or not Chalmers believes that now — some 25 years later — is hard to say. He’s certainly added much to his position on panpsychism; as well as to his position on information theory.

Perhaps the chapters ‘Supervenience and Explanation’ and ‘The Irreducibility of Consciousness’ are the most important in The Conscious Mind. As stated earlier, there are some technical (as well as somewhat tangential) sections in these chapters too. (The chapter on qualia is also detailed and technical.) It’s in these chapters that Chalmers articulates his most central and important point about consciousness: that it’s not reducible to the physical. It’s also here that he states that “experience is a datum in its own right”. Therefore experience (or consciousness) needs to be treated that way.




Wednesday 27 June 2018

Philosophy Now - "What are the moral limits to free speech?"



"What are the moral limits to free speech?"

“Please give and justify your answer in less than 400 words.” - Philosophy Now (April/May 2018)

*******************

Dear Editor,

It's odd really. Many people claim to be strongly in favour of free speech. Yet, as soon as you scratch the surface, you'll quickly find that almost all the people you talk to realise (or acknowledge) that there must be at least some limits to free speech.

But there's a problem here.

People cite very different reasons as to why there should be limits to free speech. They also cite different examples of the kind of speech they believe should be limited (or made illegal). Having said that, it's also true that there are some well-known limits to free speech which almost everyone agrees upon. (Such as “shouting 'Fire!' in a crowded cinema” or encouraging paedophilia in public spaces.) Nevertheless, other proposed limits to free speech often tend to simply reflect people's extremely specific political biases. And because of that, it can be said that free speech would be drastically curtailed if all our political biases were acted upon by the state or by the legal system.

So perhaps any limits which are placed on free speech should be given a moral – i.e., not a political – justification. (Of course this is hinted at in the opening question.) Yet some people may now say that morality and politics are firmly intertwined when it comes to free speech! However, surely the two can be separated if the proposed limits on free speech are given abstract moral (as well as philosophical) justifications. In that way, even people who strongly disagree when it comes to politics could (at least in theory) accept such limits if they were given such moral justifications.

Despite all that, almost every moral justification of a limit to free speech will have its exceptions and opponents. It's also the case that extreme or perverse limitations on free speech could be morally justified. (Such as the argument that allowing people to debate race or violence will inevitably encourage racism or violence.) Self-referentially speaking, even limiting (or banning) the public discussion of the question “What are the moral limits to free speech?” could be morally justified.

Surely this must mean that no single moral justification of the limits of free speech will ever receive universal approval or acceptance. Nonetheless, a complete consensus may not be required in the first place. After all, no philosophical, moral or political justification or position will ever please everyone. And that, of course, isn't necessarily a bad thing.

Yours,
Paul Austin Murphy.


Friday 8 June 2018

My Letter to Philosophy Now - 'Heidegger's Ways of Being'


Dear Editor,

In the Philosophy Now piece, 'Heidegger's Ways of Being', Andrew Royle claims that Martin Heidegger offered us a “direct refutation of Rene Descartes' solitary introspection”. Is that really the case?

Descartes' “global scepticism” was an epistemological exercise. It had little - or nothing - to do with ontology. It was about how Descartes - or how we - could know, and then philosophically demonstrate, that (to use Andrew Royle's own words) “the world and other people actually exist”. It wasn't even that Descartes didn't believe that the world and other people existed. Descartes' enterprise was about his knowledge of other people's existence. Indeed Descartes' initial scepticism is also what's called “methodological scepticism” (or “methodological doubt”). That is, it was supposed to be a sure route to knowledge. It was a philosophical method which was designed to show us that knowledge of the world and other people is possible.

As for the Heideggerian grammar of the word 'I'.

Say, for argument's sake, that the use of the word 'I' also (as Royle puts it) “necessarily refers to... 'you' or an 'other'”. How did Descartes know that all the people he'd experienced weren't also the simulations of an “evil demon”? Thus such simulations (or mental distortions) might have also grounded Descartes' use of the word 'I'.

To put that another way. If the Matrix and “brain-in-a-vat” (Hilary Putnam) scenarios are possible, then it's equally possible that the simulations we have of other people may ground our use of the word 'I'. Indeed one can even say that a Heideggerian notion of “social Being” (or Dasein) can exist alongside Cartesian scepticism – indeed even if the Matrix and brain-in-a-vat scenarios are possible. (Putnam, of course, argues that his own scenario isn't possible – and for loosely Heideggerian reasons!)

As for solipsism. To quote Andrew Royle himself:


“Although Heidegger's argument works to abate Descartes' solipsism... Whilst the 'I' (or 'ego') was indubitably alone for Descartes...”

In everyday-life terms, Descartes would have left his doubts well behind after he'd solved (or thought he'd solved) the “sceptical problem”. (Just as Hume forgot his own scepticism when playing billiards.)

This means that Descartes most certainly wasn't a solipsist. (Though it can of course be said that he was a “methodological solipsist” for the duration of the Cogito.)

A genuine solipsist is someone who does indeed have an ontological position on what Royle calls the “I” or “ego”. What's more, a solipsist feels the reality of his solipsism throughout his life. (Or at least he does so for as long as he's a solipsist or thinks about his philosophical predicament.) Descartes, on the other hand, took a journey from his radical scepticism to a sure knowledge (or so he believed) of the world and other people. Now that's very far from being solipsism.

Not only that: solipsism has ontological and ethical implications. However, that isn't really the case when it comes to Descartes' scepticism. Having said that, it's indeed the case that certain political and sociological theorists have interpreted Descartes' scepticism as a 17th-century philosophical expression of “bourgeois individualism”. Yet even if that were true, Descartes never made this explicit. With Heidegger and solipsists, on the other hand, their ontological and ethical positions are indeed made explicit.

Yours,
Paul Austin Murphy.



Saturday 2 June 2018

James Ladyman on Structural Realism


In his Understanding Philosophy of Science, James Ladyman says that “structural realism” was “introduced” by the philosopher John Worrall.

This position - within the philosophy of science (though mainly within the philosophy of physics) - has it that structures are fundamental. What's more, structures are real (hence the word “realism”).

At the heart of structural realism is the idea that physics essentially deals with structures, not with “things” or entities. More importantly, it is these structures which are retained in physics; not the things which physics posits. That is, such structures can be passed on from an old theory to a new theory (i.e., when both theories are - ostensibly - about the same phenomenon or problem).

Thus structural realism is a realism about structure, not about things, conditions or “empirical content”. As Ladyman puts Worrall's position:

... we should not accept full blown scientific realism, which asserts that the nature of things is correctly described by the metaphysical and physical content of our best theories. Rather, we should adopt the structural realist emphasis on the mathematical or structural content of our theories.”

More relevantly:

Since there is (says Worrall) retention of structure across theory change, structural realism both (a) avoids the force of the pessimistic meta-induction (by not committing us to beliefs in the theory's description of the furniture of the world), and (b) does not make the success of science... seem miraculous...”

Thus structural realism has an argument against the well-known position of “pessimistic meta-induction”: i.e., such pessimism only applies to “empirical content” and “theory”, not to structure. That is, structure is retained “across theory change” and thus total inductive pessimism is unjustified.

So three questions arise here:

i) What is structure?
ii) Is structure really retained “across theory change”?
iii) And (this is related to i) above) how is structure distinguished from “empirical content” and “theory”?

Henri Poincaré's Structuralism

James Ladyman traces structural realism back to Max Planck and Henri Poincaré. For example, Ladyman quotes Planck stating the following:

... 'Thus the physical world has become progressively more and more abstract; purely formal mathematical operations play a growing part.'...”

Indeed Planck's position is one which most physicists would uphold; even if they wouldn't use Planck's precise wording. Of course this is simply an indirect acknowledgment that there would be no physics without “mathematical operations”; or, more broadly, without mathematics itself.

Thus, in a broad sense, the structural realism position is that it's all about the maths. Or, at the least, it's all about the mathematical structures noted by physicists.

Ladyman also tells us that Poincaré

talks of the redundant theories of the past capturing the 'true relations' between the 'real objects which Nature will hide for ever from our eyes'...”

So here Poincaré's response to the pessimistic meta-induction is to argue that “true relations” are retained from theory to theory; even if the things or phenomena mentioned in the theories aren't. As can also be seen, Poincaré uses different jargon to contemporary structural realists in that he talked of “true relations” rather than “structures”. Then again, structural realists also make extensive use of the word “relations”. After all, it's the structures of the physical world which account for these relations; and, in a sense, they're also constituted by such relations.

Nonetheless, Poincaré does seem to depart from contemporary structural realism when he talked about “real objects”. As can be shown, structural realists (especially ontic structural realists) dispense entirely with objects or things - “real” or otherwise. Or at least they believe that “every thing must go”. Despite that, since Poincaré qualifies his reference to “real objects” with the clause that “Nature will hide [these real objects] for ever from our eyes”, then it can be said that Poincaré – effectively - did indeed dispense with objects/things too. That is, when Poincaré used the phrase “for ever from our eyes”, presumably he wasn't only talking about literal visual (or observational) contact with real objects. He must have also meant any kind of contact with them – including (as it were) theoretical contact. Thus Poincaré's real objects were little more than Kantian noumena and therefore of little use in physics. Then again, since Poincaré was also a Kantian, noumena might well have had a role to play in his metaphysics and physics.

So was Poincaré a Kantian and a structural realist at one and the same time?

Just as Poincaré used the words “true relations” instead of the word “structure”, so the philosopher of science Howard Stein uses the word “Forms”. That is, Stein says (as quoted by Ladyman) that

our science comes closest to comprehending 'the real', not in its account of 'substances' and their kinds, but in its account of the 'Forms' which phenomena 'imitate' (for 'Forms' read 'theoretical structures', for 'imitate', 'are represented by')”.

Clearly Stein is attempting to tie contemporary structural realism to a long philosophical - and indeed Platonic - tradition. He does so with his use of the words “Forms” (with a Platonic capital 'F') and “imitate”. Then again, he also rejects the equally venerable (i.e., in the history of philosophy) “substances” and “kinds”.

Having said that, this very same passage can be read as expressing the position that Forms (or “theoretical structures”) are actually imitating (or “representing”) “substances and their kinds”. So, as with Kantian noumena, it's not that substances don't exist: it's that our only access to them is through theoretical structures: i.e., through the mathematical structures and models of physics. If this reading of Stein is correct, then that makes his position almost eliminativist. As with Kant's noumena, Poincaré's “real objects” and ontic structural realism's “things”, aren't Stein's “substances” also (to use Wittgenstein's words) “idle wheels in the mechanism”? What purpose do they serve? Do they serve as an abstract Kantian “grounding” or as a Lockean “I know not what”? Or, to quote Wittgenstein again, perhaps it's best to conclude: “Whereof one cannot speak, thereof one must be silent.”

Examples from Physics

Maxwell and Fresnel

James Ladyman cites John Worrall's example of the structural elements of Augustin-Jean Fresnel's theory (of light waves) passing over to James Clerk Maxwell's later theory. Ladyman quotes Worrall thus:

... 'There was an important element of continuity in the shift from Fresnel to Maxwell.'... ”

More relevantly, this

'was much more than a simple question of carrying over the successful empirical content into the new theory'...”

However, neither was it just a case of “carrying over the full theoretical content or full theoretical mechanisms”.

Thus, if it's not just a case of “empirical content” and “theoretical content” being “carried over”, then what else was also carried over? The answer to this is: structure. That is, Fresnel's theory shares a certain structure with Maxwell's later theory. Or as Worrall himself puts it:

... 'There was continuity or accumulation in the shift, but the continuity is one of form or structure, not of content.'...”

Clearly Worrall doesn't see Fresnel's and Maxwell's theories as only being (what's often called) “empirically equivalent”. He states that it's not (only) “empirical contents” which are passed on. That must mean that the two theories are also theoretically “under-determined” by the empirical content.

This means that Fresnel's and Maxwell's theories are neither empirically nor theoretically identical. So does that mean that these two theories are structurally (or formally) identical instead? Worrall may not believe in complete structural identity between these two (or any) separate theories. However, he clearly does believe that structural identity is more important (or more substantive) than any empirical or theoretical identity.

Thus it follows from all this that we'll now need to know how it is, precisely, that structure is distinguished from both empirical and theoretical content when it comes to the theories of Fresnel and Maxwell – and indeed when it comes to any comparative theories in physics.

Newton & Quantum Mechanics

Worrall also attributes a structuralist position (if not an explicit acceptance of structuralism) to Isaac Newton. Worrall describes Newton's structuralist reality (if not his position) in the following manner:

... 'On the structural realist view, what Newton really discovered are the relationships between phenomena expressed in the mathematical equations of his theory.'... ”

In certain respects, this is certainly true. For example, it's often and justifiably stated that quantum mechanics wouldn't so much as exist without its mathematical descriptions and predictions. John Horgan, for one, states that

mathematics helps physicists define what is otherwise undefinable. A quark is a purely mathematical construct. It has no meaning apart from its mathematical definition. The properties of quarks – charm, colour, strangeness – are mathematical properties that have no analogue in the macroscopic world we inhabit.”

Isn't all the above just as true of much of Newton's work? However, it's certainly the case that Newton wasn't an eliminativist when it came to things/objects (or when it came to Poincaré's “real objects”). Despite that, it was still “mathematical equations” which captured the things or phenomena Newton was accounting for in his theories.

The question (as with quantum mechanics) is:

Is there any remainder after the mathematics (or mathematical structure) is taken away?

What's left? Kantian noumena or, well, literally nothing? Of course it's hard to defend an eliminativist position when it comes to Newton and the concrete things he was talking about (e.g., stars, the moon, etc.). However, eliminativism seems much more appealing and justified when it comes to the micro world of quantum mechanics. In this realm, everything really does seem to be mathematical. Quite simply, there are no genuine equivalents to the moon, stars and even gravity (at least Newtonian gravity) in the quantum realm. In other words, our only access to the micro world is through mathematics. Clearly, that can't also be said of the world as described by Newton.

Friday 18 May 2018

Searle & Patricia Churchland on Dualism in AI and Cognitive Science




John Searle accuses many of those who accuse him of being a “dualist” of being... dualists. Bearing in mind the philosophical ideas discussed in the following, his stance isn't a surprise.

Searle's basic position on this is:

i) If Strong AI proponents, computationalists or functionalists, etc. ignore or play down the physical biology of brains; and, instead, focus exclusively on syntax, computations and functions (the form/role rather than the physical embodiment),
ii) then that will surely lead to some kind of dualism in which non-physical abstractions basically play the role of Descartes' non-physical and “non-extended” mind.

Or to use Searle's own words:

"If mental operations consist of computational operations on formal symbols, it follows that they have no interesting connection with the brain, and the only connection would be that the brain just happens to be one of the indefinitely many types of machines capable of instantiating the program. This form of dualism is not the traditional Cartesian variety that claims that there are two sorts of substances, but it is Cartesian in the sense that it insists that what is specifically mental about the brain has no intrinsic connection with the actual properties of the brain. This underlying dualism is masked from us by the fact that AI literature contains frequent fulminations against 'dualism'.”

Searle is noting the radical disjunction created between the actual physical reality of biological brains and how these philosophers and scientists explain and account for mind, consciousness and understanding.

So Searle doesn't believe that only biological brains can give rise to minds, consciousness and understanding. Searle's position is that, at present, only biological brains do give rise to minds, consciousness and understanding. Searle is emphasising an empirical fact; though he's not denying the logical and metaphysical possibility that other things can bring forth mind, consciousness and understanding.

Searle is arguing that the biological brain is played down or even ignored by those in AI, cognitive science generally and many in the philosophy of mind. And when put that bluntly, it seems like an almost perfect description of dualism. Or, at the very least, it seems like a stance (or position) which would help advance a (non-Cartesian) dualist philosophy of mind.

Yet because those people just referred to (who're involved in artificial intelligence, cognitive science generally and the philosophy of mind) aren't committed to what used to be called a “Cartesian ego” (they don't even mention it), then the charge of “dualism” seems – superficially! - to be unwarranted. However, someone can be a dualist without being a Cartesian dualist. Or, more accurately, someone can be a dualist without that someone positing some kind of non-material substance formerly known as the Cartesian ego. However, just as the Cartesian ego is non-material, non-extended (or non-spatial) and perhaps also abstract; so too are the computations and the (as Searle puts it) “computational operations on formal symbols” which are much loved by those involved in AI, cognitive science and whatnot.

Churchland on Functionalism as Dualism

Unlike Searle, Patricia Churchland doesn't actually use the word “dualist” for her opponents; though she does say the following:

Many philosophers who are materialists to the extent that they doubt the existence of soul-stuff nonetheless believe that psychology ought to be essentially autonomous from neuroscience, and that neuroscience will not contribute significantly to our understanding of perception, language use, thinking, problem solving, and (more generally) cognition.”

Put in Churchland's way, it seems like an extreme position. Basically, how could “materialists” (when it comes to the philosophy of mind and psychology) possibly ignore the brain?

It's one thing to say that “psychology is distinct from neuroscience”. It's another thing to say that psychology is “autonomous from neuroscience” and that “neuroscience will not contribute significantly to our understanding” of cognition.

Sure, the division of labour idea is a good thing. However, to see the “autonomous” in “autonomous science” as being about complete and total independence is surely a bad idea. In fact it's almost like a physicist stressing the independence of physics from mathematics.

Churchland thinks that biology matters. In this she has the support of many others. 

For example, the Nobel laureate Gerald Edelman says that the mind

can only be understood from a biological standpoint, not through physics or computer science or other approaches that ignore the structure of the brain”.

In addition, you perhaps wouldn't ordinarily see Patricia Churchland and John Searle as being bedfellows; though in this issue they are. So it's worth quoting a long passage from Searle which neatly sums up some of the problems with non-biological theories of mind. He writes:

I believe we are now at a point where we can address this problem as a biological problem [of consciousness] like any other. For decades research has been impeded by two mistaken views: first, that consciousness is just a special sort of computer program, a special software in the hardware of the brain; and second that consciousness was just a matter of information processing. The right sort of information processing - or on some views any sort of information processing - would be sufficient to guarantee consciousness... it is important to remind ourselves how profoundly anti-biological these views are. On these views brains do not really matter. We just happen to be implemented in brains, but any hardware that could carry the program or process the information would do just as well. I believe, on the contrary, that understanding the nature of consciousness crucially requires understanding how brain processes cause and realize consciousness.”

In a sense, then, if one says that biology matters, one is also saying that functions aren't everything (though not that functions are nothing). Indeed Churchland takes this position to its logical conclusion when she more or less argues that in order to build an artificial brain one would not only need to replicate its functions: one would also need to replicate everything physical about it.

Here again she has the backup of Searle. He writes:

Perhaps when we understand how brains do that, we can build conscious artifacts using some non-biological materials that duplicate, and not merely simulate, the causal powers that brains have. But first we need to understand how brains do it.”

Of course it can now be said that we can have an artificial mind without having an artificial brain. Nonetheless, isn't it precisely this position which many dispute (perhaps Churchland does too)?

In any case, to use Churchland's own words on this subject, she says that

it may be that if we had a complete cognitive neurobiology we would find that to build a computer with the same capacities as the human brain, we had to use as structural elements things that behaved very like neurons”.

Churchland continues by saying that

the artificial units would have to have both action potentials and graded potentials, and a full repertoire of synaptic modifiability, dendritic growth, and so forth”.

It gets even less promising for functionalism when Churchland says that

for all we know now, to mimic nervous plasticity efficiently, we might have to mimic very closely even certain subcellular structures”.

Put that way, Churchland makes it sound as if an artificial mind (if not artificial intelligence) is still a pipe-dream.

Readers may also have noted that Churchland was only talking about the biology of neurons, not the biology of the brain as a whole. However, wouldn't the replication of the brain (as a whole) make this whole artificial-mind endeavor even more complex and difficult?

In any case, Churchland sums up this immense problem by saying that

“we simply do not know at what level of organisation one can assume that the physical implementation can vary but the capacities will remain the same”.

That's an argument which says that it's wrong to accept the implementation-function “binary opposition” (to use a phrase from Jacques Derrida) in the first place. Though that's not to say - and Churchland doesn't say - that it's wrong to concentrate on functions or cognition generally. It's just wrong to completely ignore the “physical implementation”. Or, as Churchland says at the beginning of one paper, it's wrong to “ignore neuroscience” and focus entirely on function.

Churchland puts the icing on the cake herself by stressing function. Or, more correctly, she stresses the functional levels which are often ignored by functionalists.

Take the cell or neuron. Churchland writes that

“even at the level of cellular research, one can view the cell as being the functional unit with a certain input-output profile, as having a specifiable dynamics, and as having a structural implementation in certain proteins and other subcellular structures”.

Basically, what's being said here is that in many ways what happens at the macro level of the mind-brain (in terms of inputs and outputs) also has an analogue at the cellular level. In other words, functionalists are concentrating on the higher levels at the expense of the lower levels.

Another way of putting this is to say what Churchland herself argues: that neuroscientists aren't ignoring functions at all. They are, instead, tackling biological functions, rather than abstract cognitive functions.


Thursday 17 May 2018

Searle on Artificial Intelligence (AI) and the Brain's Causal Powers



John Searle's position on artificial consciousness and artificial understanding is primarily based on what he calls the “causal powers” of the biological (human) brain.

This is Searle himself on this subject:

“Some people suppose that I am claiming that it is in principle impossible for silicon chips to duplicate the causal powers of the brain. That is not my argument... It is a factual question, not to be settled on purely philosophical or a priori grounds, whether or not the causal powers of neurons can be duplicated in some other material, such as silicon chips, vacuum tubes, transistors, beer cans, or some quite unknown chemical substances. The point of my argument is that you cannot duplicate the causal powers of the brain solely in virtue of instantiating a computer program, because the computer program has to be defined purely formally.”

[This passage can be found in Searle's paper 'Minds and Brains Without Programs' – not to be confused with his well-known 'Minds, Brains and Programs'.]

It does seem quite incredible that a fair few of those involved in artificial intelligence (AI) and cognitive science generally completely downplay - or even ignore - brains and biology. This is especially the case when one bears in mind that biological brains are the only things - at present! - which display consciousness and experience. Nonetheless, when it comes to intelligence, it can be said that computers already do indeed display intelligence or even that they are intelligent. (Though this too is rejected by many people.)

So Searle simply believes that there has to be some kind of strong link between biology and mind, consciousness and understanding, in the simple sense that - at this moment in time - only biological systems have minds, consciousness and understanding. (Of course the question “How do we know this?” can always be asked here.) Thus there's also a strong link between biological brains and complex (to use Searle's words) “causal powers”. However, this doesn't automatically mean that mind, consciousness and understanding must necessarily be tied to biology. It just means that, as things stand, they are so tied. And that tight link between biology and consciousness, mind and understanding is itself linked to the requisite complex causal powers which are - so far! - instantiated only by complex biological brains.

Thus Searle's talk about causal powers refers to the argument that a certain level of complexity is required to bring about the causal powers which are required for mind, consciousness and understanding (as well as for “intentionality” and semantics, in Searle's writings).

To repeat: Searle never argues that biological brains are the only things capable - in principle - of bringing about minds, consciousness and understanding. He says only that biological brains are the only things we know of which are complex enough to do so. That means that it really is all about the biological, physical and causal complexity of brains.

Causal Powers?

The problem here is to determine what exactly Searle means by the words “causal powers”. We also need to know about the precise relation between such causal powers and consciousness, understanding and, indeed, intelligence.

One hint is that the brain's causal powers seem to be over and above what is computable and/or programmable. Alternatively, perhaps the argument is just that, at the present moment in time (in terms of technology), these complex causal powers are not programmable or computable.

Indeed at the end of the quote above, Searle moves on from talking about causal powers to a hint at his Chinese Room argument/s. So to repeat that passage:

“The point of my argument is that you cannot duplicate the causal powers of the brain solely in virtue of instantiating a computer program, because the computer program has to be defined purely formally.”

The argument here is that something physical is required in addition to an abstract “computer program” and the computations/algorithms/rules/etc. contained within it. And that something physical also happens to be biological - i.e., the brain. In other words, computer programmes are (in Searle's words) “defined purely formally”. Brains, on the other hand, are both biological and physical. Thus even if programmes or computations capture the “form” or syntax (as it were) of a brain's computations and even of its physical structure/s, they still don't capture its biological physicality. That is, they don't replicate the brain's causal powers.

However, if programmes were instantiated in non-biological physical constructions, then we could - at least in principle - replicate (or capture) both the form/syntax/computational nature of the biological brain and also its causal powers. It's just that no physical computer (which runs abstract programmes) at present replicates (or captures) the complex causal powers of the biological brain.