Sunday 31 March 2024

Margaret Boden on Qualia and Artificial Intelligence

 



(i) Introduction
(ii) Paul Churchland on Qualia
(iii) Colin McGinn and David Chalmers on Qualia
(iv) Aaron Sloman on Qualia
(v) Are Qualia Ineffable, Private, and Yet Causally Salient?
(vi) Dennett on Verbal Reports About Qualia


See my ‘Margaret Boden on Artificial Intelligence (AI) and Consciousness’ for a short introduction to both Margaret Boden herself, and her book AI: Its Nature and Future.

When it comes to qualia and artificial intelligence (AI), Boden discusses the ideas and theories of Paul Churchland and Aaron Sloman.

So let’s firstly deal with the Canadian philosopher Paul Churchland.

Paul Churchland on Qualia

Patricia and Paul Churchland

At first, Margaret Boden presents Paul Churchland as an identity theorist (see ‘Identity Theory’), rather than as an eliminative materialist (see ‘Eliminative materialism’). In Boden’s own words:

“For Churchland, this isn’t a matter of mind-brain correlation: to have an experience of taste simply is to have one’s brain visit a particular point in that abstractly defined sensory space.”

This means that Churchland doesn’t offer us those “mere correlations” which anti-physicalists and others sniff at.

Indeed, if you only stress correlations, then (arguably) you’ll always need to deal with David Chalmers’ “hard problem”. After all, if brain state X is (always?) correlated with the bitter taste of a lemon, then Chalmers and others can always ask the following question:

Why does brain state X give rise to the bitter taste of lemon (even if that taste is correlated with that brain state)?

In any case, Churchland isn’t an eliminativist about qualia: he’s actually an eliminativist about propositional attitudes. [See Churchland’s ‘Eliminative Materialism and the Propositional Attitudes’, which was published way back in 1981.]

To clarify. Churchland, as a materialist, isn’t out to eliminate qualia: he’s out to identify qualia with (to use Boden’s words) “particular point[s] in that abstractly defined sensory space”.

So Churchland doesn’t deny (or reject) qualia: he simply offers us his own account of them. [See note 1 on Dennett’s own account of consciousness.]

In Boden’s own words:

“[Paul Churchland] offers a scientific theory — part computational (connectionist), part neurological — defining a four-dimensional ‘taste-space,’ which systematically maps subjective discrimination (qualia) of taste onto specific neural structures. The four dimensions reflect the four types of taste receptor on the tongue.”

Of course, some anti-physicalists, all dualists and others believe that qualia cannot possibly be (specific or otherwise) “neural structures”. That’s simply because they don’t deem qualia to be physical in nature at all. Indeed, even some physicalists don’t believe that a token or type neural structure and a token or type quale can be one and the same thing.

Identity theorists (old and new), on the other hand, do believe that they are one and the same thing.
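As an aside, Churchland’s proposal is easier to picture in quasi-programmatic terms. The following is a minimal toy sketch (mine, not Churchland’s actual model — the receptor labels and every number are purely illustrative) of the idea that a taste experience just is a point in a four-dimensional space, one axis per receptor type, with similar tastes lying close together:

```python
from dataclasses import dataclass
import math

@dataclass(frozen=True)
class TastePoint:
    """A point in a four-dimensional 'taste-space': one axis per receptor type."""
    sweet: float
    sour: float
    salty: float
    bitter: float

    def distance(self, other: "TastePoint") -> float:
        """Euclidean distance: nearby points = subjectively similar tastes."""
        return math.dist(
            (self.sweet, self.sour, self.salty, self.bitter),
            (other.sweet, other.sour, other.salty, other.bitter),
        )

# On the identity-theoretic reading, tasting a lemon *is* the brain's
# visiting a point like this -- not something merely correlated with it.
lemon = TastePoint(sweet=0.2, sour=0.9, salty=0.1, bitter=0.4)
lime = TastePoint(sweet=0.1, sour=0.95, salty=0.1, bitter=0.5)
honey = TastePoint(sweet=0.95, sour=0.05, salty=0.0, bitter=0.0)

print(lemon.distance(lime))   # small: similar qualia
print(lemon.distance(honey))  # large: very different qualia
```

On this picture, there’s nothing left over for a “mere correlation” to hold between: the experience simply is the location.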

More clearly. To some anti-physicalists, and to all dualists, the idea that

“all phenomenal consciousness is simply the brain’s being at a particular location in some empirically discoverable hyperspace”

is actually to eliminate qualia completely — if only according to their own definition. So perhaps such people could state the following:

Brain states are brain states. Qualia are qualia.

Or in Boden’s own terminology:

Particular locations in some empirically discoverable hyperspace are particular locations in some empirically discoverable hyperspace. Qualia are qualia in no particular location.

… But it’s not just mere correlations which physicalists need to deal with.

Colin McGinn and David Chalmers on Qualia

Colin McGinn

Margaret Boden also mentions (if only in passing) the British philosopher Colin McGinn and his own stance on the “causal link” between qualia and “the brain”. According to Boden, McGinn

“argued that humans are constitutionally incapable of understanding”

that link.

Yet perhaps this too is just another variation on the mere-correlations theme.

After all, and like most of the critics of physicalism, McGinn accepts that the correlations exist, and even that they’re relevant and important. However, how can we (philosophically) understand those correlations? How can we understand — or explain — the causal (or otherwise) link between a bit of the brain (or even the brain taken in toto) and, say, the quale we experience when drinking bitter lemon juice?

Boden covers this precise issue again elsewhere in her AI: Its Nature and Future. (This time, when discussing the position of John Searle.) In line with McGinn, Boden states that

“qualia being caused by neuroprotein is no less counter-intuitive, no less philosophically problematic”

than stating that

“computers could really experience blueness or pain, or really understand language”.

Now what about the Australian philosopher David Chalmers?

Even if Chalmers (like McGinn) is happy to accept that qualia are mapped onto specific neural structures, that still leaves his “hard problem of consciousness” untouched. In other words:

Why does neural structure X lead to, say, the specific bitter taste of a lemon?

Moreover: Why don’t lemons taste like dog shit or like nothing at all?

But hang on!

All this depends on what we take qualia to be in the first place. (See later section.)

Now for the philosopher and AI researcher Aaron Sloman, and his (as it were) AI account of qualia.

Aaron Sloman on Qualia

Like Paul Churchland earlier, Aaron Sloman (whom Boden refers to many times in her book) doesn’t eliminate qualia either. Far from it.

Basically, Sloman sees qualia as being “hosted” by the brain. Thus, there’s clearly no elimination of qualia here.

Indeed, according to Sloman (if via Boden), qualia don’t even require a biological brain!

It seems, then, that Sloman’s way of looking at things could also be adopted by idealists, dualists, and even by Platonists…

Platonists?

Take what the physicist and mathematician Roger Penrose argued on this subject.

Penrose discussed what he called a “qualium” (his own singular of ‘qualia’; the standard singular is ‘quale’), and its relation to the brain.

Penrose wrote:

“Such an implementation [of an algorithm] would, according to the proponents of such a suggestion, have to evoke the actual experience of the intended qualium.”

If the precise hardware doesn’t at all matter, then only the given algorithm (or the virtual machine) matters. Of course, the algorithm (or virtual machine) would need to be implemented… in something.

Yet this may not at all be the case if we follow this AI position to its logical conclusion.

Are Qualia Ineffable, Private, and Yet Causally Salient?

As already hinted at, Sloman’s position on qualia and consciousness may be somewhat appealing to some anti-physicalists and anti-reductionists, in that he argues that we can’t (in Boden’s words) “identif[y] qualia with brain processes”. What’s more, consciousness and qualia “can’t be defined in the language of physical descriptions”.

Yet, despite all that, qualia still have “causal effects”.

So Sloman’s account of qualia (if accurately presented by Boden) is odd in that, on the surface at least, it perfectly squares with the (as it were) traditional account of qualia. Yet, at the same time, it’s also bang up-to-date in terms of its scientific references.

Why use the words “traditional account of qualia” here?

Sloman believes that qualia are ineffable and private, yet also of causal relevance when it comes to human subjects.

In terms of Sloman’s position on ineffable qualia, Boden writes:

“Moreover, they cannot always be described — by higher, self-monitoring, levels of the mind — in verbal terms. (Hence their ineffability.)”

In terms of privacy, Boden continues:

“They can be accessed only by some other parts of the particular virtual machine concerned, and don’t necessarily have any behavioural expression. (Hence their privacy.)”

Finally, in terms of causality, Boden finishes off with the following words:

“They can have causal effects on behavior (e.g. involuntary facial expression) and/or on other aspects of the mind’s information processing.”

Most (if not all) of the above seems to go against Daniel Dennett’s case against qualia. What’s more, the addition of up-to-date scientific jargon (such as “computational states”, “information processing”, “virtual machines”, etc.) doesn’t seem to make much difference to that.

Privacy

In terms of privacy again.

In broad terms, mental privacy has always been problematic in philosophy. However, perhaps Sloman’s way around this is to bring on board (as Boden puts it) “other parts of the particular virtual machine”.

In Boden’s words, qualia

“can be accessed only by some other parts of the particular virtual machine concerned, and don’t necessarily have any behavioural expression”.

Specifically, qualia

“can be accessed only by some other parts of the particular virtual machine concerned”.

Thus, it may seem that we don’t have the old-style privacy here in which a human subject is the sole (as it were) owner of whatever it is that’s going on inside his mind and brain. Instead, we have different “parts” of a “machine”.

Yet it’s still the same (or singular) virtual machine that’s doing the accessing — even if it does have parts.

So does privacy still rule okay?

Sloman (at least as presented by Boden) has the same position on privacy as the philosopher Owen Flanagan. That is, Flanagan accepts that (mental) privacy exists. However, he doesn’t see it as being such a big deal for naturalism. (At least it’s not a big deal to Flanagan himself, as well as to other naturalists.)

Epiphenomenalism

One important point stressed by Sloman (if in Boden’s words) is that qualia “don’t necessarily have any behavioural expression”. (Boden adds: “Hence their privacy.”)

In this passage we have an account of qualia’s causal effects, as well as a hint that they can also be epiphenomenal.

However!

A particular quale may not be behaviourally expressed, and still not be epiphenomenal. After all, this reference to “behavioural expression” is usually a reference to the subject verbally reporting his quale — or even just physically reacting to it. However, the quale may still be causally relevant even without such (as it were) outward signs.

Despite the seemingly Cartesian account of qualia’s privacy, ineffability, and non-necessary link to behaviour, Sloman still believes that qualia

“can have causal effects on behavior (e.g. involuntary facial expression) and/or on other aspects of the mind’s information processing”.

This seems obviously true.

If someone tastes a bitter lemon and grimaces, then clearly the bitter quale of a lemon has had a “causal effect[] on behavior”…

But is it a philosophically-conceived quale that we’re talking about here?

It depends.

However, let’s return to the statement that qualia “don’t necessarily have any behavioural expression”. This possibility would definitely go against Daniel Dennett’s stance on this matter.

Dennett on Verbal Reports About Qualia

If qualia don’t “have any behavioural expression”, then they constitute (to quote Wittgenstein) “a wheel that can be turned though nothing else moves with it”. (This position also goes against Dennett’s heterophenomenological stance on all qualia-talk.)

So can we also ask the following question here? —

If qualia aren’t behaviourally expressed, then how do we know that they exist at all?

Sure, a human subject may verbally report that his qualia exist. However, how would other human subjects know that his qualia exist? Or, less strongly, how would other subjects know that his qualia have the qualities and characteristics which he says they have?

All this leads on to a fairly extensive section of Boden’s book in which she tackles Daniel Dennett’s position on this issue.

Boden actually quotes one of Dennett’s own dialogues between himself and someone called Otto. The following is part of Boden’s own extract from that dialogue:

“[Otto] Look. I don’t just mean it. I don’t just think there seems to be a pinkish glowing ring; there really seems to be a pinkish glowing ring!
“[Dennett] Now you’ve done it. You’ve fallen in a trap, along with a lot of others. You seem to think there’s a difference between thinking (judging, being of the firm opinion that) something seems pink to you and something really seeming pink to you. But there is no difference. There is no such phenomenon as really seeming — over and above the phenomenon of judging in one way or another that something is the case.”

This is essentially a position both against the ineffability of qualia, and in support of Dennett’s heterophenomenology.

Basically, Otto can’t rely on his own experience of qualia to defend their reality or existence.

So all we have are Otto’s “judgements” about, in this particular case, a particular quale.

Firstly, Otto claims to experience a “pinkish glowing ring”. To Dennett, this is fair enough. That is, he doesn’t reject that claim out of hand (or as it stands).

However, what of the quale itself?

Is it real?

Does it exist?

What qualities and characteristics does it have?

Well, Otto says that this quale is real because he experiences it. More specifically, Otto states that there “seems to be a pinkish glowing ring”.

This means that Otto has moved on from this quale’s existence (or reality), to referring to his own personal (as it were) seemings.

So these seemings are surely real?

Dennett questions this too.

In response, Otto ups the ante by stating that he doesn’t “just mean it” about the seeming. He continues:

“I don’t just think there seems to be a pinkish glowing ring; there really seems to be a pinkish glowing ring!”

So, again, we’ve moved from an implicit claim that the quale is real or exists, to the seeming-of-a-pinkish-glow being real or existing.

Again, forget the quale itself: what about the seeming-of-a-pinkish-glow?

Dennett’s main point is that all he has to go on (all we have to go on) are (at first) Otto’s judgments about the pinkish glow, and then, secondly, his judgments about the seemings-of-a-pinkish-glow.

Thus, we never get to either the quale or the seeming itself — only to the “verbal reports” about both.

However, is that a reason to deny reality or existence to (as it were) what’s behind the verbal reports?…

But what’s behind the verbal reports?

Are we in a loop of judgments here?


Note:

(1) This is parallel to Daniel Dennett’s position on consciousness. It’s not that he believes that “consciousness should be explained away”. It’s that his account of consciousness is at odds with at least some mainstream (as well as philosophical) accounts of it.





Thursday 28 March 2024

Philosophy: My Posts (or Tweets) on X (7)

 


(i) Should We Trust Physicists?
(ii) Analytic Philosophy Is…
(iii) Carlos Fernandes Shouts About Sam Harris and Free Will
(iv) The infinite… what!?
(v) Marijuana and Alcohol…


Should We Trust Physicists?

There’s an element of truth to the meme above. Personally, I feel like I’m encroaching on sacred territory when I comment on physics — especially on string theory. Put simply, I don’t know the maths. Thus, I must rely on what philosophers call testimony.

Not that any single testimony — even large scale — could ever be decisive when it comes to a layperson accepting a scientific idea or theory. After all, the Set of Physicists that is relied upon has members who often disagree with each other.

The other thing about this meme is that particle physicists themselves say that other “particle physicists are wrong”. And some physicists say that “string theorists are wrong”. Indeed, some string theorists say that other string theorists are wrong — at least on details.

Apart from all that, I don’t believe “physics influencers” influence physicists.

Analytic Philosophy Is…

Is analytic philosophy a Platonic Form?

Do all analytic philosophers do what John Gregg says that (other people?) say that they do?

As it is, analytic philosophers tend to take many different positions on many different subjects.

That said, I’m not sure I even understand John Gregg’s words. Are they an expression of his literary skills?

So if you take away John Gregg’s literary flamboyance, is there any actual argument left underneath?

Shorter

“computationalism is really popular among science-oriented people who don’t care much for philosophy”

frances kafka

Well, since the 1960s, many philosophers have embraced “computationalism”. It also became a major part of the philosophy of mind. (Check out what the American philosopher John Searle has had to say on this subject.)

“and think metaphysics is a waste of time, but computationalism itself’s a type of Hegelian idealism”

What?!

Carlos Fernandes Shouts About Sam Harris and Free Will

The notion of “free will” isn’t really a part of neuroscience. It’s a philosophical notion. Citations of neuroscience may help advance a philosophical argument. However, talk of “free will” itself isn’t really a part of neuroscience…

What are “pseudo-intellectual morons” anyway?

Are they people who dare to disagree with Carlos Fernandes?

Do people become pseudo the moment they articulate views Fernandes doesn’t agree with?

By the way, I don’t usually defend or attack the notion of free will. That’s because it entirely depends on how the words “free will” are being used in the specific debate, and by specific disputants. Also, I don’t spend much time on this ancient subject anyway…

But what I do know is that Fernandes’s rhetoric is best suited to the mindless political “debates” one often finds on X. In other words, the debate won’t move in any direction if all the people involved in it shout and display their emotions, just as Fernandes does.

Shorter

Should readers on X comment on tweets without actually reading the essays/papers/articles linked in those tweets?

I was tempted to respond to this one. But it just seems pointless without reading the paper.

So this must solely be an advert for her paper — if in the form of a tweet/post on X. Perhaps this academic wouldn’t deny that. I do the same thing.

The infinite… what!?

It may seem rhetorical, but I must be honest… I have absolutely no idea what any of that post/tweet means. Perhaps this is simply the problem we all face when we come across philosophical prose from a subsection of philosophy we’re unfamiliar with.

So we need to be careful not to be dismissive in a kneejerk manner. That said, I’m familiar with Kant, etc., and I still don’t understand it.

The passage/tweet comes across as some kind of “spiritual” set of pronouncements delivered in the prose style of a French poststructuralist… on six acid tabs.

Can anyone help me out here?

Perhaps I’m simply dumb.

Either that, or not spiritual enough to get it.

(*) Are the words “The infinite ‘I am’” from David Bowie’s song ‘Blackstar’?

Marijuana and Alcohol…

I’m not sure that the single sentence “Why should marijuana be illegal if alcohol is legal?” is meant to be a full self-sustained argument. At least, I don’t know many people who’d stick to a single statement on this controversial issue.

Various long and short arguments have been given for this position. Some of these arguments include data, analysis, interviews, studies, etc.

I also doubt that all the positions are “abstract” and “utilitarian” when it comes to what was stated in this tweet. Actually, I can imagine all sorts of takes on this — utilitarianism being just one of them.

“Political feasibility”?

Is that a hint at the simple fact that alcohol is now legal, whereas marijuana isn’t? Thus, it wouldn’t be politically feasible to make another (dangerous?) drug legal?

Why should people who smoke cannabis accept that?

So why not make alcohol illegal, and cannabis legal?…

That’s a joke. Well, it’s partly a joke.

And, of course, it may not be politically feasible.

A lot of good and bad things are deemed not to be politically feasible…

Shorter

So the human brain is a “quantum brain”. (This is an actual phrase which has been used many times.) There are also quantum cups, quantum trees, quantum genitals, quantum books on “quantum weirdness”, etc. etc. etc.

Of course, an object being constructed and run by human beings according to quantum logic and principles is very different to inanimate and animate objects being simply… quantum (i.e., regardless of human beings).


My X account can be found here.




Tuesday 26 March 2024

Philosophy: My Posts (or Tweets) on X (6)

 


(i) Does Theory Rule OK in the Philosophy of Mind?
(ii) “Everything is Mathematical.” So what!
(iii) The Essays I Should Have Written
(iv) Which Philosophers Must We Read?
(v) I Feel Free!
(vi) Is the Ethical Mystifying?

Sunday 24 March 2024

Margaret Boden on Artificial Intelligence (AI) and Consciousness

The following essay is a commentary on — and reaction to — Margaret Boden’s book AI: Its Nature and Future. In this book Boden discusses various technological, scientific and philosophical issues raised by artificial intelligence. Specifically, she tackles the issue of whether programs could ever be “genuinely intelligent, creative or even conscious”.

This essay isn’t a book review.

Part One:

(i) Introduction
(ii) Margaret Boden on Consciousness… and Zombies
(iii) Non-Physicalism, Brains, and Virtual Machines

Part Two:

(iv) Patricia Churchland on Why Brains Matter
(v) John Searle on Why Brains Matter
(vi) Roger Penrose on Why Brains Matter

As for my essay, I’ll be focussing specifically on what Boden has to say on consciousness. However, I’ll be placing this subject within the context of artificial intelligence (AI), which forms the basis of Boden’s own book.


Margaret Boden in the 1980s.

Margaret A. Boden was born in 1936. She’s a Research Professor of Cognitive Science who’s worked in the fields of artificial intelligence, psychology, cognitive science, computer science, and philosophy.

She was awarded a PhD in social psychology (her specialism was cognitive studies) by Harvard in 1968. Boden became a Fellow of the British Academy in 1983, and served as its vice-president from 1989 to 1991. In 2001, Boden was appointed an OBE for her services in the field of cognitive science.


Part One

Margaret Boden on Consciousness… and Zombies

Another book by Boden, published in 1977.

Given that Margaret Boden moves on to discuss consciousness in her book AI: Its Nature and Future (i.e., pages 120 to 146), it was almost inevitable that she’d also discuss zombies. (In the literature, philosophical zombies.)

Boden tells us that for the American philosopher Daniel Dennett and the philosopher and researcher (i.e., on artificial intelligence) Aaron Sloman (whom Boden mentions a fair few times in her book), “the concept of zombie is incoherent”.

[See note 1 on the words “incoherent” and “meaningless”.]

Why is that?

Boden continued:

“Given the appropriate behaviour and/or virtual machine, consciousness — for Sloman, even including qualia — is guaranteed. The Turing Test is therefore saved from the objection that it might be ‘passed’ by a zombie.”

Many would deem this to be an account of a very crude type of behaviourism. That’s even if, in this instance, it’s being applied to a zombie, and also includes a reference to a “virtual machine”.

In detail.

To argue that

“[g]iven the appropriate behaviour and/or virtual machine, consciousness [] is guaranteed”

is surely a behaviourist position. [See note 2.]

Is such “appropriate behaviour” or the relevant “virtual machine” what consciousness is?

Alternatively, does such appropriate behaviour or the relevant virtual machine bring about consciousness?

It will be seen later that the virtual machines must be implemented in physical media — despite their (as it were) virtuality. As for behaviour (including verbal behaviour) being what consciousness is, that seems almost impossible to accept.

Is it, then, that a virtual machine can manifest (or instantiate) consciousness — regardless of behaviour and physical implementation? In other words, is consciousness simply something that some virtual machines could (as it were) have? And if some virtual machines could have consciousness, then they could also display certain types of behaviour (including speech).

Boden’s argument is that virtual machines are (not can be) physically implemented in brains, as well as in much else.

In the context of consciousness and behaviour, Boden also mentions the Turing Test.

The position she presents can be summed up in the following way:

If a machine behaves (or acts) as if it is intelligent, then it is intelligent.

Or more in tune with the Turing Test itself:

If a machine answers the requisite number of questions correctly, in the requisite amount of time, then it is intelligent.

… But what has all this to do with consciousness?

Sure, this questioning-and-answering may well show us that a machine (or zombie?) is intelligent, but Boden also uses the words

“[g]iven the appropriate behaviour and/or virtual machine, consciousness [] is guaranteed”

in her book.

That’s because if this kind of behaviour is manifested, then consciousness must also be manifested. Therefore, this stance on consciousness becomes true by definition. That is:

(i) Behaviour x is always a manifestation (or instantiation) of consciousness.
(ii) Machine A (or zombie B) behaves in way x.
(iii) Therefore, machine A (or zombie B) must manifest (or instantiate) consciousness.

Non-Physicalism, Brains, and Virtual Machines

Margaret Boden mentions Aaron Sloman again. This time in reference to consciousness and qualia.

Aaron Sloman’s position (at least as presented by Boden) may be appealing to some anti-physicalists and anti-reductionists in that he argues that you can’t “identif[y] qualia with brain processes”. What’s more, consciousness and qualia “can’t be defined in the language of physical descriptions”. Yet, despite all that, qualia still “have causal effects”.

This position is still a kind of reductionism in that qualia are reduced to “computational states”. It just so happens that these computational states must also be “implemented in some underlying physical mechanism”.

A hint of this necessary requirement for physical implementation (to be discussed later) can be seen in the following passage from Boden:

“For computational states are aspects of virtual machines: they can’t be defined in the language of physical descriptions. But they can exist, and have causal effects, only when implemented in some underlying physical mechanism.”

Admittedly, to some readers it may seem obvious that physical implementation is required. However, to those AI theorists with a more Pythagorean or even dualist disposition (see later section), it may not be obvious. Alternatively, it may simply need stating to them.

So is this token physicalism (or even token reductionism) in that a virtual “computational state” always requires implementation in a physical mechanism? Again, it just so happens that there’s no single physical medium (so such AI theorists believe) that’s required to bring about consciousness.

Yet consciousness is still brought about via physical implementation.

So it doesn’t really seem to matter that virtual machines have a physically neutral and abstract “computational description” which (as it were) allows them to also be implemented in, say, a set of coke cans or in silicon.
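In programming terms, a physically neutral computational description is, in effect, an interface with many possible implementations. The following toy sketch (my own illustration, not Boden’s or Sloman’s — all the names are made up) shows one and the same “virtual machine” implemented in two very different media, while still needing to be implemented in something:

```python
from abc import ABC, abstractmethod

class Substrate(ABC):
    """A physical medium in which the virtual machine can be implemented."""

    @abstractmethod
    def flip(self, register: int) -> None: ...

    @abstractmethod
    def read(self, register: int) -> bool: ...

class Silicon(Substrate):
    """Registers as transistor-like boolean flags."""
    def __init__(self) -> None:
        self.bits = [False] * 8

    def flip(self, register: int) -> None:
        self.bits[register] = not self.bits[register]

    def read(self, register: int) -> bool:
        return self.bits[register]

class CokeCans(Substrate):
    """Registers as rows of cans: upright = 0, knocked over = 1."""
    def __init__(self) -> None:
        self.cans = ["upright"] * 8

    def flip(self, register: int) -> None:
        self.cans[register] = (
            "knocked over" if self.cans[register] == "upright" else "upright"
        )

    def read(self, register: int) -> bool:
        return self.cans[register] == "knocked over"

def virtual_machine(medium: Substrate) -> bool:
    """One and the same abstract computation, whatever physically realises it."""
    medium.flip(0)
    medium.flip(1)
    return medium.read(0) and medium.read(1)

# The computational description is substrate-neutral...
print(virtual_machine(Silicon()))   # True
print(virtual_machine(CokeCans()))  # True
# ...but it had to run *in* something to print either line.
```

The design point here is the one Boden makes: nothing in the abstract description fixes the medium, yet no output appears until some medium or other does the work.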

Now this is a good place to introduce Patricia Churchland and John Searle into this philosophical fray.


Part Two

Patricia Churchland on Why Brains Matter

The Canadian-American philosopher Patricia Churchland believes that biology (or physical implementation) matters when it comes to consciousness.

Churchland’s position is similar to that of Gerald Edelman, who also said that the mind (if not consciousness)

“can only be understood from a biological standpoint, not through physics or computer science or other approaches that ignore the structure of the brain”.

[Gerald Edelman was a biologist and Nobel laureate.]

Churchland herself takes Edelman's position to its logical conclusion when she (more or less) argues that in order to build an artificial brain, one would not only need to replicate the biological brain’s functions: one would also need to replicate everything physical about it.

Indeed, Boden herself mentions the complete replication of the human brain a couple of times in her book.

For example, she writes:

“If so, then no computer (possibly excepting a whole-brain emulation) could have phenomenal consciousness.”

Churchland has the backup of the American philosopher John Searle here. Searle writes:

“Perhaps when we understand how brains do that, we can build conscious artifacts using some non-biological materials that duplicate, and not merely simulate, the causal powers that brains have. But first we need to understand how brains do it.”

Of course, it can now be said that we may be able to have an example of artificial consciousness without also having an artificial brain. Nonetheless, isn’t it precisely this position which many dispute?

In any case, Churchland says that

“it may be that if we had a complete cognitive neurobiology we would find that to build a computer with the same capacities as the human brain, we had to use as structural elements things that behaved very like neurons”.

She continues:

“[T]he artificial units would have to have both action potentials and graded potentials, and a full repertoire of synaptic modifiability, dendritic growth, and so forth.”

It gets even less promising when Churchland says that

“for all we know now, to mimic nervous plasticity efficiently, we might have to mimic very closely even certain subcellular structures”.

Put that way, Churchland makes it seem as if artificial consciousness (if not artificial intelligence) is still a pipe dream.

Churchland then sums up this big problem by saying that

“we simply do not know at what level of organisation one can assume that the physical implementation can vary but the capacities will remain the same”.

That’s an argument which says that it’s wrong to accept the implementation-function “binary opposition” (to use a phrase from Jacques Derrida) in the first place. However, that’s not to say — and Churchland doesn’t say — that it’s wrong to concentrate on virtual machines, functions or cognition generally. It’s just wrong to completely ignore the “physical implementation” in brains. Or, as Churchland herself puts it at the beginning of another paper, it’s wrong to “ignore neuroscience” and focus entirely on function.

It’s worth bringing in John Searle again here.

John Searle on Why Brains Matter

John Searle once wrote the following words:

“For decades, research has been impeded by two mistaken views: first, that consciousness is just a special sort of computer program, a special software in the hardware of the brain; and second that consciousness was just a matter of information processing. The right sort of information processing — or on some views any sort of information processing — would be sufficient to guarantee consciousness.”

Searle continued:

“[I]t is important to remind ourselves how profoundly anti-biological these views are. On these views brains do not really matter. We just happen to be implemented in brains, but any hardware that could carry the program or process the information would do just as well.”

He then concluded with these words:

“I believe, on the contrary, that understanding the nature of consciousness crucially requires understanding how brain processes cause and realize consciousness.”

Oddly, Searle accuses many of those who accuse him of being a “dualist” of being… well, dualists.

Searle’s basic position on this is the following:

i) If AI theorists ignore (or simply play down) the physical biology of brains (or, in this essay’s case, the precise types of implementation of virtual machines),
ii) then that will surely lead to some kind of dualism in which non-physical abstractions (or virtual machines and “their” computations) basically play the role of Descartes’ non-physical and “non-extended” mind.

The position Searle is arguing against is expressed by Margaret Boden herself in the following way:

“As an analogy, think of an orchestra. The instruments have to work. Wood, metal, leather, and cat-gut all have to follow the laws of physics if the music is to happen as it should. But the concert-goers aren’t focussed on that. Rather, they’re interested in the music.”

In this passage from Boden, it can be seen that concert-goers (at the least) aren’t denying that “instruments have to work”, or that they’re made out of “metal, leather, and cat-gut”. It’s just that they only care about “the music”.

Is this the same kind of thing with Aaron Sloman and other AI theorists who only care about virtual machines and their computational states?

So can there be music and consciousness without physical instruments and physical machines?

On a different tack.

Searle himself is simply noting the radical disjunction created between the actual physical reality of biological brains (or hardware implementations in Boden’s case), and how many AI theorists explain and account for mind and consciousness.

However, it must be stressed here that Searle doesn’t believe that only biological brains can give rise to minds, consciousness and understanding. Searle’s position is that, at present, only biological brains do give rise to minds, consciousness and understanding.

Searle is therefore simply emphasising an empirical fact. In other words, he’s not denying the logical and metaphysical possibility that other things could bring forth mind, consciousness and understanding.

Of course, the people just referred to (who’re involved in artificial intelligence, cognitive science generally and the philosophy of mind) aren’t committed to what used to be called the “Cartesian ego”. (They don’t even mention it.) This means that the charge of “dualism” seems to be a little unwarranted. However, someone can be a dualist without being a Cartesian dualist. Or, more accurately, someone can be a dualist without that person positing some kind of non-material substance (formerly known as the Cartesian ego).

That said, just as the Cartesian ego is non-material and non-extended (or non-spatial), so too are the (as Searle puts it) “computational operations on formal symbols” which are much loved by those involved in AI, cognitive science and whatnot.

Now let’s return to Aaron Sloman.

Roger Penrose on Why Brains Matter

The least that can be said about Aaron Sloman’s position (as discussed earlier) is that it isn’t dualist… in the traditional sense of that word. That said, Sloman’s stress on computational (therefore mathematical) states may, instead, come across as Pythagorean in nature.

Yet again, these (as it were) Pythagorean states still need to be physically implemented. That is, they may well exist in a pure abstract (platonic?) space. However, when they (as it were) come along with consciousness, then they must be physically implemented.

So the English mathematician and mathematical physicist Roger Penrose may seem like an odd person to bring in here. However, he particularly picked up on (what he deemed to be) some of the (hidden) Platonist assumptions of most AI theorists and some philosophers.

Roger Penrose specifically raises the issue of the physical “enaction” (his word) of an algorithm. He wrote:

“The issue of what physical actions should count as actually enacting an algorithm is profoundly unclear.”

Then this problem is seen to lead — logically — to a kind of Platonism.

Penrose continues:

“Perhaps such actions are not necessary at all, and [] the mere Platonic mathematical existence of the algorithm would be sufficient for its ‘awareness’ to be present.”

Of course, no AI theorist would ever claim that even his Marvellous Algorithm doesn’t need to be enacted at the end of the day. In addition, he’d probably scoff at Penrose’s idea that

“the mere Platonic mathematical existence of the algorithm would be sufficient for its ‘awareness’ to be present”.

Yet surely Penrose does have a point.

If it’s literally all about algorithms or (in this essay’s case) virtual machines (which can be “multiply realised”), then why can’t the relevant algorithms or virtual machines do the required job entirely on their own? That is, why don’t these abstract algorithms or virtual machines automatically instantiate consciousness (to be metaphorical for a moment) while floating around in their own abstract spaces?
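Penrose’s distinction between an algorithm’s abstract existence and its physical enaction can also be put in a programmer’s terms. In the toy sketch below (my illustration, not Penrose’s — the function is obviously made up), the algorithm “exists” the moment it’s defined, yet nothing whatsoever happens until some physical machine actually runs it:

```python
def marvellous_algorithm() -> str:
    """An algorithm considered purely as an abstract description."""
    return "the actual experience of the intended qualium (allegedly)"

# At this point the algorithm 'exists' -- as a definition, an abstract
# mathematical object of sorts -- but it has evoked nothing whatsoever.
# Penrose's worry is that the strong-AI position flirts with the idea
# that this bare existence could already suffice for 'awareness'.

result = marvellous_algorithm()  # only here is the algorithm enacted,
print(result)                    # on real hardware, using real time,
                                 # energy and memory
```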

To repeat. Penrose’s position can be expressed in simple terms.

If the strong AI position is all about algorithms or virtual machines, then literally any implementation of a Marvellous Algorithm or Marvellous Virtual Machine would bring about consciousness and (in Penrose’s own case) understanding.

More specifically, Penrose focuses on a single qualium (his own singular of ‘qualia’). He writes:

“Such an implementation would, according to the proponents of such a suggestion, have to evoke the actual experience of the intended qualium.”

If the precise hardware doesn’t matter at all, then only the Marvellous Algorithm (or Marvellous Virtual Machine) matters. Of course, the Marvellous Algorithm (or Marvellous Virtual Machine) would need to be implemented… in something. Yet this may not be the case if we follow the strong AI position to its logical conclusion…

Or at least this conclusion can be drawn out of Roger Penrose’s own words.


Notes:

(1) What should we make of the logical-positivist-type phrase “the concept of zombie is incoherent”? (That’s if Margaret Boden is correctly expressing the positions of Daniel Dennett and Aaron Sloman.)

Well, there may well be very strong arguments against the concept of zombie. (Actually, arguments against what is said about philosophical zombies.) However, that concept doesn’t thereby become “incoherent”. (Or, as the logical positivists once put it about statements, it doesn’t thereby become “meaningless”.)

(2) It can’t be a solely functionalist or even verificationist position either, even if these two isms can easily run alongside behaviourism.


(*) The subject of qualia was mentioned a couple of times in the essay above. See my essay ‘Margaret Boden on Qualia and Artificial Intelligence (AI)’.