Thursday 19 January 2023

J.R. Lucas and Kurt Gödel Rage Against the Machines

The philosopher J.R. Lucas argued that all minds must be “alive” and (it can safely be assumed) human, not “dead” and “ossified” like “machines”. Lucas’s position is almost entirely dependent upon Kurt Gödel’s first incompleteness theorem.

J.R. Lucas (left) and Kurt Gödel.

(i) Alive Minds: Dead and Ossified Machines
(ii) J.R. Lucas’s Many Assumptions
(iii) A Single Theorem Destroys AI?
(iv) Conclusion

John R. Lucas (1929–2020) is well known for his paper ‘Minds, Machines and Gödel’, which will be focussed upon in this essay. More accurately, an often-quoted single passage from that paper will be discussed.

Alive Minds: Dead and Ossified Machines

Essentially, and perhaps a little retrospectively, J.R. Lucas’s argument is all about how a single theorem — Kurt Gödel’s first incompleteness theorem — destroys the possibility of artificial intelligence.

So could it be that artificial intelligence (perhaps only strong AI) is rendered impossible by Gödel’s theorem?

However, the technical details of this theorem won’t be tackled in this essay. Instead, an often-quoted passage from J.R. Lucas will be focussed upon. This is done primarily because this passage clearly puts the whole debate in its purely philosophical (i.e., rather than logical and metamathematical) context…

Indeed, it’s not only the wider philosophical context of Lucas’s paper which needs to be tackled: it’s simply its wider… context. Full stop.

Here’s the passage from J.R. Lucas:

“We are trying to produce a model of the mind which is mechanical — which is essentially ‘dead’ — but the mind, being in fact ‘alive,’ can always go one better than any formal, ossified, dead system can. Thanks to Gödel’s theorem, the mind always has the last word.”

The first thing that can be noted about this passage is how rhetorical and poetic it is, at least when bearing in mind that it’s part of Lucas’s academic paper, ‘Minds, Machines and Gödel’ (which was first presented in 1959 and published in 1961).

[Lucas concluded his paper with these words: “We can even begin to see how there could be room for morality […] No scientific enquiry can ever exhaust the infinite variety of the human mind.”]

Of course, some readers may see such phrases as “essentially ‘dead’”, “the mind, being in fact ‘alive’”, “ossified”, “dead system”, “[t]hanks to Gödel’s theorem”, “the mind always has the last word”, etc. as not being rhetorical at all. Such readers may believe that these phrases are simple (as it were) statements of fact. After all, machines are indeed dead and ossified, aren’t they?

On a different level, the words “[t]hanks to Gödel’s theorem” clearly show that Lucas had something to thank Gödel for.

So what was that?

Lucas thanked Gödel for proving that all minds must be alive (must be human?). And, to Lucas at least, much else followed from that.

Of course, we can accept that human minds must be alive — even if that’s an odd way of putting it. This means that Lucas was indirectly assuming that because human beings are alive, and minds (as it were) belong to human beings, then all minds (of whatever kind) must be alive too.

What’s more, Lucas would have presumably argued that after carefully analysing Gödel’s theorem and its repercussions, only then did he conclude that all minds must be alive. However, that line of reasoning doesn’t really show up in the passage above or even in his entire paper. Instead, it seems to be an inbuilt assumption on Lucas’s part.

In any case, perhaps Lucas’s rhetorical phrases above are of the kind you’d expect from (to use Lucas’s own words) “a dyed-in-the-wool traditional Englishman” who was also an Anglican, and the son of a Church of England clergyman. Of course, that can be taken either as an ad hominem or as simple biography. Alternatively, it can be seen as a rhetorical response to Lucas’s very own rhetoric.

[Whatever it is, it’s only one sentence of an essay of 2,000 words. Incidentally, Douglas Hofstadter (in his famous book Gödel, Escher, Bach), commenting on the very same passage from Lucas, claimed that J.R. Lucas was expressing his “transient moment of anthropocentric glory”.]

It’s also worth noting, in this “anthropocentric” respect, that Lucas also applied Gödel’s theorem against the anthropic mechanism thesis; and, more specifically, against determinism as it’s applied to human beings (or to human minds).

The basic argument here is that precisely because (as we shall see later) “the mathematician” (whoever that is) can “see” Gödelian (unprovable) truths, there must be at least one thing about human beings (or human minds) which can’t be predicted by computers… or by anything else (except God?). At least that’s what J.R. Lucas took Gödel’s theorem to show.

[Why does it follow that, if a “logical system” (or computer) can’t “reliably predict” a human being’s actions, then that human being must have free will? Is free will really all about whether an individual’s actions can be predicted by a computer or by anything else? Of course, this much-discussed issue won’t be tackled here.]
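Here, for what it’s worth, is the formal shape of that “seeing”, in the standard textbook presentation rather than in Lucas’s own notation. Take any consistent, recursively axiomatised formal system F that’s strong enough for basic arithmetic, and write Prov_F for its provability predicate and Con(F) for its consistency statement. Then, as a minimal sketch:

\[
\begin{aligned}
&F \vdash G_F \leftrightarrow \neg\mathrm{Prov}_F(\ulcorner G_F \urcorner) && \text{(diagonal lemma: } G_F \text{ asserts its own unprovability in } F\text{)}\\
&\mathrm{Con}(F) \;\Rightarrow\; F \nvdash G_F && \text{(first incompleteness theorem)}\\
&\mathrm{Con}(F) \;\Rightarrow\; G_F \text{ is true} && \text{(the step taken outside } F\text{, i.e. the mind having the last word)}
\end{aligned}
\]

Note that the final step (the one the machine, qua F, supposedly can’t take) is itself conditional on Con(F). That conditionality will matter later on.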

J.R. Lucas’s Many Assumptions

In the introduction, the philosophical context of the quoted passage from John Lucas was mentioned. Indeed, Lucas realised that some of his words had a somewhat obvious wider philosophical context — if only some nine years after writing them. In other words, there’s little actual philosophy in his paper ‘Minds, Machines and Gödel’ (of 1959).

As already stated, the simple wider context of Lucas’s claims also needs to be tackled, regardless of his — hidden — philosophical assumptions.

So now it can be argued that the philosophical and moral positions and beliefs which motivated Lucas’s technical paper are hidden under a forest of metamathematical and logical terms and arguments which may not (or do not) have the philosophical and, indeed, material consequences he believed them to have.

So, in his paper ‘Satan Stultified’ (1968), Lucas made up for that previous philosophical deficit by writing the following:

“The application of Gödel’s theorem to the problem of minds and machines is difficult. Paul Benacerraf makes the entirely valid ‘Duhemian’ point that the argument is not, and cannot be, a purely mathematical one, but needs some philosophical premises to be able to yield any philosophical conclusions. Moreover, the philosophical premises are of very different kinds.”

Still, it can be said that Lucas did seem to assume much in the passage quoted at the beginning of this essay. That is, in order to have made such categorical claims, much else must already have been taken to be true and/or well defined. (Again, Lucas’s mathematical, metamathematical and logical analysis of Gödel’s actual theorem may well be fine and dandy.)

So there seem to be assumptions about the “mechanical”, about what it is to be “alive”, what work the word “ossified” is doing, etc. And, more importantly, assumptions about the applications and/or consequences of Gödel’s theorem.

To put it basically: in Lucas’s picture, if something isn’t a human mind, then it must be dead and/or ossified. Or, to give him the benefit of the doubt, only the brains of biological creatures (or animals) can be “alive”.

Well, all that seems obviously true. No problem.

Yet we will see later that not all human minds can see the truth of any Gödel sentence. So what hope have cats and dogs, let alone worms, got?

So perhaps Lucas’s motivating stance wasn’t really about biological brains at all. (As it is with Roger Penrose — see my ‘Is Physicist Roger Penrose a (Tacit) Panpsychist?’.) It was actually about human brains. Indeed, it might not even have been about human brains. It might purely have been about (Cartesian?) human minds! (So was Douglas Hofstadter right about Lucas’s “transient moment of anthropocentric glory”?)

A Single Theorem Destroys AI?

In my view, John Lucas’s paper was a classic example (or case) of Gödel’s theorem being overstretched.

It’s also another case of the theorem being used to advance philosophical and moral positions which its upholders seemed to have held anyway (i.e., regardless of the theorem).

Ironically, Lucas himself was fully aware of all this — at least after writing his well-known paper, ‘Minds, Machines and Gödel’. Elsewhere, he wrote:

“Gödel’s theorem itself, like many other truths, can be taken either way: it can be taken as a formal proof sequence yielding certain syntactical results about a certain class of formal systems, but it can also be taken as giving us a certain type or style of argument, which we can understand, and, once having got the hang of it, adapt and apply in innumerable different circumstances.”

To return to the theme of this essay and to repeat part of the introduction.

In essence, Lucas’s argument is all about how a single theorem — Gödel’s incompleteness theorem — destroys the possibility of artificial intelligence.

So could it possibly be that artificial intelligence (or at least strong artificial intelligence) is rendered impossible by Gödel’s theorem?

Firstly, doesn’t Lucas’s Gödel-based argument — at least in a strong sense — render the minds of all non-mathematicians suspect too? After all, most human minds can’t recognise the truth of Gödel sentences! In fact, most mathematicians aren’t metamathematicians, so they too can’t recognise them.

So, if this is the right way of looking at this, then it also means that only some minds can discover unprovable truths. Or, more correctly, only some minds can find the truth of some Gödel sentences, but not the truth of other Gödel sentences.

Unless, that is, Lucas simply meant that all human minds have the potential to recognise (or see) Gödel truths.

So do all human minds have that potential?

But what does that mean?

And how could we know this?

What’s more, some commentators (philosophers, cognitive scientists, psychologists, etc.) are even suspicious of the notion of minds — or individual minds — gaining the unequivocal truth of formally unprovable Gödel sentences in the first place. And they’re certainly suspicious about the seeing of Gödel truths as having much — or even any — relevance for all minds.

The bottom line is that Gödel’s incompleteness theorems apply to a human or a computer/machine only if it corresponds to a consistent formal system (one strong enough to express arithmetic). So does Lucas’s argument depend on the existence (or reality) of perfectly consistent (indeed rational) mathematicians?

Well, many scientists and philosophers believe that human reasoning is inconsistent.

More importantly, doesn’t Lucas’s argument depend on the minds of only those metamathematicians who study Gödel’s theorems being fully consistent?

In addition, it’s not clear what the clause “a human mind cannot [or can!] formally prove its own consistency” means.

Prove in which sense? Prove with regard to what? Prove how a human mind deals with… everything? How a human mind deals with the whole of mathematics? A part of mathematics? Prove how a human mind can show the consistency and/or completeness (or lack thereof) of only logical and mathematical systems?…

Or is this simply about proof as it relates to human minds when they see Gödel truths?
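For what it’s worth, that clause is usually traced back to Gödel’s second incompleteness theorem, which, in its standard form (and on the assumption that the mind, or the machine, is modelled by a suitable consistent formal system F containing enough arithmetic), says:

\[
\mathrm{Con}(F) \;\Rightarrow\; F \nvdash \mathrm{Con}(F)
\]

That is, a consistent F of the relevant kind can’t prove its own consistency. Whether that formal fact licenses any of the readings above once F is swapped for “a human mind” is precisely what’s unclear.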

Again, doesn’t Lucas’s argument actually depend on a tiny number of metamathematicians being able to see some (i.e., not all) Gödel truths?

[Lucas himself argued that women and politicians are inconsistent, as can be seen from this qualified version of an earlier claim of his. (See Lucas’s paper ‘Against Equality Again’, in which he says the same thing.) However, as can be seen in this essay, Lucas’s argument is more specific than that. On my reading at least, the only people who can transcend “machines” — at least in this Gödelian sense — are a tiny subset of mathematicians… like Lucas himself.]

Moreover, and as the philosopher Judson Webb argued (in his 1968 paper ‘Metamathematics and the Philosophy of Mind’), one also needs to ask questions about whether human beings (well, a small subset of mathematicians) can really see the truth of a Gödelian statement G (in this case, as it applies to oneself).

Perhaps a better question would be: What is it to see a Gödel truth?

[See my ‘Platonist Roger Penrose Sees Mathematical Truths’.]

To sum up.

Conclusion

Immediately before the passage quoted at the beginning of this essay, J.R. Lucas wrote the following:

“However complicated a machine we construct, it will, if it is a machine, correspond to a formal system, which in turn will be liable to the Gödel procedure for finding a formula unprovable-in-that-system. This formula the machine will be unable to produce as being true, although a mind can see it is true.”

So must all machines and/or computers “correspond to a formal system”?

That depends. There may be much more to it than that.
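That said, there is a standard (if disputable) reason for the correspondence claim, one which Lucas doesn’t spell out in the quoted passage. A machine’s output of “theorems” forms a recursively enumerable set of sentences, and any recursively enumerable theory can be recursively axiomatised (Craig’s theorem), so Gödel’s theorem gets a grip on it, provided the machine is consistent and strong enough to encode arithmetic. Schematically, and as a sketch only:

\[
\text{machine } M \;\longmapsto\; T_M = \{\varphi : M \text{ eventually outputs } \varphi\} \quad \text{(a recursively enumerable set of sentences)}
\]

Whether human minds, or physically embodied computers, are usefully described in this way is part of what’s at issue in what follows.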

Different philosophers and even different logicians/mathematicians (or at least philosophers of logic and metamathematicians) take different views on all this.

For example, when it comes to computers and machines, philosophers such as David Chalmers emphasise the “causal heft” and innards of computers. That is, how programmes or “formal systems” are instantiated in something physical. (See my ‘Chalmers, Penrose and Searle on the (Implicit) Platonism and Dualism of Algorithmic AI’.) Yet, ironically, many AI theorists themselves ignore all this, and, in that sense, support J.R. Lucas’s Cartesian conception of both minds and computers.

