Tuesday, 11 January 2022

Chalmers, Penrose and Searle on the (Implicit) Platonism and Dualism of Algorithmic AI

Is AI literally all about algorithms? 

Top: David Chalmers and John Searle. Bottom: Roger Penrose

In many discussions of artificial intelligence (AI) it’s almost as if many — or even all — AI theorists and workers in the field believe that disembodied algorithms and computations alone can in and of themselves bring about mind, consciousness and understanding. (The same can be said, though less strongly, about the functions of the functionalists.) This (implicit) position is, as the philosopher John Searle once argued (more of which later), a kind of contemporary dualism in which abstract objects (i.e., computations and algorithms) bring about mind, consciousness and understanding on their own. Abstract algorithms, then, may well have become the contemporary version of Descartes’ mind-as-a-non-physical-substance — at least according to Searle.

It’s hard to even comprehend how anyone could believe that an algorithm or computation alone could be a candidate for possession of (or capable of bringing about) a (to use Roger Penrose’s example) conscious state or consciousness itself. (Perhaps no one does believe that.) In fact it’s hard to comprehend what that could even mean. Yet when you read (some/much) AI literature, that appears to be exactly what various theorists and workers in the field believe.

Of course no AI theorist would ever claim that his (as it were) Marvellous Algorithms don’t actually need to be implemented. Yet if the material (or nature) of the implementation is irrelevant, then isn’t implementation in toto irrelevant too?

So if we do have the situation of AI theorists emphasising algorithms or computations at the expense of literally everything else, then this ends up being, as Roger Penrose will argue later, a kind of Platonism (or abstractionism) in which implementation is either, at best, secondary; or, at worst, completely irrelevant.

Another angle on this issue is to argue that it’s wrong to accept this algorithm-implementation “binary opposition” in the first place. This means that it isn’t at all wrong to concentrate on algorithms. It’s just wrong to completely ignore the “physical implementation” side of things. Or, as the philosopher Patricia Churchland once stated, it’s wrong to completely “ignore neuroscience” (i.e., brains and biology) and focus entirely on algorithms, functions, computations, etc.

So, at best, surely we need the (correct) material implementation of such abstract objects.

AI’s Platonic Algorithms?

Let me quote a passage in which the mathematical physicist and mathematician Roger Penrose (1931-) raises the possibility that strong artificial intelligence theorists are implicitly committed to at least some kind of Platonism.

[Strong AI is defined in various different ways. Philosophers tend to define it in one way and AI theorists in another way. See artificial general [i.e., strong] intelligence.]

Firstly, Penrose raises the issue of the physical “enaction” and implementation of a relevant algorithm:

“The issue of what physical actions should count as actually enacting an algorithm is profoundly unclear.”

Then this problem is seen to lead — logically — to a kind of Platonism. Penrose continues:

“Perhaps such actions are not necessary at all, and to be in accordance with viewpoint A, the mere Platonic mathematical existence of the algorithm would be sufficient for its ‘awareness’ to be present.”

Of course no AI theorist would ever claim that even a Marvellous Algorithm doesn’t need to be implemented and enacted at the end of the day. In addition, he’d probably scoff at Penrose’s idea that “the mere Platonic mathematical existence of the algorithm would be sufficient for its ‘awareness’ to be present”.

Yet surely Penrose has a point.

If it’s literally all about algorithms (which can be, to borrow a term from the philosophy of mind, multiply realised), then why can’t the relevant algorithms do the required job entirely on their own? That is, why don’t these abstract algorithms automatically instantiate consciousness (to be metaphorical for a moment) while floating around in their abstract spaces?

In any case, Penrose’s position can be expressed in very simple terms.

If the strong AI position is all about algorithms, then literally any implementation of a Marvellous Algorithm (or Set of Magic Algorithms) would bring about consciousness and understanding.

More specifically, Penrose focuses on a single qualium. (The issue of the status, existence and reality of qualia will be ignored in this piece.) He writes:

“Such an implementation would, according to the proponents of such a suggestion, have to evoke the actual experience of the intended qualium.”

If the precise hardware doesn’t at all matter, then only the Marvellous Algorithm matters. Of course the Marvellous Algorithm would need to be implemented… in something. Yet this may not be the case if we follow the strong AI position to its logical conclusion. At least this conclusion can be drawn out of Penrose’s own words.

So now let’s tackle the actual implementation of Marvellous Algorithms.

The Implementation of Algorithms

Roger Penrose states that

“[S]uch an implementation [of a “clear-cut and reasonably simple algorithmic suggestion”] would, according to the proponents of such a suggestion, have to evoke the actual experience of the intended qualium”.

It’s hard to tell what that ostensible AI position could even mean. Of course that’s only if Penrose is being fair to AI theorists in his account. In other words, surely it can’t possibly be the case that an implementation (or “enaction”) of any algorithm (even if complex rather than “simple”) could in and of itself “evoke” an actual experience or evoke anything at all — a qualium or otherwise. Again, it’s hard to understand what all that could mean.

Of course if we accept that the human brain does implement algorithms and computations, then our fleshy “hardware” (or wetware) already does evoke consciousness.

So are we talking about a single algorithm here? Perhaps. However, even if multiple algorithms (which are embedded in — or are part of — a larger algorithm) are being discussed, implementation — or at least implementation alone — still can’t be the whole story. The whole story must surely depend on the nature of the hardware (whether non-biological, biological or otherwise), on how the algorithms are actually implemented and on many other (non-abstract) factors.

There’s another problem with what Penrose says (which may just be a matter of his wording) when he writes the following:

“It would be hard [] to accept seriously that such a computation [] could actually experience mentality to any significant degree.”

Surely AI theorists don’t argue that a relevant algorithm (or set of algorithms) “could actually experience mentality”: they argue that such an algorithm brings about, causes or whatever “mentality”. How on earth, in other words, could an abstract entity — an algorithm — actually experience mentality? If anything, this is a category mistake on Penrose’s part.

Of course Penrose might have simply meant this:

algorithm + implementation = the experience of mentality

Yet even here there are philosophical problems.

Firstly, what would it be that (to use Penrose’s words again) “experiences mentality”? The algorithm itself or the algorithm-implementation fusion? Could that fusion actually be an experience? Or would something else — such as a person or any physical entity — be needed to actually “have” (or instantiate) that experience?

As stated in the introduction, Roger Penrose and John Searle have accused (if that’s the right word) AI theorists of dualism and Platonism; and AI theorists have returned the favour by accusing Penrose and Searle of exactly the same isms.

So who exactly are the Platonists and dualists… and who are the physicalists?

Is AI Physicalist?

It’s ironic that Penrose should state (or perhaps simply hint) that his own position on consciousness is better described as “physicalist” (see physicalism) than the position of strong AI theorists. He puts this in the following passage:

“According to A, the material construction of a thinking device is regarded as irrelevant. It is simply the computation that it performs that determines all its mental attributes.”

Then comes another accusation of Platonism and indeed of dualism:

“Computations themselves are pieces of abstract mathematics, divorced from any association with particular material bodies. Thus, according to A, mental attributes are themselves things with no particular association with physical objects [].”

Thus Penrose seems to correctly conclude (though that’s only if we accept his take on what AI theorists are implicitly committed to) by saying “so the term ‘physicalist’ might seem a little inappropriate” for this AI position. Penrose’s own position, on the other hand,

“demand[s] that the actual physical constitution of an object must indeed be playing a vital role in determining whether or not there is genuine mentality present in association with it”.

Of course all this (at least in regard to both Penrose’s position and that of strong AI) seems like a reversal of terminology. Penrose himself recognises this and says that “such terminology would be at variance with some common usage” — certainly the common usage of many analytic philosophers.

In any case, the biologist and neuroscientist Gerald Edelman (1929–2014) takes a similar position to Penrose — at least when it comes to his emphasis on biology and the brain.

For example, Edelman once said that mind and consciousness

“can only be understood from a biological standpoint, not through physics or computer science or other approaches that ignore the structure of the brain”.

And then there’s the philosopher John Searle’s position.

Searle himself doesn’t spot an implicit Platonism in AI theory (as does Penrose): he spots an implicit dualism. Of course this is ironic because many AI theorists, philosophers and functionalists have accused Searle of being a “dualist” (see here).

Searle’s basic position is that if AI theorists, computationalists or functionalists dispute — or simply ignore — the physical and causal biology of brains and exclusively focus on syntax, computations/algorithms and functions, then that will surely lead to at least some kind of dualism. In other words, Searle argues that AI theorists and functionalists set up a radical disjunction between the actual physical (therefore causal) reality of the brain and the abstract terms in which they explain — or account for — intentionality, mind, consciousness and understanding.

So Searle’s basic position on all this is stated in the following:

i) If Strong AI proponents ignore (or play down) the physical biology of brains; and, instead, focus exclusively on syntax, computations/algorithms or functions,
ii) then that will surely lead to some kind of dualism in which non-physical (i.e., abstract) objects play the role of Descartes’ non-physical (i.e., “non-extended”) mind.

Again: Searle is noting the radical disjunction which is set up between the actual physical reality of biological brains and how these philosophers, scientists and theorists actually explain — and account for — mind, consciousness and understanding.

We also have John Searle’s position as it’s expressed in the following:

“If mental operations consist of computational operations on formal symbols, it follows that they have no interesting connection with the brain, and the only connection would be that the brain just happens to be one of the indefinitely many types of machines capable of instantiating the program.”

Now for the dualism:

“This form of dualism is not the traditional Cartesian variety that claims that there are two sorts of substances, but it is Cartesian in the sense that it insists that what is specifically mental about the brain has no intrinsic connection with the actual properties of the brain. This underlying dualism is masked from us by the fact that AI literature contains frequent fulminations against ‘dualism’.”

Despite all the words above, Searle doesn’t believe that only biological brains can give rise to understanding and consciousness. Searle’s position is that — empirically speaking — only brains do give rise to understanding and consciousness. So he’s emphasising what he takes to be an empirical fact. That is, Searle isn’t denying the logical — and even metaphysical — possibility that other entities can bring forth minds, consciousness and understanding.

Roger Penrose on Abstract-Concrete Isomorphisms

Is there some kind of isomorphic relation between the (as it were) shape of the abstract algorithm and the shape of its concrete implementation? Penrose himself asks this question a little less metaphorically. He writes:

“Does ‘enaction’ mean that bits of physical material must be moved around in accordance with the successive operations of the algorithm?”

Yet surely this does happen in countless cases when it comes to computers. (In this case, electricity is being “moved around”, transistors are opened and shut, etc.) Or as David Chalmers puts it in the specific case of simulating (or even mimicking) the human brain:

“[I]n an ordinary computer that implements a neuron-by-neuron simulation of my brain, there will be real causation going on between voltages in various circuits, precisely mirroring patterns of causation between the neurons.”

So Penrose isn’t questioning these successful implementations and enactions in computers (how could he be?) but simply saying that material must matter. Alternatively, does Penrose believe that implementations and enactions don’t make sense only when it comes to the singular case of consciousness?

The Australian philosopher David Chalmers (1966-) offers some help here.

David Chalmers on Causal Structure

Take a recipe for a meal.

To David Chalmers, the recipe is a “syntactic object”. However, the meal itself (as well as the cooking process) is an “implementation” which occurs in what he calls the “real world”.

Chalmers also talks about “causal structure” in relation to programmes and their physical implementation. Thus:

“Implementations of programs, on the other hand, are concrete systems with causal dynamics, and are not purely syntactic. An implementation has causal heft in the real world, and it is in virtue of this causal heft that consciousness and intentionality arise.”

Then Chalmers delivers his clinching line:

“It is the program that is syntactic, it is the implementation that has semantic content.”

More clearly, a physical machine is deemed to belong to the semantic domain and a syntactic programme is deemed to be abstract. Thus a physical machine is said to provide a “semantic interpretation” of the abstract syntax.

Yet how can the semantic automatically arise from an implementation of that which is purely abstract and syntactic?

Well… that depends.

Firstly, it may not automatically arise. And, secondly, it may depend on the nature of the implementation as well as physical material used for the implementation.

To go back to Roger Penrose’s earlier words on “enaction”, he asked this question:

“Does ‘enaction’ mean that bits of physical material must be moved around in accordance with the successive operations of the algorithm?”

And it’s here that — in Chalmers’ case at least — we arrive at the importance and relevance of “causal structure”.

So it’s not only about implementation: it’s also about the fact that any given implementation will have a certain causal structure. And, according to Chalmers, only certain (physical) causal structures will (or could) bring forth consciousness, mind and understanding.

Does the physical implementation need to be (to use a word that Chalmers himself uses) an “isomorphic” kind of mirroring (or a precise “mapping”) of the abstract? And if it does, then how does that (in itself) bring about the semantics?

One can see how vitally important causation is to Chalmers when he says that “both computation and content should be dependent on the common notion of causation”. In other words, an algorithm or computation and a given implementation will share a causal structure. Indeed Chalmers cites the example of a Turing machine when he says that “we need only ensure that this formal structure is reflected in the causal structure of the implementation”.

Chalmers continues:

“Certainly, when computer designers ensure that their machines implement the programs that they are supposed to, they do this by ensuring that the mechanisms have the right causal organization.”

In addition, Chalmers tells us what a physical implementation is in the simplest possible terms. And, in his definition, he again refers to “causal structure”:

“A physical system implements a given computation when the causal structure of the physical system mirrors the formal structure of the computation.”

Then Chalmers goes into more detail:

“A physical system implements a given computation when there exists a grouping of physical states of the system into state-types and a one-to-one mapping from formal states of the computation to physical state-types, such that formal states related by an abstract state-transition relation are mapped onto physical state-types related by a corresponding causal state-transition relation.”

Chalmers also stresses the notion of (correct) “mapping”. And what’s relevant about mapping is that the “causal relations between physical state-types will precisely mirror the abstract relations between formal states”. Moreover:

“What is important is the overall form of the definition: in particular, the way it ensures that the formal state-transitional structure of the computation mirrors the causal state-transitional structure of the physical system.”
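Chalmers’ definition can be read almost algorithmically. The following sketch (a minimal illustration, not Chalmers’ own formalism; the function name, the toy flip-flop computation and the toy “voltage” system are all invented here) checks whether a one-to-one mapping from formal states to physical state-types makes the causal transitions mirror the formal ones:

```python
# Illustrative sketch of Chalmers' implementation condition: a physical
# system implements a computation when a one-to-one mapping from formal
# states to physical state-types makes the causal state-transitions
# mirror the formal state-transitions. All names and the two toy
# systems below are invented for illustration.

def implements(formal_step, causal_step, mapping, formal_states):
    """True iff `mapping` commutes with the two transition functions:
    for every formal state s,
        mapping[formal_step(s)] == causal_step(mapping[s])."""
    # One-to-one: distinct formal states map to distinct physical state-types.
    if len(set(mapping.values())) != len(mapping):
        return False
    return all(
        mapping[formal_step(s)] == causal_step(mapping[s])
        for s in formal_states
    )

# A toy two-state computation (a flip-flop)...
formal_step = lambda s: {"A": "B", "B": "A"}[s]
# ...and a toy physical system whose voltage state-types alternate likewise.
causal_step = lambda p: {"high": "low", "low": "high"}[p]
mapping = {"A": "high", "B": "low"}

print(implements(formal_step, causal_step, mapping, ["A", "B"]))  # True
```

On this reading, implementation is a substantive constraint rather than a trivial one: a physical system fails to implement the flip-flop not because of its material as such, but whenever no such commuting one-to-one mapping between its causal state-types and the formal states exists.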

Chalmers also states the following:

“While it may be plausible that static sets of abstract symbols do not have intrinsic semantic properties, it is much less clear that formally specified causal processes cannot support a mind.”

In Chalmers’ account, then, the (causal) concrete does appear to be vital in that the “computational descriptions are applied to physical systems [because] they effectively provide a formal description of the systems’ causal organisation”.

So what is it, exactly, that’s being described?

According to Chalmers, the physical (or concrete) “causal organisation” is being described. And when described, it becomes an “abstract causal organisation”. (Is the word “causal” at all apt when used in conjunction with what is abstract?) However, the causal organisation is abstract in the sense that all peripheral non-causal and non-functional aspects of the physical system are simply factored out. Thus all we have left is an abstract remainder. Nonetheless, it’s still a physical (or concrete) system that provides the (as it were) input and an abstract causal organisation (captured computationally) that effectively becomes the output.

Chalmers develops his theme. He writes:

“It is easy to think of a computer as simply an input-output device, with nothing in between except for some formal mathematical manipulations.”

However:

“This way of looking at things [] leaves out the key fact that there are rich causal dynamics inside a computer, just as there are in the brain.”

Chalmers has just mentioned the human brain. Indeed he discusses the “mirroring” of the brain in non-biological physical systems. Yet many have argued that the mere mirroring of the human brain defeats the object of AI. However, since this raises its own issues and is more particular than the prior discussion about the relation between abstract algorithms and their concrete implementations, nothing more will be said about the brain here.

Conclusion

As stated in the introduction, it’s of course the case that most — or perhaps all — adherents of Strong AI would never deny that their abstract objects (i.e., algorithms and computations) need to be implemented in the (to use Chalmers’ words) “physical world”. That said, the manner of that implementation (as well as the nature of the physical material which does that job) seems to be seen as almost — or even literally — irrelevant to them. It’s certainly the case that brains and biology are often played down. (There are AI exceptions to this.)

Yet it must be said that not a single example of AI success has been achieved without implementation. Indeed that seems like a statement of the blindingly obvious! However, Roger Penrose, John Searle and David Chalmers are focussing on mind, consciousness and understanding and tying such things to biology and brains. So even though innumerable algorithms have been successfully implemented in innumerable physical/concrete ways (ways which we experience many times in our everyday lives — from our laptops to scanning devices), when it comes to mind, consciousness and understanding (or “genuine intelligence” in Penrose’s case), things may be very different. In other words, there may be fundamental reasons as to why taking a Platonic position (as, arguably, most AI theorists and workers do) on algorithms and computations will come up short when it comes to consciousness, mind and understanding.

Penrose particularly stresses the biology of consciousness in that he notes the importance of such things as microtubules and (biologically-based) quantum coherence. Chalmers stresses causal structure. (He doesn’t tie causal structure exclusively to human brains; though he does believe — like Patricia Churchland — that many AI theorists ignore it.) And Searle most certainly does stress brains, biology and causation.

And now it must be stated that Chalmers, Penrose and Searle don’t actually deny the possibility that understanding and consciousness may be successfully instantiated by artificial entities in the future. What they do is offer their words of warning to AI theorists and workers in the field.

So, to finish off, let me quote a passage from Penrose in which he provides some hope for AI theorists — though only if they take on board the various fundamental facts (as he sees them) about animal brains. Penrose writes:

“[I]t should be clear [] that I am by no means arguing that it would be necessarily impossible to build a genuinely intelligent device [].”

But then comes a large ‘but’ (or qualification):

“[S]o long as such a device were not a ‘machine’, in the specific sense of being computationally controlled. Instead it would have to incorporate the same kind of physical action that is responsible for evoking our own awareness. Since we do not yet have any physical theory of that action, it is certainly premature to speculate on when or whether such a putative device might be constructed.”

As can quickly be seen, Penrose’s hope-for-AI may not in fact amount to much — at least not if one accepts Penrose’s own arguments and positions.

So whatever the case is, Penrose, Chalmers and Searle argue — in their own individual ways — that biology, brains and causation are indeed important when it comes to strong (i.e., not weak) artificial intelligence.


Saturday, 8 January 2022

Graham Priest’s Radical Dialetheic Logic and Reality

Are there “true contradictions” in the world?

The word dialetheism comes from the Greek δι (di- ‘twice’) and ἀλήθεια (alḗtheia ‘truth’). It’s the view that there are some statements which are both true and false. In other words, it’s the view that there can be a true statement whose negation is also true. In the literature, these statements are called “true contradictions” or (to use Graham Priest’s neologism) dialetheia.

The philosopher and logician Graham Priest (1948-) is the main proponent of dialetheism.

Priest is an Australian who now lives in New York. He’s written many books and had numerous articles published in journals of philosophy and logic. Priest is also the Distinguished Professor of Philosophy at the CUNY Graduate Center and an “honorary” professor at the University of Melbourne. He was educated at the University of Cambridge and at the London School of Economics.

Priest also practices karate and tai chi. His interest in Eastern philosophy — particularly Buddhist logic and philosophy (see here) — has strongly influenced his dialetheism.

Dialetheism

Dialetheism isn’t a formal logic. It’s (as Graham Priest himself puts it) “a theory of truth”. And this emphasis leads dialetheists to construct a logic which deals with truth. Or, at the least, with truth as it’s thrown up primarily by various paradoxes in set theory and elsewhere. Indeed Graham Priest himself states that

“the whole point of the dialetheic solution to the semantic paradoxes is to get rid of the distinction between object language and meta-language”.

(Clearly, this is a Tarski-influenced focus on the nature of truth.)

In what follows I question the truth-for-dialetheism issue by asking questions about dialetheism’s relation to (to use Priest’s word) “reality”.

¬Boolean Negation

Firstly, Graham Priest makes the claim that

“[e]ven dialetheists, after all, need to show that they don’t accept 1 = 0”.

On the surface it would seem that if dialetheists don’t — or won’t — accept 1 = 0, then neither must they accept ¬A ∧ A. And this isn’t simply the case because one proposition involves numbers and the equality sign, and the other doesn’t.

Priest clarifies this 1 ≠ 0/¬A ∧ A dichotomy when he tackles Boolean negation.

According to Priest:

“[I]f not-A is compatible with A, then asserting not-A cannot constitute a denial.”

A classical logician may be dumbfounded by the claim that “¬A is compatible with A”. However, that’s because dialetheists have their own take on the issue of negation and denial. Priest explains matters, and what he says, at least at first sight, seems to make sense. That is, to “deny A one must assert something which is incompatible with it”. True. So this must mean that on Priest’s view it’s the case that ¬A isn’t incompatible with A: it must be… compatible with it.

Thus, to Priest, the ¬A in A ∧ ¬A isn’t an example of Boolean negation.

So is it an example of any kind of negation?

It’s hard to say. In any case, the dialetheic ¬A, according to Priest, “cannot constitute a denial”.

Priest repeats his position by saying that “[t]o deny A is simply to assert its negation”. Thus a dialetheic ¬A isn’t the same as a Boolean ¬A.
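Priest’s own formal system, the Logic of Paradox (LP), makes this compatibility concrete. In the minimal sketch below (a standard three-valued presentation of LP rather than Priest’s own notation, with all function names invented here), a third value, “both true and false”, is designated (i.e., assertible) and is its own negation. Thus A ∧ ¬A can itself come out assertible:

```python
# Minimal sketch of Priest's three-valued Logic of Paradox (LP).
# Values: "t" (true), "b" (both true and false), "f" (false).
# "t" and "b" are designated, i.e. assertible.

DESIGNATED = {"t", "b"}

def neg(a):
    # LP negation flips t and f; "both" is its own negation.
    return {"t": "f", "b": "b", "f": "t"}[a]

def conj(a, b):
    # Conjunction takes the lesser value under the order t > b > f.
    order = {"t": 2, "b": 1, "f": 0}
    return min(a, b, key=lambda v: order[v])

# If A has the value "both", A and not-A are compatible in Priest's
# sense: the contradiction A & not-A is itself designated.
A = "b"
print(conj(A, neg(A)) in DESIGNATED)  # True

# With a classically true A, the contradiction is not assertible.
A = "t"
print(conj(A, neg(A)) in DESIGNATED)  # False
```

This is why, in LP, asserting ¬A “cannot constitute a denial” of A: when A takes the “both” value, ¬A takes it too, and both are assertible at once.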

So what does Priest mean by “compatible” (or “incompatible”)?

Negation and Dialetheic Consistency

Priest explains his position in this way:

“[W]e all, from time to time, discover that our views are, unwittingly, inconsistent. A series of questions prompts us to assert both A and ¬A for some A.”

Prima facie, this seems to be the case. That is, perhaps it’s wise to “assert” both A and ¬A if both appear to be true. Alternatively, we may assert A and ¬A if both have equal evidential, logical or philosophical weight. Nonetheless, at the end of the day most people will hope that either A or ¬A will be shown to be true (or perhaps simply to be more acceptable). Thus our initial acceptance of both A and ¬A certainly isn’t also a commitment to any (as Priest puts it) “inconsistency of the world”. In other words, our acceptance of both A and ¬A tells us more about us than it tells us about reality or the world.

For example, there may be equal evidence (if there could ever be an absolute equality of evidence) for these two statements:

(1) “Jones was murdered.”
(2) “Jones killed himself.”

Of course it couldn’t have been the case that Jones was both murdered and killed himself.

So let Priest continue.

He asks us this question: “Is the second assertion [i.e., ¬A] a denial of A?” Yes, it certainly seems to be. Yet Priest disagrees. Indeed Priest finishes off by saying that ¬A

“is conveying the information that one accepts ¬A, not that one does not accept A”.

In terms only of classical logic, what Priest states seems to be false. However, if we return to the two statements mentioned earlier, then, yes, my acceptance of the statement “Jones was murdered” may not also mean that I must deny the statement “Jones killed himself”. In tandem with my remarks about equal evidential or logical/philosophical weight, my provisional acceptance of the statement “Jones was murdered” doesn’t mean that I will — or that I must — also outrightly deny (or reject) the statement that “Jones killed himself”.

I’ve cheated a little here in order to make things simpler.

For a start, statements (or propositions) aren’t usually symbolised with the capital letters (or symbols) A and B. My two statements (or propositions), in the tradition, are usually symbolised by the letters p and q. Thus A and its negation aren’t statements (or propositions): they are states of affairs. (A state of affairs need not “obtain” or be actual.) In this case, we have the (past) state of affairs Jones being murdered and the state of affairs Jones killing himself — one of which must have obtained. (It can still be argued that these two states of affairs must still be expressed by statements; and, indeed, by natural-language sentences.)

What’s more, the statement “Jones killed himself” isn’t even a strict and clear-cut negation of “Jones was murdered” (i.e., it’s not a Boolean negation). A strict and clear-cut negation of “Jones was murdered” should be “It is not the case that Jones was murdered”, not “Jones killed himself”. Of course the statement “Jones killed himself” can be seen as some sort of denial — or rejection — of the statement “Jones was murdered”; though it’s not a strict (Boolean) negation.

So perhaps all this partly explains Priest’s position of rejecting Boolean negation as well as accepting a dialetheic… what?

In other words, if the symbolism “A ∧ ¬A” is seen as including only the autonym “A” (i.e., a self-referential symbol without content), then clearly it can’t be accepted. Only they aren’t autonyms in Priest’s book — they have content. That is, the symbolism (i.e., mention, not use) “A ∧ ¬A” expresses the dialetheic acceptance of “contradictories” in reality (or in the world).

Consistency and the World

The philosopher and logician Bryson Brown (in his paper ‘On Paraconsistency’) also states the importance of consistency (actually, inconsistency) for dialetheism. (Informatively, he also tells us that dialetheists are “radical paraconsistentists”.) He writes:

“[Dialetheists] hold that the world is inconsistent, and aim at a general logic that goes beyond all the consistency constraints of classical logic.”

Deriving the notion of an inconsistent world from our psychologistic and/or epistemological limitations (as well as from accepted notions in the philosophy of science) is problematic. (See psychologism.) In other words, the epistemological conclusion of an inconsistent world can’t also be applied to the world itself. This statement, however, may well be deemed to be a crude and naive metaphysically-realist position. That said, if one self-consciously “goes beyond all [ ] consistency”, then that surely implies that, in their heart of hearts, dialetheists know that the world is still either consistent or neither consistent nor inconsistent. (See the later words on Spinoza.)

Another way to put all this is in terms of some of the paradoxes of set theory. Bryson Brown says that

“the dialetheists take paradoxes such as the liar and the paradoxes of naïve set theory at face value”.

So it may be the case that dialetheists choose — for logical and/or epistemological reasons — to accept paradoxes even though they also believe that, ultimately, they aren’t true of the world. Then again, Brown continues by saying that dialetheists “view these paradoxes as proofs that certain inconsistencies are true”…

But true of what?

True only of the logical and linguistic expressions of the paradoxes or true of the world itself?

Again, this stress on the world (or reality) may betray a naïve, crude and, perhaps, an old-fashioned view of logic. Nonetheless, Priest himself does mention “reality” when he talks of consistency and inconsistency.

For example, when discussing the virtue of simplicity, Priest asks the following question:

“If there is some reason for supposing that reality is, quite generally, very consistent — say some sort of transcendental argument — then inconsistency is clearly a negative criterion. If not, then perhaps not.”

As it is, how can the world be either inconsistent or consistent? Indeed a position can be taken on this which is similar — or parallel — to Baruch Spinoza’s philosophical point that the world can only… well, be. The following is how Spinoza himself put it:

“I would warn you that I do not attribute to nature either beauty or deformity, order or confusion. Only in relation to our imagination can things be called beautiful or ugly, well-ordered or confused.”

Here, yet again, this may be betraying an implicit (naïve) realism. Yet it must still be stated that what we say about the world (whether via science, philosophy, mathematics or logic) may well be consistent or inconsistent. (We may also say, with Spinoza, that the world is “beautiful” or “ugly”.) However, the world itself can be neither consistent nor inconsistent. Thus it seems to follow that inconsistency is neither a “negative criterion” nor a positive criterion. The only way out of this, as far as can be seen, is if Priest squares his dialetheic logic with findings and/or positions in metaphysics and in science; which indeed he does from time to time. (For example, take Priest’s references to quantum mechanics and to Buddhist philosophy/logic.)

Science was mentioned a moment ago and perhaps science (or at least physics) is coming to Priest’s rescue here.

There’s a lot of talk of simplicity and consistency (along with the other positive criteria which play a part in theory choice) when it comes to scientific theories. Priest, without actually mentioning science, also says that “simplicity and consistency may well pull in opposite directions”. More specifically, “a high degree of simplicity may outweigh a low degree of inconsistency”. Thus, if this is applied to scientific theories (or, more simply, to theories generally), then it must also apply to the logic/s which account for (or make sense of) such theories. Priest must therefore believe that dialetheic logic does a good job of capturing these movements in “opposite directions”. Nonetheless, stressing simplicity (at the expense of consistency), or consistency (at the expense of simplicity), doesn’t seem to entail — or even imply — an inconsistent reality or even any specifically dialetheic position. Alternatively, there’s nothing hidden in these positions which hints at the fact that it may be “rational to accept a contradiction”. That said, Priest does go on to say that

“there is nothing to stop the person accepting both their original view and the objection put to it, which is inconsistent with it”.

As pointed out earlier, talk of acceptance or non-acceptance is psychologistic and/or epistemological in nature, not logical or ontological. In other words, the predicaments of our epistemological and psychological positions shouldn’t be read into the world.

Finally, Priest’s point about a “transcendental argument” (see transcendental arguments) is apt because it can be said that in order for the world to be fully understood (in, say, a Kantian manner) it would also need to be consistent. However, Priest could either say that reality isn’t fully understood or that such a transcendental argument isn’t needed in the first place. That may be the case because we can happily accept that the world is inconsistent without engendering too many problems. And the way we could bypass any (serious) problem is by utilising dialetheic logic!

The Dialetheic Use of Possible Worlds

It’s difficult to tell how much Graham Priest depends on the notion — or existence! — of possible worlds to justify his dialetheism.

Take this example of Priest’s possible-worlds talk:

“It might be thought that the fact that ¬(A ∧ ¬A) holds at a world entails that one or other of A and ¬A fails; but this does not necessarily follow.”

Is Priest saying that ¬(A ∧ ¬A) holds at the actual world (i.e., our world), though not “necessarily” at all possible worlds? Indeed, considering the aforementioned views on negation and inconsistency, it may hold at any — or every — possible world too! At other possible worlds, Priest believes that A ∧ ¬A can hold because of his views on both negation and consistency. This means that the dialetheic A ∧ ¬A is not necessarily false.

Priest offers us the following symbolisation of his position:

¬A is true at w iff A is false at w.
¬A is false at w iff A is true at w.

So what about Priest’s prior A ∧ ¬A?

According to Priest, “it is possible for A to be both true and false at a world”. That, of course, is the dialetheic position. Does it require possible worlds when Priest, elsewhere, has already said that it’s also applicable at our world — the actual world?
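Priest’s claim that “it is possible for A to be both true and false at a world” is standardly modelled by his three-valued Logic of Paradox (LP). The following Python sketch is my own encoding of that semantics (not Priest’s notation); the negation clauses mirror the two biconditionals quoted above, with the third value read as “both true and false”:

```python
# A minimal sketch of Priest's three-valued Logic of Paradox (LP).
# Values: 't' (true only), 'b' (both true and false), 'f' (false only).
# A value is "designated" (counts as holding) if truth is among its components.

TRUE_IN = {'t': True, 'b': True, 'f': False}   # is the formula true?
FALSE_IN = {'t': False, 'b': True, 'f': True}  # is the formula false?

def neg(v):
    # Priest's clauses: ¬A is true iff A is false; ¬A is false iff A is true.
    is_true, is_false = FALSE_IN[v], TRUE_IN[v]
    return {(True, False): 't', (True, True): 'b', (False, True): 'f'}[(is_true, is_false)]

def conj(v1, v2):
    # A ∧ B is true iff both conjuncts are true; false iff at least one is false.
    is_true = TRUE_IN[v1] and TRUE_IN[v2]
    is_false = FALSE_IN[v1] or FALSE_IN[v2]
    return {(True, False): 't', (True, True): 'b', (False, True): 'f'}[(is_true, is_false)]

def designated(v):
    return TRUE_IN[v]

# If A takes the value 'both', then A ∧ ¬A is also 'both' -- and hence (also) true.
v_A = 'b'
v_contradiction = conj(v_A, neg(v_A))
print(v_contradiction, designated(v_contradiction))  # b True
```

On this picture, a sentence holds at a world when its value is designated; and since “both” is designated, A ∧ ¬A can hold at a world without holding only truly.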

Not surprisingly, Priest concedes that “it is natural to ask whether there really are possible worlds at which something may be both true and false”. Priest thinks that this is a “fair question”. Nonetheless, he also thinks that “the best reasons for thinking this to be possible are also reasons for thinking it to be actual”. That seems to follow from his possible-worlds logic. That is, since possible worlds are infinite in number, we can argue the following:

i) If it’s possible for A ∧ ¬A to be true at at least one possible world,
then, with an infinite number of worlds,
ii) it’s also possible that A ∧ ¬A is true at our actual world.

If we move away from scientific and set-theoretical reasons for embracing dialetheism, the acceptance of A ∧ ¬A can also be justified by the claim — which some (or all) dialetheists have made — that logic can’t prove anything outside itself (as it were). That’s because anything is possible at a possible world. (Yes; though what about the actual world?) In terms of the epistemology of dialetheism (which was mentioned earlier), dialetheists also subscribe to the traditional (Cartesian) view that nothing is certain except our own experiences and mental states.

*****************************

Notes

(1) Graham Priest does make an (or even the) important distinction between paraconsistent logic and dialetheic logic. In terms of paraconsistent logic, Priest states that contradictories

“may [be] set [ ] together in possible worlds, to provide paraconsistent logics, logics which allow for the sensible handling of inconsistent information and theories”.

In terms of dialetheism, on the other hand, logics

“may set contradictories together in the actual world, to allow for things such as a simple and natural theory of truth”.
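The practical difference shows up in “explosion” (from A and ¬A, infer any B whatsoever), which is classically valid but fails in paraconsistent logics such as LP. A minimal sketch (my own encoding, assuming LP’s three values with ‘t’ and ‘b’ designated) exhibits a countermodel:

```python
# Explosion check in LP: 'b' (both true and false) is designated; 'f' is not.
DESIGNATED = {'t', 'b'}

# Counterexample to explosion (A, ¬A, therefore B):
v_A = 'b'          # A is both true and false
v_not_A = 'b'      # so is its negation (LP negation maps 'b' to 'b')
v_B = 'f'          # an arbitrary, unrelated claim that is false only

premises_designated = v_A in DESIGNATED and v_not_A in DESIGNATED
conclusion_designated = v_B in DESIGNATED
print(premises_designated, conclusion_designated)  # True False
```

Because there is a valuation on which both premises are designated while the conclusion isn’t, the inference is invalid in LP; in classical two-valued logic no such valuation exists, so a contradiction there entails everything.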

(2) Bryson Brown puts the position of dialetheism in terms of the Liar Paradox. This paradox can be formulated in this manner:

L: This sentence is false.

Through a convoluted logical procedure (which is convincing if we take L as it stands), Brown comes to the conclusion that “L is true if and only if L is false”. From there he also concludes that “it follows that L must be both” true and false. Of course accepting the nature of the Liar Paradox (in any of its forms) isn’t the same as accepting the dialetheic position. But let’s leave that there.
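Brown’s biconditional can be checked mechanically: any value assigned to L must equal the value of L’s own negation. In the following Python sketch (my own encoding, using LP’s negation table), classical two-valued logic offers no such value, whereas the dialetheic value “both” is a fixed point of negation:

```python
# The Liar: L says of itself that it is false, so any value v assigned to L
# must satisfy v(L) = v(not-L), i.e. v must be a fixed point of negation.
NEG = {'t': 'f', 'f': 't', 'b': 'b'}  # LP negation; 'b' = both true and false

classical_solutions = [v for v in ('t', 'f') if NEG[v] == v]
lp_solutions = [v for v in ('t', 'b', 'f') if NEG[v] == v]
print(classical_solutions)  # [] -- no consistent classical assignment
print(lp_solutions)         # ['b'] -- the dialetheic value satisfies the Liar
```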

My own problem with L itself is that it has no (semantic) content. And if it has no content, then that may be partly — or even wholly — responsible for the paradox itself.

In any case, it can happily be admitted that most logicians and philosophers disagree with any fixation on L’s semantic content; though some do agree with it. The other thing is that other formulations of the Liar Paradox — such as “Everything the Liar says is false” and “Everything I say is false” — seem less problematic from the point of view of semantic content.


Sunday, 2 January 2022

Ludwig Wittgenstein on Mathematical Truth


 

“A mathematician is bound to be horrified by my mathematical comments. [] He has learned to regard them as something contemptible [].”

Wittgenstein (in his Philosophical Grammar, 1932)

Much of what Ludwig Wittgenstein (1889–1951) wrote is hard to decipher. And that’s the primary reason why there’s what can (sarcastically) be called a Wittgenstein Interpretation Industry. This also explains why many loyal Wittgensteinians get so hot under the collar when other commentators “get Wittgenstein wrong!”. Indeed, unlike many other philosophers, much of the debate around Wittgenstein’s work isn’t about whether or not what he wrote is true or false, well argued or badly argued, worthwhile or worthless, etc. — but about what he actually meant.

Lee Braver (in his Groundless Grounds: A Study of Wittgenstein and Heidegger) puts all this better when he wrote the following words:

“[Wittgenstein’s] writing style is perhaps the most obscure of all the great analytic figures, leading to an unusual state of affairs: ‘one of the most striking characteristics of the secondary literature on Wittgenstein is the overwhelming lack of agreement about what he believed and why.’ Already in 1961, the literature on the Tractatus was compared to literary scholarship in dissension and sheer mass. His opaque prose and sparse argumentation have given rise to a cottage industry of exegetical work and scholarly contention [].”

Thus all that is but a preamble to the essay which follows. It can also be said that I’m getting my excuses out of the way before I begin.

Mathematical Truth and Mathematical Correctness

“The terms ‘sense’ and ‘nonsense’, rather than the terms ‘true’ and ‘false’, bring out the relation of mathematical to nonmathematical propositions.”

Wittgenstein (Lectures, Cambridge 1932–35)

The “late Wittgenstein” argued (at least on one paraphrase, or even interpretation) that if the operations on numbers result in truths, then shouldn’t it also be the case that the numbers themselves (in whichever order) have truth-conditions or references? (Perhaps the term reference is better suited here.) In basic terms, shouldn’t each number correspond or refer to something? But if the numbers themselves have no truth-conditions or references, then how can the concept of truth be carried over to the operations on numbers? Surely you can’t have one without the other.

And all that is largely why Wittgenstein emphasised what he called “correctness” (as well as mathematical “grammar”) rather than truth.

The operations on numbers also fall under Wittgenstein’s meaning-is-use theory.

So does the correct use of numbers also imply truth?

There may be a correct and incorrect way to operate on numbers; though is there a true way to operate on numbers? The words “truth” and “correctness” aren’t, after all, synonyms.

Wittgenstein believed that correctness is determined by rules, which are themselves a product of persons, conventions and/or language games. Truth, on the other hand, has been deemed by many philosophers, mathematicians and laypeople to be separable from minds, conventions, etc. (i.e., as in the various philosophical realisms).

Wittgenstein himself wrote the following on rules and their role in mathematics:

“Because they all agree in what they do, we lay it down as a rule, and put it in the archives. Not until we do that have we got to mathematics. One of the main reasons for adopting this as a standard, is that it’s the natural way to do it, the natural way to go — for all these people.”

Thus, according to Wittgenstein, we can count in a correct way; though not in a true way. Truth could only enter the equation if numbers themselves were true of something else or if they referred to something else. In other words, the inscriptions or symbols must have entities to which they can correspond or refer. Only then, on a Platonist picture at least, could we talk of truth in mathematics.

Of course many people intuitively feel that there must be more to mathematics than mere inscriptions/symbols on the page and the correct rules (or correct mathematical “grammar”) for using those inscriptions/symbols. (This position is usually called formalism; though Wittgenstein went way beyond, say, David Hilbert’s formalism.) But do we think in the same way when we play a game of chess? Do we expect the pieces and the moves to somehow correspond or refer to things (whether people or events) external to the actual game of chess? It can be granted that if someone takes chess literally, then he may well think in terms of the pieces and moves corresponding — or referring — to real historical battles, political situations and well-known historical persons. (This may well be the case for certain individuals.) However, such correspondence-relations aren’t in fact needed when one plays chess. Indeed chess can be seen in purely abstract terms despite the fact that we play the game with castles, pawns, bishops, etc. More abstract forms (which have no correspondents or referents in — or to — the external world) could easily become the substitutes of castles, pawns, etc. And such substitutions would have no important impact on the nature of the game itself.

Thus — on this Wittgensteinian reading — there are correct moves in chess; though there are no such things as true moves… Of course that’s unless we’re using the word “correct” as a literal synonym for the word “true”!

[Note: Although no one expects the individual words in a natural-language statement to have their own truth conditions, the names within such statements do have their own references, and other words may have their extensions. Few people have demanded that connectives, prepositions, etc. have any of these things.]

Mathematical Grammar

“Consider Professor Hardy’s article (‘Mathematical Proof’) and his remark that ‘to mathematical propositions there corresponds — in some sense, however sophisticated — a reality.’ [] We think of ‘reality’ as something we can point to. It is this, that. Professor Hardy is comparing mathematical propositions to the propositions of physics. This comparison is extremely misleading.”

Wittgenstein (Lectures on the Foundations of Mathematics)

Wittgenstein believed that mathematical statements are grammatical in nature. Thus if the grammar is in order, then the mathematics is correct. So he elaborated on the quote above with the following few words about Blaise Pascal:

“The mathematician Pascal admires the beauty of a theorem in number theory; it’s as though he were admiring a beautiful natural phenomenon. It’s marvellous, he says, what wonderful properties numbers have. It’s as though he were admiring the regularities in a kind of crystal.”

So what about the following statement?

Mathematical statements are correct because such statements are true.

That’s a possible riposte.

Wittgenstein might have simply turned that statement on its head and claimed the following:

Mathematical statements are true precisely because they’re grammatically correct.

That could be to concede that mathematical grammar is parasitical on mathematical truth. Alternatively, it might be to concede that mathematical truth is parasitical on mathematical grammar. In addition, if Wittgenstein dispensed with mathematical truth, then perhaps we can also dispense with mathematical grammar… Or at least it can be said that you can’t have one without the other.

Wittgenstein might also have accepted mathematical truth; though only when it isn’t conceived as some kind of correspondence with (or reference/relation to) abstract entities in a Platonic world.

So is it possible to make sense of mathematical truth when it’s cashed out exclusively in terms of abiding by grammatical rules?

For a long time truth itself (regardless of mathematics) has been cashed out in many ways other than in terms of correspondence (i.e., as in the correspondence theory of truth). Thus why should mathematical truth be any different?

The question therefore is this:

Can mathematical truth be cashed out solely in terms of following grammatical rules?

What is it, then, to follow a rule?

Is it to conform to a norm and/or to a practice?

Use and Mention: “2 + 2 = 4” vs 2 + 2 = 4

In what follows the philosophical and/or semantic distinction between use and mention needs to be highlighted. In this case, the distinction is between the linguistic expression “2 + 2 = 4” and 2 + 2 = 4. Admittedly, sometimes it’s hard to distinguish the two (I had problems in the last section) — at least in the following context and if one takes a broadly Wittgensteinian position.

In very simple terms, the distinction can be shown when it comes to the word “cat”:

Use: “This cat is very aloof.”
Mention: “The word ‘cat’ is derived from...”

The first sentence refers to the animal called a “cat”: it uses the word “cat” to refer to that animal. The second statement is about the word “cat”.

More relevantly:

Mention: “2 + 2 = 4” — a linguistic expression
Use: 2 + 2 = 4 — an (abstract) equation

(Note that the mention example above uses numbers within a linguistic expression. To be careful, some philosophers would advise using numerals or number-words like “two” and “four” instead of the symbols “2” and “4”.)
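A rough Python analogue of the distinction may help (a toy illustration of my own, not anything from the literature): quotation plays the role of mention, and evaluation plays the role of use:

```python
# Mention: a linguistic expression -- here, just a string of symbols.
mention = "2 + 2 == 4"

# Use: the expression is actually evaluated, yielding a truth value.
use = (2 + 2 == 4)

# Properties of the expression (the mention) are not properties of what it says:
print(len(mention))  # 10 -- the string contains ten characters
print(use)           # True -- the equation, when used, holds
```

Asking how many characters “2 + 2 == 4” contains is a question about the expression (mention); asking whether 2 + 2 equals 4 is a question the expression is used to answer.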

So now take this statement (a mention):

“The statement ‘2 + 2 equals 4’ is true.”

Isn’t there a rule which tells us that if we add 2 and 2, then the result will be the number 4? That said, there could be a rule which tells us this:

“The statement ‘2 + 2 equals 5’ is true and is perfectly in order.”

That is, when 2 is doubled, an additional number must be added. However, this new rule would simply be parasitical on the rule that 2 + 2 must equal 4 because it does, after all, talk of the addition of a number to the result of doubling the number 2. The rule doesn’t, on the other hand, state that “2 + 2” equals 5: it says that a number needs to be added to the sum of 2 and 2.

So let’s try a purer formulation.

Take the statement (or even rule) that “2 + 2 equals 5” without mentioning any addition of an extra number…

A Dialogue Between a Wittgensteinian and an Opponent

A Wittgensteinian:

“Why can’t 2 plus 2 equal 5? Or, at the least, why can’t I express ‘2 plus 2 equals 5’ as a rule? Indeed you’re simply assuming that the numbers I’m using have the same properties as the numbers that you’re using. Clearly, if I use my numbers in the way that you use your numbers, then evidently my statement ‘2 + 2 = 5’ will be incorrect. However, my numbers aren’t the same as your numbers. Thus in my language game (or system) 2 + 2 does indeed equal 5.”

An anti-Wittgensteinian:

“Then you’re misusing the arithmetical concepts [addition] and [equality].”

A Wittgensteinian:

“Yes, if I use the concepts [equality] and [addition] in the same way in which you use them, then evidently my statement ‘2 + 2 = 5’ will be incorrect. But my concepts [addition] and [equality] aren’t equal to yours. In my language game (or system), they work differently.”

An anti-Wittgensteinian:

“But you’ve just contradicted yourself. You said that your concepts [addition] and [equality] aren’t ‘equal’ to my concepts [addition] and [equality]. But you’ve just used the concept [equality] in the way I use it. You’ve just said that your ‘concepts aren’t equal to’ mine. And with that I agree. Therefore it would follow that the concept [equality] you use in your mathematical system isn’t equal to your use of the concept [equality] when you use it to talk about your own mathematical ‘language game’.”

A Wittgensteinian:

“Yes, you’re right. According to one language game (i.e., mathematics) I use the concept [equality] in one way. And according to another language game (talking about mathematics or metamathematics), I use the concept — or at least the word — in another way.”

An anti-Wittgensteinian:

“If that’s the case, then how the hell can we have a proper conversation if we’re using the same concepts (actually, words) in different ways? If you can arbitrarily stipulate a meaning of a concept in any way you like, then perhaps we’re not having a genuine conversation at all. We’ll simply be talking at cross-purposes.”

A Wittgensteinian:

“No, because I know that the concept [equality] is always relative to a language game. Therefore if I know what language game the concept belongs to, then I also understand the concept. I understand your use of the word ‘equality’ because I know which language games it belongs to. Therefore I can understand you and we aren’t talking at cross-purposes. All I need to determine is the language game to which the concept or word belongs. And even in my own case, I need to be aware of how I’m using a particular concept or word. I need to know which language game I’m using when conversing with other people. And you too need to be aware of which language game the people you talk to are playing — otherwise you’ll be talking at cross purposes. You may of course believe that you don’t belong to any language game or even that your position is beyond language games. However, such a position would be a meta-language game; which would simply be another language game with highfalutin ambitions.”
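The Wittgensteinian’s opening move (that “my numbers aren’t the same as your numbers”) can be given a toy model in Python (my own illustration, with hypothetical interpretations): the very same string of numerals comes out correct or incorrect depending on which interpretation assigns objects to the numerals:

```python
# "Your" interpretation: numerals name the familiar integers.
yours = {'2': 2, '4': 4, '5': 5}

# "My" (hypothetical) interpretation: the numeral '5' names what you call four.
mine = {'2': 2, '4': 5, '5': 4}

def holds(numeral_a, numeral_b, numeral_sum, interpretation):
    # "a + b = c" holds iff the objects the numerals name stand in the
    # addition relation under the given interpretation.
    i = interpretation
    return i[numeral_a] + i[numeral_b] == i[numeral_sum]

print(holds('2', '2', '5', yours))  # False: 2 + 2 is not 5
print(holds('2', '2', '5', mine))   # True: under "my" interpretation it is
```

Whether this really models a different “language game”, or merely a relabelling within one and the same arithmetic, is of course exactly what the anti-Wittgensteinian disputes.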

(*) See my ‘When Alan Turing and Ludwig Wittgenstein Discussed the Liar Paradox’.
