Thursday 6 October 2016

'and' and 'tonk'




'and' and Analytic Validity

In order to understand A. N. Prior's use of the neologism 'tonk', we first need to understand the way in which he takes the connective of conjunction, namely 'and'.

Prior makes the counterintuitive claim that “any statement whatever may be inferred, in an analytically valid way, from any other” (130). Prima facie, that raises a question: does that mean that any statement with any content can be inferred from any other statement with any content?

The word 'and' (in this logical sense at least) is understood by seeing its use in statements or propositions.

We start off with two propositions, P and Q, which begin as separate entities in this context. Prior argues that we can “infer” P-and-Q from statements P and Q. The symbolism “P-and-Q” (i.e., with hyphens) signifies the conjunction, whereas “P and Q” (i.e., without hyphens) signifies two statements taken separately. And we can infer P-and-Q from any P and Q. That is, from P on its own, and Q on its own, we can infer P-and-Q. In other words, statements P and Q can be joined together to form a compound statement.
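
To make these two rules concrete, here is a minimal sketch in Lean 4 (my own illustration, not Prior's notation) in which the introduction and elimination rules for 'and' are stated and checked:

-- Rule 1: from P and Q taken separately, infer the conjunction P-and-Q.
example (P Q : Prop) (hp : P) (hq : Q) : P ∧ Q := And.intro hp hq
-- Rule 2: from the conjunction P-and-Q, infer each conjunct on its own.
example (P Q : Prop) (h : P ∧ Q) : P := h.left
example (P Q : Prop) (h : P ∧ Q) : Q := h.right

Note that nothing in these rules mentions the contents or the truth-values of P and Q.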

We can now raise two questions. One, do the truth-values of both P and Q matter at this juncture? Two, do the contents of both P and Q matter at this point?

In basic logic, the answer to both questions is 'no'. It's primarily because of this that some of the counterintuitive elements of this account become apparent.

For example, Prior says that “for any pair of statements P and Q, there is always a statement R such that given P and given Q we can infer R” (129). The important word to note here is “any” (as in “for any pair of statements P and Q”). This leads to the conclusion (just mentioned) that the truth-values and/or contents of both P and Q don't matter within the logical context of defining the connective 'and'. It's partly because of this that Prior tells us that “we can infer R” from P and Q. Thus:

(P) The sun is in the solar system.
(Q) Peas are green.
(P & Q) Therefore the sun is in the solar system and peas are green.
(R/Q) Therefore peas are green.

All those statements are true; yet they have unconnected contents, and the conclusion doesn't follow from the content(!) of the premises. Similarly with two false premises. Thus:

(P) The sun is in the bathroom.
(Q) Peas are blue.
(P & Q) Therefore the sun is in the bathroom and peas are blue.
(R/Q) Therefore peas are blue.

It's because of this irrelevance of contents and truth-values that R will follow from any P and any Q.

Thus it's no surprise that Prior also says that “given R we can infer P and can also infer Q”. As in this example:

(R/P) Peas are green.
(P and Q) Therefore peas are green and the sun is in the solar system.

The difference here is that a compound statement (i.e., P-and-Q) is derived from an atomic statement (i.e., R/P). (Strictly, in this instance, what's labelled R should be labelled P, and what's labelled P-and-Q should be labelled R.) Nonetheless, contents and truth-values still don't matter. Another way of putting this is (as in the argument-form above) that premises and conclusions can change places without making a difference.

'tonk'

If we still have problems with Prior's 'tonk', that situation arises because we fail to see that the “meaning” of any connective “is completely given by the rules” (130).

Prior gives the following example of this logical phenomenon:

(P) 2 and 2 are 4.
(P-tonk-Q) Therefore, 2 and 2 are 4 tonk 2 and 2 are 5.
(R/Q) Therefore, 2 and 2 are 5.

Clearly the connective 'tonk' is doing something to the preceding '2 and 2 are 4'. But what? Could it be that 'tonk' means add 1, at least in this instance? That would mean, however, that 'tonk' is the operation of adding 1, which isn't (really?) a connective of any kind.

The new connective 'tonk' works like the connective 'and'. Or as Prior puts it:

“Its meaning is completely given by the rules that (i) from any statement P we can infer any statements formed by joining P to any statement Q by 'tonk'... and that (ii) from any 'tonktive' statement P-tonk-Q we can infer the contained statement Q.” (130)

Thus, at a symbolic level, 'tonk' works like 'and'. And just as Prior symbolised P and Q taken together as P-and-Q, so he symbolises P and Q taken together with tonk as P-tonk-Q.

In this case, '2 and 2 are 4' (P) is being conjoined with '2 and 2 are 5' (Q). Thus the conclusion, 'therefore, 2 and 2 are 5' (R) follows from '2 and 2 are 5' (Q), though not from '2 and 2 are 4'. In other words, R only needs to follow from either P or Q, not from both. Thus when P and Q are (as it were) tonked, we get: '2 and 2 are 4 tonk 2 and 2 are 5'. And the conclusion is: 'Therefore 2 and 2 are 5.'
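
As an aside (this is my own illustration, not Prior's or Belnap's notation), tonk's two rules can be written down in Lean 4. Tellingly, no actual connective can be defined that satisfies both rules, so they can only be postulated as axioms:

-- A hypothetical connective 'tonk', with Prior's two rules taken as axioms.
axiom Tonk : Prop → Prop → Prop
-- Rule (i): from any statement P, infer P-tonk-Q (for any Q whatever).
axiom tonk_intro : ∀ {P Q : Prop}, P → Tonk P Q
-- Rule (ii): from any 'tonktive' statement P-tonk-Q, infer Q.
axiom tonk_elim : ∀ {P Q : Prop}, Tonk P Q → Q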

To express all this in argument-form, take this example:

(P) Cats have four legs.
(P-tonk-Q) Therefore cats have four legs tonk cats have three legs.
(R/Q) Therefore cats have three legs.

What is 'tonk' doing in the above? It seems to be cancelling out the statement before (i.e., 'Cats have four legs'). Thus if 'tonk' comes after any P in any compound statement, then Q will cancel out P. If that appears odd (especially with the seeming contradiction), that's simply because, as Prior puts it, “there is simply nothing more to knowing the meaning of ['tonk'] than being able to perform these inferences” (129). In this case, we first state P, and then P-tonk-Q (in which Q cancels out P), from which we conclude R.

Nuel D. Belnap helps us understand what's happening here by offering us different symbols and a different scheme. Instead of the argument-form above (which includes P, Q and R), we have the following:

i) A ⊢ A-tonk-B
ii) A-tonk-B ⊢ B
iii) A ⊢ B

Quite simply, one can deduce A-tonk-B from A. Then one can deduce B from A-tonk-B. Finally, this means that one can derive B from A.
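
Continuing the Lean 4 sketch above (again, my own illustration), Belnap's three steps collapse into a one-line theorem: with tonk's two rules in place, any statement B is derivable from any statement A.

-- From A, rule (i) gives A-tonk-B; rule (ii) then gives B.
theorem anything_from_anything {A B : Prop} (ha : A) : B :=
  tonk_elim (tonk_intro ha : Tonk A B)

This is just Prior's point made mechanical: the rules alone fix what 'tonk' means, and what they fix trivialises inference.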

In our example, by a simple rule of inference, one can derive 'Cats have four legs tonk cats have three legs' (A-tonk-B) from 'Cats have four legs' (A). And then one can derive 'Cats have three legs' (B) from 'Cats have four legs tonk cats have three legs' (A-tonk-B). Finally, one can derive 'Cats have three legs' (B) from 'Cats have four legs' (A).

Belnap claims that an arbitrary creation of a connective (through implicit definition) could or can result in a contradiction. Thus, the symbol '?' in the following

a/b ? c/d = (a + c)/(b + d)

could result in:

2/3 = 3/5
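
How so? Since 1/2 and 2/4 are the same fraction, the rule gives 1/2 ? 1/1 = 2/3 but also 2/4 ? 1/1 = 3/5, so the implicit definition forces 2/3 = 3/5. A quick Lean 4 check (once more, my own illustration) makes the ill-definedness visible by treating a fraction as a numerator/denominator pair:

-- The '?' rule on fractions, represented as (numerator, denominator) pairs.
def question (x y : Nat × Nat) : Nat × Nat := (x.1 + y.1, x.2 + y.2)
#eval question (1, 2) (1, 1)  -- (2, 3): 1/2 ? 1/1 yields 2/3
#eval question (2, 4) (1, 1)  -- (3, 5): but 2/4 = 1/2, and 2/4 ? 1/1 yields 3/5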

However, doesn't Prior's 'Therefore 2 and 2 are 4 tonk 2 and 2 are 5' also contain a contradiction? Prior seems to be stressing the point that in the definitions of connectives, even counterintuitive ones, such contradictions are to be expected. Isn't that the point? 


References

Belnap, Nuel. (1962) 'Tonk, Plonk and Plink', Analysis 22 (6).
Prior, A. N. (1960) 'The Runabout Inference Ticket', Analysis 21 (2).


Thursday 15 September 2016

Kenan Malik's Extended Mind




This is a commentary on the ‘Extended Mind’ chapter of Kenan Malik’s book Man, Beast and Zombie. (Read an account of this book here.)

Kenan Malik offers us a basic argument, which I’ve simplified using his own words:

i) The “human mind is structured by language”.
ii) “Language is public.”
iii) Therefore: “The mind itself is public.”

Malik characterises “computational theory” (quoting Hilary Putnam) as one that

“suggests that everything that is necessary for the use of language is stored in each individual mind”.

Here we must make a distinction between necessary and sufficient conditions “for the use of language”. It may indeed be the case that “everything that is necessary for the use of language is stored in each individual mind”. Yet it may also be the case that such things aren’t sufficient for the use of a language. In other words, the mechanics for language use are (as it were) internal; though what follows from that is not. And what follows from the (brain and computational) mechanics of language is, of course, the use of language itself (i.e., in “everyday contexts”).


Thus Malik’s quote from the American philosopher Hilary Putnam (1926–2016) (that “‘no actual language works like that [because] language is a form of cooperative activity, not an essentially individualistic activity’”) may not be to the point here. Indeed I find it hard to see what a non-cooperative and individualistic language would be like — even in principle. That must surely imply that Malik (if not Putnam) has mischaracterised Fodor’s position. Another way to put this is to say that Jerry Fodor (1935–2017) was as much an anti-Cartesian and Wittgensteinian as anyone else. The Language of Thought idea and “computational theory” generally aren’t entirely individualist (i.e., in the philosophy of mind sense) when we take them beyond their physical and non-conscious reality. How could they be?

There’s an analogy here between this and the relation between DNA and its phenotypes (as understood in very basic terms). Clearly DNA is required for phenotypes. However, DNA and phenotypes aren’t the same thing. In addition, environments (not only DNA) also determine the nature of the phenotypic expression.

As I suggested earlier, Malik’s position hints at a debate which has involved Jerry Fodor, Hilary Putnam and Noam Chomsky.

Malik rejects Fodor’s internalism (or individualism), as already stated. Fodor believed that something must predate language use. So let Fodor explain his own position:

“My view is that you can’t learn a language unless you already know one.”

Fodor means something very specific by the clause “unless you already know one”. As he put it:

“It isn’t that you can’t learn a language unless you’ve already learned one. The latter claim leads to infinite regress, but the former doesn’t.”

In other words, the Language of Thought isn’t learned. It’s genetically passed on from previous generations. It’s built into the brains of newborn Homo sapiens babies.

Hilary Putnam gives a more technical exposition of Fodor’s position. He wrote:

“[Fodor] contends that such a computer, if it ‘learns’ at all, must have an innate ‘programme’ for making generalisations in its built-in computer language.”

Putnam then tackled Fodor’s rationalist (or even Platonic) position, in which Fodor argued for innate concepts. Putnam continued:

“[Fodor] concludes that every predicate that a brain could learn to use must have a translation into the computer language of that brain. So no ‘new’ concepts can be acquired: all concepts are innate.”

Meanings Ain’t in the Head

Because Malik argues that references to natural phenomena are an externalist affair (as well as sometimes scientific), it may follow that non-scientific individuals may not know the full meanings of the words or concepts they use. As Putnam famously put it: “Meanings just ain’t in the head.”

Malik gives the example of the words “ash” and “elm”. Ash and elm trees are natural phenomena. In addition, their nature is determined (and perhaps defined) scientifically. In other words, the reference-relation isn’t determined by the appearances of elm and ash trees. This results in a seemingly counterintuitive conclusion. Malik writes:

“Many Westerners have a distinct representation of ‘ash’ and ‘elm’ in their heads, but they have no idea how to distinguish ash and elm in the real world.”

I said earlier that references to ash and elm trees can’t be fully determined by appearances. However, they can be fully distinguished by appearances. But that distinction wouldn’t be enough to determine a reference-relation. The scientific nature of ash and elm trees must also be taken into account. Thus when it comes to the reference-relation to what philosophers call “natural kinds” and other natural phenomena, the

“knowledge of gardeners, botanists, of molecular biologists, and so on, all play a crucial role in helping me refer to [in this instance] a rose, even though I do not possess their knowledge”.

Malik backs up his anti-individualistic theory of language and mind by offering an account of reference which owes much to Kripke and Putnam — certainly to Putnam.

Prima facie, it may seem that reference is at least partly individualistic (or internalist). That is, what determines a word’s reference is some kind of relation between the word (as it is in the mind) and that which it refers to (or represents). Malik’s point, however, is that reference isn’t only a matter of the individual mind and the object-of-reference.

Malik, instead, offers what can be seen as a scientific account of reference.

Take his example of the (as he puts it) “mental representation” of DNA. (Does Malik mean the word/initials “DNA” here?) The reference-relation between “DNA” and DNA isn’t only a question of what goes on in a mind (or in collective minds). Indeed

“your mental representation of DNA (or mine) is insufficient to ‘hook on to’ DNA as an object in the world”.

There’s not enough (as it were) meat to make a sufficient reference-relation between “DNA” and DNA in individual minds alone. Instead the scientific nature of DNA determines reference for all of us — even if we don’t know the science.

Malik quotes Putnam again here.

The reference for the word/initials “DNA” is

“socially fixed and not determined simply by conditions of the brain of an individual”.

Of course something that’s scientifically fixed is also “socially fixed”. DNA may be a natural phenomenon, though the fixing of the reference of the word “DNA” to DNA is a social and scientific matter.

References

Fodor, Jerry. (1975) ‘How There Could Be a Private Language and What It Must Be Like’, in (1992) The Philosophy of Mind: Classical Problems, Contemporary Issues.

Putnam, Hilary. (1980) ‘What Is Innate and Why: Comments on the Debate’, in (1992) The Philosophy of Mind: Classical Problems, Contemporary Issues.


Wednesday 20 April 2016

Scraps of Kant (1)



(i) The Unexperienced Soul 
(ii) The Unperceived Tree 
(iii) The Antinomies and Experience 
(iv) Freedom and Causal Necessity 
(v) Knowledge of Things-in-Themselves

The Unexperienced Soul

Immanuel Kant (1724–1804) was quite at one with David Hume (1711–1776) in that he too believed that we never actually have any experience of the self. (In Kant’s own terms, the “soul”, or the “substance of our thinking being”.) This is because the self is the means through which we experience. Thus, it can’t also be an object of experience.

Of course, we can experience the (to use Kant’s own term) “cognitions” of the soul. However, we can’t experience the soul which has (or carries out) the cognitions. Like all other substances, the “substantial itself remains unknown”. We can, however, be aware of “the actuality of the soul” through the “appearances of the internal sense”. This is part of Kant’s defence (though not proof) of the soul.

Again, none of this has anything to do with any actual experience of the soul.

The Unperceived Tree

Immanuel Kant’s following argument is very much like Bishop (George) Berkeley’s.

According to Kant, when we imagine an unperceived tree, we are, in fact, imagining it as it is perceived — even if supposedly perceived by some kind of disembodied mind. As Kant put it, in such cases we represent “to ourselves that experience actually exists apart from experience or prior to it”. Thus, when we imagine the objects of the senses existing in an ostensibly “self-subsisting” manner, we are, in fact, imagining them as they would be when experienced. That isn’t surprising because there’s no other way to imagine the objects of the senses.

Thus, Kant argued that we are not imagining things-in-themselves.

Space and time, on the other hand, are “modes” through which we represent the external objects of the senses. As Bertrand Russell put Kant’s position, we wear spatial and temporal glasses through which we perceive the world. Thus, if we take the glasses off, space and time would simply disappear. They have no actuality apart from our minds.

Appearances must be given to us in the containers we call space and time. Space and time are the vehicles of our experiences of the objects of the senses. In a sense, then, it seems like a truism to say that “objects of the senses therefore exist only in experience”. That’s because there are few (if any) experiences without the senses, and our senses themselves help determine those experiences.

The Antinomies and Experience

What are the antinomies?

They’re two opposing positions in a philosophical dispute which have (to use Kant’s own words) “equally clear, evident, and irresistible proof”. Stated another way: a proposition and its negation are both equally believable (or acceptable) in terms of rational inquiry.

Kant gives examples of such arguments with two equally weighty sides. One is whether the world had a beginning or has existed for eternity. The other is whether “matter is infinitely divisible or consists of simple parts”.

What unites these arguments is that neither of them can be solved with the help of experience. In one sense, then, this is an argument about the limits of empiricism. That said, according to the empiricism of, say, the logical positivists, these antinomies were considered to be “meaningless” precisely because they can’t be settled (or solved) by consulting experience.

As Kant himself put it, such “concepts cannot be given in any experience”. To Kant, it followed that such issues (or problems) are transcendent to us.

Kant went into further detail about experience-transcendent (or evidence-transcendent) facts or possibilities.

Through experience, we can’t know whether the “world” (i.e., the universe) is infinite or finite in magnitude. Similarly, infinite time can’t “be contained in experience”. Kant also questioned the intelligibility of talking about space beyond universal space, or time before universal time. If there were a time before time, then it wouldn’t actually be a time before time because time — according to Kant — is continuous. And if there were a space beyond universal space, then it wouldn’t be beyond universal space because there can be no space beyond space itself.

Kant also questioned the validity of the notion of “empty time”. That is, time without space, and time without objects in space. Kant, instead, argued that time, space and objects-in-space are all interconnected.

So perhaps Kant believed (like Leibniz before him) that time wouldn’t pass without objects to (as it were) measure the “movement” (through disintegration and growth) of time. Similarly, on Kant’s cosmology, space without time would be nonsensical.

Freedom and Causal Necessity

“[I]f natural necessity is referred merely to appearances and freedom merely to things in themselves […].”

This position (again) unites Kant with Hume, who also believed that necessity is something that we (as it were) impose on the world.

Necessity only belongs to appearances, not to things-in-themselves. This can also be deemed a forerunner of the logical positivist idea that necessity is a result (or product) of our conventions, not of the world itself. Of course, just as conventions belong to minds, so too do appearances.

In Kant’s view, freedom (i.e., independence from causal necessity) is only found in things-in-themselves. Now, the substance of the mind is itself a thing-in-itself. Therefore, the mind too is free from causal necessitation. The only things that are subject to causal necessitation are the objects of experience. Things-in-themselves are free.

Thus, Kant believed that he managed to solve a very difficult problem: the problem of determinism. In Kant’s picture, “nature and freedom” can exist together. Nature is not free. However, things-in-themselves (including the mind’s substance) are free. These different things can “be attributed to the very same thing”. That is, human beings are beings of experience, and also beings-in-themselves. The experiential side of human nature is therefore subject to causal laws, whereas the mind (or “soul”) transcends causal necessitation. We are, therefore, partly free, and partly unfree.

Kant has a particular way of expressing what he calls “the causality of reason”. Because reason is free, its cognitions and acts of will can be seen as examples of “first beginnings”. A single cognition or act of will is a “first cause”. In other words, it’s not one of the links in a causal chain. If it were a link in such a possibly infinite causal chain, then there would be no true freedom.

First beginnings guarantee persons freedom of the will and self-generated (or self-caused) cognitions. (In contemporary literature, such first beginnings are called “originations”.)

What does it mean to say that something just happens ex nihilo?

Would such originations therefore be arbitrary or even chaotic — sudden jolts in the dark of our minds? Wouldn’t they also be like the quantum fluctuations in which particles suddenly appear out of the void?

Why would such things guarantee us freedom, rather than making us the victims of chance or randomness?

Knowledge of Things-in-Themselves

Kant argued that we can’t know anything about things-in-themselves (in the singular German, Ding an sich), yet he also argued that “we are not at liberty to abstain entirely from inquiring into them”.

So can we have knowledge of things-in-themselves or not?

Perhaps Kant meant that although we can indeed inquire into things-in-themselves, nevertheless it will be a fruitless endeavour. Or perhaps Kant’s point was psychological: we have a psychological need to inquire because “experience never satisfies reason fully”. Alternatively, although our inquiries into things-in-themselves won’t give us knowledge, we can still offer conjectures or suppositions about them. That is, we can speculate about the true nature of things-in-themselves, even though we’ll never have knowledge (in the strict sense) of them. [Schopenhauer criticised this stance. See here.]

In Kant’s scheme, then, there are questions that will press upon us despite the fact that answers to them may never be forthcoming. Kant, again, cites his earlier examples of evidence- or experience-transcendent issues such as “the duration and magnitude of the world, of freedom or of natural necessity”. However, experience (alone?) lets us down on these issues. Reason, on the other hand, shows us “the insufficiency of all physical modes of explanation”.

Can reason truly offer us more?

Again, Kant tells us that we can’t be satisfied by the appearances. The

“chain of appearances […] has […] no subsistence by itself […] and consequently must point to that which contains the basis of these appearances”.

Of course, it’s reason itself which will “hope to satisfy its desire for completeness”. However, reason can’t satisfy our yearnings by giving us knowledge of things-in-themselves. So “we can never cognise these beings of understanding”, but “must assume them”. However, it is reason that “connects them” with the sensible world (and vice versa). It must follow, therefore, that although “we can never cognise these beings of understanding”, there must be some alternative way (or ways) of understanding them.


*) Many of the quoted passages above are taken from Kant’s Prolegomena to Any Future Metaphysics.





Saturday 19 March 2016

John Heil on the Mind-Body Problem




Colin McGinn is known for arguing that the problem of consciousness may well be insoluble in principle. He once wrote the following:

“It could turn out that the human mind is constitutionally unable to understand itself.” [756]


We can ask how McGinn (or anyone else for that matter) could know that. Perhaps he's arguing that it could be the case that we are constitutionally incapable of understanding mind and consciousness: not that we actually are.


Take this case. If mind-brains are "formal systems", it may be the case that they couldn't have complete knowledge of themselves. John Heil writes:


“Gödel showed us that formal systems rich enough to generate the truths of elementary arithmetic were, if consistent, in principle incomplete. (A system is incomplete if there are truths expressible in and implied by the system that cannot be proven true in the system.) The incompleteness of mathematics reflects an established fact about the make-up of formal systems generally. Now, imagine that we finite human beings are, as we surely are, constitutionally limited as to the kinds of thought we could entertain. Imagine, further, that our cognitive limitations were such that we could not so much as entertain the deep truth about our own minds.” [756]


Intuitively, the idea that we're constitutionally and cognitively limited in many (or some) ways is easy to accept. And if we accept this, then McGinn’s arguments seem acceptable, if not palatable. But are mind-brains formal systems? Is it right to compare the mind to an arithmetical formal system, as Heil does? (The argument is similar to the ‘what-is-it-like-to-be-a-bat’ (or ‘-an-x’) argument. In that case, we're constitutionally unable to imagine what it is like to be a bat (e.g., to have its sonar abilities).)

It is indeed quite wrong simply to assume that the "deep truth" (or truths) of the mind will someday be available to us (as many scientists may imagine). Heil writes:


“Indeed, we should be hard put to establish in advance that the deep truth about anything at all – including the material world – is cognitively available to us. To think that it must be is to exhibit an unwarranted degree of confidence in our finite capacities, what the ancients called hubris.” [756]


Doesn’t the (C.S.) Peircian notion of the “scientific convergence on the truth” assume that, at some point in the future, we will know everything about both the mind and the world? Don’t many scientists daily display such an example of scientific hubris? However, on the other side of the argument, deep pessimism may also be unwarranted. Heil writes:

“… we cannot positively prove that we are cut off from a deep understanding of mental phenomena.” [756]


Just as many scientists and philosophers display optimism on this issue, so many philosophers (such as McGinn, Nagel, etc.) display a deep pessimism, which is often disguised under the clothing of modesty or humility. Perhaps the problem of consciousness isn't one of insolubilia, but one of incompletability.


Reference