Monday, 10 October 2016

Material Logic vs. Formal Logic?




In a purely logical argument, even if the premises aren’t in any way (semantically) connected to the conclusion, the argument may still be both valid and sound.

Professor Edwin D. Mares displays what he sees as a problem with purely formal logic when he offers us the following example of a valid argument:

The sky is blue.

------------------------------------

∴ there is no integer n greater than or equal to 3 such that for any non-zero integers x, y, z: xⁿ = yⁿ + zⁿ.

Edwin Mares says that the above “is valid, in fact sound, on the classical logician’s definition”. Strictly speaking, it’s the argument that is valid; the premise and conclusion are simply true; and the argument is sound because it’s valid and its premise is true. In more detail, the

“premise cannot be true in any possible circumstance in which the conclusion is false”.

Clearly the content of the premise isn’t semantically — or otherwise — connected to the content of the conclusion. However, the argument is still valid and sound.

That said, it’s not clear from Edwin Mares’ symbolic expression above if he meant this: “If P, then Q. P. Therefore Q.” That is, perhaps the premise “The sky is blue” with a line under it, followed by the mathematical statement, is used as symbolic shorthand for an example of modus ponens which doesn’t have a semantic connection between P and Q. In other words, Mares’ “P, therefore Q” isn’t (really) an argument at all. However, if both P and Q are true, then, logically, they can exist together without any semantic connection and without needing to be read as shorthand for an example of modus ponens.

Whatever the case is, what’s the point of the “The sky is blue” example above?

Perhaps no logician would state it for real. He would only do so, as Mares himself does, to prove a point about logical validity. So why is it valid? Because the conclusion is a mathematical truth, it’s true in every possible circumstance; there is therefore no possible circumstance in which the premise is true and the conclusion false. On the classical definition, the argument is (vacuously) valid.

Perhaps showing the bare bones of the “The sky is blue” example will help. Thus:

P
∴ Q

Does that look any better? Even though we aren’t given any semantic content, both P and Q must have a truth-value. (In this case, both P and Q are true.) It is saying: P is true. Therefore Q is true. The above isn’t saying: Q is a consequence of P. (Or: P entails Q.) Basically, we’re being told that two true and unrelated statements can (as it were) exist together — as long as they don’t contradict each other. (Or, on the aforementioned alternative reading: “If P is true, then Q is true. P is true. Therefore Q is true.”)
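The classical definition can even be checked mechanically. Here is a minimal Python sketch (the function valid, and the use of a tautology to stand in for Mares’ necessarily true mathematical conclusion, are my own illustrative assumptions; truth tables only capture propositional necessity). It shows that the bare form P ∴ Q is formally invalid, while any argument whose conclusion can’t be false comes out valid whatever its premise:

```python
from itertools import product

def valid(premises, conclusion, atoms):
    """Classically valid iff no assignment of truth-values makes
    every premise true and the conclusion false."""
    for row in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, row))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

# The bare form 'P, therefore Q' is formally invalid:
# the row P=True, Q=False is a countermodel.
print(valid([lambda v: v["P"]], lambda v: v["Q"], ["P", "Q"]))  # False

# But when the conclusion can't be false (a tautology here stands in
# for Mares' necessarily true mathematical conclusion), any premise
# whatever yields a (vacuously) valid argument.
print(valid([lambda v: v["P"]],
            lambda v: v["Q"] or not v["Q"], ["P", "Q"]))        # True
```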

So there are cases in which the premises of an argument are all true, and the conclusion is also true; and yet as Professor Stephen Read puts it:

“[T]here is an obvious sense in which the truth of the premises does not guarantee that of the conclusion.”

Ordinarily the truth of the premises is meant to “guarantee” the truth of the conclusion. So let’s look at Read’s own example:

i) All cats are animals
ii) Some animals have tails
iii) Therefore some cats have tails.

Clearly, premises i) and ii) are true. Indeed iii) is also true. (Not all cats have tails. And, on the standard logical reading, “some” is compatible with “all”: “some cats have tails” would remain true even if all cats had tails.)

So why is the argument above invalid?

It’s invalid not because of the assigned truth-values of the premises and the conclusion; but for another reason. The reason is that the sets used in the argument are (as it were) mixed up. Thus we have the distinct sets [animals], [cats] and [animals which have tails].

It doesn’t logically follow from “some animals have tails” that “some cats have tails”. Even if some animals have tails, it might have been the case that cats are animals without tails. Thus iii) doesn’t necessarily follow from ii). (iii) doesn’t follow from i) either.) ii) can be taken as an existential quantification over animals; iii), on the other hand, is an existential quantification over cats. Thus:

ii) (∃x)(Ax & Tx)
iii) (∃x)(Cx & Tx)

(Here Ax = x is an animal, Cx = x is a cat, and Tx = x has a tail.)

Clearly, Ax and Cx are quantifications over different sets. It doesn’t follow, then, that what’s true of animals is also generally true of cats; even though cats are members of the set [animals]. Thus iii) doesn’t follow from ii).
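The point can be exhibited with a countermodel. In this small Python sketch (the individuals felix, tibbles and rex are invented for illustration), both premises come out true while the conclusion comes out false:

```python
# A countermodel with a tiny domain (the names are invented):
# every cat is an animal and some animal has a tail, yet no cat does.
animals = {"felix", "tibbles", "rex"}
cats    = {"felix", "tibbles"}   # two (tailless) Manx cats
tailed  = {"rex"}                # only the dog has a tail

all_cats_are_animals    = cats <= animals                    # True
some_animals_have_tails = any(x in tailed for x in animals)  # True
some_cats_have_tails    = any(x in tailed for x in cats)     # False

# True premises, false conclusion: the argument-form is invalid.
print(all_cats_are_animals, some_animals_have_tails, some_cats_have_tails)
```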

To repeat: even though the premises and the conclusion are all true, the above still isn’t a valid argument. Read himself helps to show this by displaying an argument-form with mutually-exclusive sets — namely, [cats] and [dogs]. Thus:

i) All cats are animals
ii) Some animals are dogs
iii) Therefore some cats are dogs.

This time, however, the conclusion is false; whereas i) and ii) are true. It’s the case that the subset [dogs] belongs to the set [animals]. Some animals are indeed dogs. However, because some animals are dogs, it doesn’t follow that “some cats are dogs”. In other words, because dogs are members of the set [animals], that doesn’t mean that they’re also members of the subclass [cats] simply because cats themselves are also members of the set [animals]. Cats and dogs share animalhood; though they’re different subsets of the set [animal]. In other words, what’s true of dogs isn’t automatically true of cats.

The importance of sets, and their relation to subsets, may be expressed in terms of brackets. Thus:

[animals [cats [cats with tails]]]
not: [animals [cats [dogs]]]

Material Validity and Formal Validity

Stephen Read makes a distinction between formal validity and material validity. He does so by using this example:

i) Iain is a bachelor
ii) So Iain is unmarried.

(One doesn’t usually find an argument with only a single premise.)

The above is materially valid because there’s enough semantic material in i) to make the conclusion acceptable. After all, if x is a bachelor, he must also be unmarried. Despite that, it’s still formally invalid, because its bare form (a is F; therefore a is G) isn’t valid. That is, one can only move from i) to ii) if one already knows that all bachelors are unmarried. We either recognise the shared semantic content or we know that the term “unmarried man” is a synonym of “bachelor”. Thus we have to add semantic content to i) in order to get ii). And it’s because of this that the overall argument is said to be formally invalid. Nonetheless, because of what’s already been said, it is indeed still materially valid.

The material validity of the above can also be shown by its inversion:

i) Iain is unmarried
ii) So Iain is a bachelor.

Read marks this distinction by saying that such an argument’s

“validity depends not on any form it exhibits, but on the content of certain expressions in it”.

Thus, in terms of logical form, it’s invalid. In terms of content (or the expressions used), it’s valid. This means that the following wouldn’t work as either a materially or a formally valid argument:

i) Iain is a bachelor.
ii) So Iain is a footballer.

There’s no semantic content in the word “bachelor” that can be directly tied to the content of the word “footballer”. Iain may well be a footballer; but his being a footballer doesn’t follow from his being a bachelor. As it is, the conclusion is false even though the premise is true.

Another way of explaining the material (i.e., not formal) validity of the argument above is in terms of what logicians call a suppressed premise (or a hidden premise). This is more explicit than talk of synonyms or shared content. In this case, what the suppressed premise does is show the semantic connection between i) and ii). The actual suppressed premise for the above is the following:

All bachelors are unmarried.

Thus we should actually have the following argument:

i) Iain is a bachelor.
ii) All bachelors are unmarried.
iii) Therefore Iain is unmarried.
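With the suppressed premise made explicit, the argument instantiates the valid form: a is F; all Fs are Gs; therefore a is G. A brute-force Python sketch (restricted to a one-individual domain, which is all this monadic form needs; the encoding is my own) finds a countermodel for the enthymematic version but none for the completed one:

```python
from itertools import product

# Enumerate every assignment of 'Iain is a bachelor' (F) and
# 'Iain is unmarried' (G) in a one-individual domain.
def countermodels(with_suppressed_premise):
    found = []
    for F, G in product([True, False], repeat=2):
        all_F_are_G = (not F) or G  # 'All Fs are Gs' for one individual
        premises = F and (all_F_are_G if with_suppressed_premise else True)
        if premises and not G:      # premises true, conclusion false
            found.append((F, G))
    return found

print(countermodels(False))  # [(True, False)]: 'a is F, so a is G' can fail
print(countermodels(True))   # []: with 'All Fs are Gs' added, it cannot
```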

It may now be seen more clearly that

i) Iain is unmarried.
ii) So Iain is a bachelor.

doesn’t work formally; though it does work materially.

What about this?

i) All bachelors are unmarried.
ii) So Iain is unmarried.

To state the obvious, this is clearly a bad argument; indeed, it can’t really be said to be a complete argument at all, since a premise is missing. (An argument with a missing, or suppressed, premise is called an enthymeme.) Nonetheless, this too can be seen to have a suppressed (or hidden) premise. Thus:

i) All bachelors are unmarried.
ii) [Suppressed premise: Iain is a bachelor.]
iii) So Iain is unmarried.

Now let’s take the classic case of modus ponens:

A, if A then B / Therefore B

That means:

A is the case; and if A is the case, then B is the case. Therefore B must also be the case.

The obvious question here is: What connects A to B (or B to A)? In terms of this debate, is the connection material or formal? Clearly, if the content of both A and B isn’t given, then it’s impossible to answer this question.

We can treat the example of modus ponens above as having the aforesaid suppressed premise. Thus:

i) [Suppressed premise: Britain’s leading politician is the Prime Minister.]
ii) Boris Johnson is Britain’s leading politician.
iii) Therefore Boris Johnson is Britain’s Prime Minister.

In this instance, premises and conclusion are true. Yet i) is only contingently (i.e., not necessarily) connected to ii) and iii).

Finally, Stephen Read puts the formalist position on logic very clearly when he states the following:

“Logic is now seen — now redefined — as the study of formal consequence, those validities resulting not from the matter and content of the constituent expressions, but from the formal structure.”

We can now ask:

What is the point of a logic without material or semantic content?

If logic were purely formal, then wouldn’t all the premise and predicate symbols — not the logical symbols — simply be autonyms? (That is, all the p’s, q’s, x’s, F’s, G’s, etc. would be purely self-referential.) So what would be left of logic if that were the case? Could we still say that logic is about argumentation? Perhaps we could. The fact is that we can still learn about argumentation from schemas (or argument-forms) which are purely formal in nature. And that basically means that the dots don’t always — or necessarily — need to be filled in.


Thursday, 6 October 2016

'and' and 'tonk'




'and' and Analytic Validity

In order to understand A. N. Prior's use of the neologism 'tonk', we first need to understand the way in which he takes the connective of conjunction – namely, 'and'.

Prior makes the counterintuitive claim that “any statement whatever may be inferred, in an analytically valid way, from any other” (130). Prima facie, that raises a question: 


Does that mean that any statement with any content can be inferred from any other with any content?

The word 'and' (in this logical sense at least) is understood by seeing its use in statements or propositions.

We start off with two propositions, P and Q, which begin as separate entities in this context. Prior argues that we can “infer” P-and-Q from statements P and Q. The symbolism “P-and-Q” (i.e., with hyphens) signifies the conjunction; whereas “P and Q” (i.e., without hyphens) signifies two statements taken separately. Thus, from any P on its own and any Q on its own, we can infer P-and-Q. In other words, statements P and Q can be joined together to form a compound statement.

We can now raise two questions. One, do the truth-values of both P and Q matter at this juncture? Two, do the contents of both P and Q matter at this point?

In basic logic, the answer to both questions is 'no'. It's primarily because of this that some of the counterintuitive elements of this account become apparent.

For example, Prior says that “for any pair of statements P and Q, there is always a statement R such that given P and given Q we can infer R” (129). The important word to note here is “any” (as in “for any pair of statements P and Q”). This leads to the conclusion (just mentioned) that the truth-values and/or contents of both P and Q don't matter within the logical context of defining the connective 'and'. It's partly because of this that Prior tells us that “we can infer R” from P and Q. Thus:

(P) The sun is in the solar system.
(Q) Peas are green.
(P & Q) Therefore the sun is in the solar system and peas are green.
(R/Q) Therefore peas are green.

All those statements are true; yet they have unconnected contents, and the conclusion doesn't follow from the content(!) of the premises. Similarly with two false premises. Thus:

(P) The sun is in the bathroom.
(Q) Peas are blue.
(P & Q) Therefore the sun is in the bathroom and peas are blue.
(R/Q) Therefore peas are blue.

It's because of this irrelevance of contents and truth-values that R will follow from any P and any Q.

Thus it's no surprise that Prior also says that “given R we can infer P and can also infer Q”. As in this example:

(R/P) Peas are green
(P and Q) Therefore peas are green and the sun is in the solar system.

The difference here is that a compound statement (i.e., P-and-Q) is derived from an atomic statement. (Except that, in this instance, the labels are the wrong way around: 'Peas are green' should be labelled P, and the compound conclusion should be labelled R.) Nonetheless, contents and truth-values still don't matter. Another way of putting this is (as in the argument-form above) that premises and conclusions can change places without making a difference.
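Prior's rules for 'and' can be put into a toy Python sketch (the tuple encoding and the function names and_intro and and_elim are my own illustrative choices). Note that the rules consult only the shape of the statements, never their contents or truth-values — which is exactly the point made above:

```python
# Prior's two rules for 'and', as shape-sensitive inference steps.
# The tuple encoding and function names are illustrative choices.

def and_intro(p, q):
    """From P and from Q (both given), infer the compound P-and-Q."""
    return ("and", p, q)

def and_elim(compound, side):
    """From P-and-Q, infer P (side=1) or Q (side=2)."""
    assert compound[0] == "and"
    return compound[side]

# The rules never look at contents or truth-values, only at shape:
conj = and_intro("peas are green", "the sun is in the solar system")
print(and_elim(conj, 1))  # peas are green
print(and_elim(conj, 2))  # the sun is in the solar system
```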

'tonk'

If we still have problems with Prior's 'tonk', that situation arises because we fail to see that the “meaning” of any connective “is completely given by the rules” (130).

Prior gives the following example of this logical phenomenon:

(P) 2 and 2 are 4.
(P-tonk-Q) Therefore, 2 and 2 are 4 tonk 2 and 2 are 5.
(R/Q) Therefore, 2 and 2 are 5.

Clearly the connective 'tonk' is doing something to the preceding '2 and 2 are 4' - but what? Could it be that 'tonk' means add 1 - at least in this instance? That would mean, however, that 'tonk' is the operation of adding 1, which isn't (really?) a connective of any kind.

The new connective 'tonk' works like the connective 'and'. Or as Prior puts it:

“Its meaning is completely given by the rules that (i) from any statement P we can infer any statements formed by joining P to any statement Q by 'tonk'... and that (ii) from any 'tonktive' statement P-tonk-Q we can infer the contained statement Q.” (130)

Thus, at a symbolic level, 'tonk' works like 'and'. And just as Prior symbolised P and Q taken together as P-and-Q; so he takes P and Q taken together with tonk as P-tonk-Q.

In this case, '2 and 2 are 4' (P) is being conjoined with '2 and 2 are 5' (Q). Thus the conclusion, 'therefore, 2 and 2 are 5' (R) follows from '2 and 2 are 5' (Q), though not from '2 and 2 are 4'. In other words, R only needs to follow from either P or Q, not from both. Thus when P and Q are (as it were) tonked, we get: '2 and 2 are 4 tonk 2 and 2 are 5'. And the conclusion is: 'Therefore 2 and 2 are 5.'

To express all this in argument-form, take this example:

(P) Cats have four legs.
(P-tonk-Q) Therefore cats have four legs tonk cats have three legs.
(R/Q) Therefore cats have three legs.

What is 'tonk' doing in the above? It seems to be cancelling out the statement before it (i.e., 'Cats have four legs'). Thus if 'tonk' comes after any P in any compound statement, then Q will cancel out P. If that appears odd (especially with the seeming contradiction), that's simply because, as Prior puts it, “there is simply nothing more to knowing the meaning of ['tonk'] than being able to perform these inferences” (129). In this case, we first state P, then P-tonk-Q (in which Q cancels out P), and from that we conclude R.

Nuel D. Belnap helps us understand what's happening here by offering us different symbols and a different scheme. Instead of the argument-form above (which includes P, Q and R), we have the following:

i) A ⊢ A-tonk-B
ii) A-tonk-B ⊢ B
iii) A ⊢ B

Quite simply, one can deduce A-tonk-B from A. Then one can deduce B from A-tonk-B. Finally, this means that one can derive B from A.
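The same toy treatment, with tonk's rules swapped in (again, the encoding is mine), makes the trouble vivid: from any statement whatever, any statement whatever can be derived.

```python
# Prior's two tonk rules:
#   (i)  tonk-introduction: from A, infer A-tonk-B, for ANY B
#   (ii) tonk-elimination:  from A-tonk-B, infer B

def tonk_intro(a, b):
    return ("tonk", a, b)

def tonk_elim(compound):
    assert compound[0] == "tonk"
    return compound[2]

# Belnap's schema i)-iii): any conclusion from any premise.
a = "cats have four legs"
b = "cats have three legs"
step_1 = tonk_intro(a, b)   # i)  A ⊢ A-tonk-B
step_2 = tonk_elim(step_1)  # ii) A-tonk-B ⊢ B
print(step_2)               # iii) A ⊢ B: cats have three legs
```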

In our example, by a simple rule of inference, one can derive 'Cats have four legs tonk cats have three legs' (A-tonk-B) from 'Cats have four legs' (A). And then one can derive 'Cats have three legs' (B) from 'Cats have four legs tonk cats have three legs' (A-tonk-B). Finally, one can derive 'Cats have three legs' (B) from 'Cats have four legs' (A).

Belnap claims that an arbitrary creation of a connective (through implicit definition) can result in a contradiction. Thus, the symbol '?' in the following

a/b ? c/d = (a + c)/(b + d)

could result in: 2/3 = 3/5. (For instance, 1/2 ? 1/1 = 2/3, while 2/4 ? 1/1 = 3/5, even though 1/2 and 2/4 are the same number.)
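That reconstruction can be checked directly in Python (the function name question_mark is mine):

```python
from fractions import Fraction

# The '?' operation on numeral pairs: a/b ? c/d = (a+c)/(b+d).
def question_mark(ab, cd):
    (a, b), (c, d) = ab, cd
    return Fraction(a + c, b + d)

# 1/2 and 2/4 name the same number...
print(Fraction(1, 2) == Fraction(2, 4))  # True
# ...yet '?' gives them different results, so the 'definition'
# clashes with the arithmetic already in place:
print(question_mark((1, 2), (1, 1)))     # 2/3
print(question_mark((2, 4), (1, 1)))     # 3/5
```

Since 1/2 and 2/4 are the same number but receive different values, the stipulated 'definition' is inconsistent with the arithmetic we already have.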

However, doesn't Prior's 'Therefore 2 and 2 are 4 tonk 2 and 2 are 5' also contain a contradiction? Prior seems to be stressing the point that in the definitions of connectives, even counterintuitive ones, such contradictions are to be expected. Isn't that the point? 


References

Belnap, Nuel. (1962) 'Tonk, Plonk and Plink', Analysis, 22.
Prior, A. N. (1960) 'The Runabout Inference Ticket', Analysis, 21.


Thursday, 15 September 2016

Kenan Malik's Extended Mind




This is a commentary on the ‘Extended Mind’ chapter of Kenan Malik’s book Man, Beast and Zombie.

Kenan Malik offers us a basic argument; which I’ve simplified using his own words:

i) The “human mind is structured by language”.
ii) “Language is public.”
iii) Therefore: “The mind is itself public.”

Malik, quoting Hilary Putnam, characterises “computational theory” as one that

“suggests that everything that is necessary for the use of language is stored in each individual mind”.

Here we must make a distinction between necessary and sufficient conditions “for the use of language”. It may indeed be the case that “everything that is necessary for the use of language is stored in each individual mind”. Yet it may also be the case that such things aren’t sufficient for the use of a language. In other words, the mechanics for language use are (as it were) internal; though what follows from that is not. And what follows from the (brain and computational) mechanics of language is, of course, the use of language itself (i.e., in “everyday contexts”).


Thus Malik’s quote from the American philosopher Hilary Putnam (1926–2016) (that “‘no actual language works like that [because] language is a form of cooperative activity, not an essentially individualistic activity’”) may not be to the point here. Indeed I find it hard to see what a non-cooperative and individualistic language would be like — even in principle. That must surely imply that Malik (if not Putnam) has mischaracterised Fodor’s position. Another way to put this is to say that Jerry Fodor (1935–2017) was as much an anti-Cartesian and Wittgensteinian as anyone else. The Language of Thought idea and “computational theory” generally aren’t entirely individualist (i.e., in the philosophy of mind sense) when we take them beyond their physical and non-conscious reality. How could they be?

There’s an analogy here between this and the relation between DNA and its phenotypes (as understood in very basic terms). Clearly DNA is required for phenotypes. However, DNA and phenotypes aren’t the same thing. In addition, environments (not only DNA) also determine the nature of the phenotypic expression.

As I suggested earlier, Malik’s position points to a debate which has involved Jerry Fodor, Hilary Putnam and Noam Chomsky.

Malik rejects Fodor’s internalism (or individualism), as already stated: Fodor believed that something must predate language use. So let Fodor explain his own position. Thus:

“My view is that you can’t learn a language unless you already know one.”

Fodor means something very specific by the clause “unless you already know one”. As he put it:

“It isn’t that you can’t learn a language unless you’ve already learned one. The latter claim leads to infinite regress, but the former doesn’t.”

In other words, the Language of Thought isn’t learned. It’s genetically passed on from previous generations. It’s built into the brains of new-born Homo sapiens babies.

Hilary Putnam gives a more technical exposition of Fodor’s position. He wrote:

“[Fodor] contends that such a computer, if it ‘learns’ at all, must have an innate ‘programme’ for making generalisations in its built-in computer language.”

Secondly, Putnam tackled Fodor’s rationalist — or even Platonic — position, in which Fodor argued for innate concepts. Putnam continued:

“[Fodor] concludes that every predicate that a brain could learn to use must have a translation into the computer language of that brain. So no ‘new’ concepts can be acquired: all concepts are innate.”

Meanings Ain’t in the Head

Because Malik argues that reference to natural phenomena is an externalist (as well as sometimes scientific) affair, it may follow that non-scientific individuals may not know the full meanings of the words or concepts they use. As Putnam famously put it: “Meanings just ain’t in the head.”

Malik gives the example of the words “ash” and “elm”. Ash and elm trees are natural phenomena. In addition, their nature is determined — and perhaps defined — by their underlying scientific nature. In other words, the reference-relation isn’t determined by the appearances of elm and ash trees. This results in a seemingly counterintuitive conclusion. Malik writes:

“Many Westerners have a distinct representation of ‘ash’ and ‘elm’ in their heads, but they have no idea how to distinguish ash and elm in the real world.”

I said earlier that references to ash and elm trees can’t be fully determined by appearances. However, the trees can be distinguished by their appearances. But that distinction wouldn’t be enough to determine a reference-relation. The scientific nature of ash and elm trees must also be taken into account. Thus when it comes to the reference-relation to what philosophers call “natural kinds” and other natural phenomena, the

“knowledge of gardeners, botanists, of molecular biologists, and so on, all play a crucial role in helping me refer to [in this instance] a rose, even though I do not possess their knowledge”.

Malik backs up his anti-individualistic theory of language and mind by offering an account of reference which owes much to Kripke and Putnam — certainly to Putnam.

Prima facie, it may seem that reference is at least partly individualistic (or internalist). That is, what determines the reference of our words is some kind of relation between a word (as it exists in the mind) and that which it refers to (or represents). Malik denies that this is the whole story: reference isn’t only a matter of the individual mind and the object-of-reference.

Malik, instead, offers what can be seen as a scientific account of reference.

Take his example of the (as he puts it) “mental representation” of DNA. (Does Malik mean the word/initials “DNA” here?) The reference-relation between “DNA” and DNA isn’t only a question of what goes on in a mind (or in collective minds). Indeed

“your mental representation of DNA (or mine) is insufficient to ‘hook on to’ DNA as an object in the world”.

There’s not enough (as it were) meat to make a sufficient reference-relation between “DNA” and DNA in individual minds alone. Instead the scientific nature of DNA determines reference for all of us — even if we don’t know the science.

Malik quotes Putnam again here.

The reference of the word/initials “DNA” is

“socially fixed and not determined simply by conditions of the brain of an individual”.

Of course something that’s scientifically fixed is also “socially fixed”. DNA may be a natural phenomenon; though the fixing of the reference between the word “DNA” to DNA is a social and scientific matter.

References

Fodor, Jerry. (1975) ‘How There Could Be a Private Language and What It Must Be Like’, in (1992) The Philosophy of Mind: Classical Problems, Contemporary Issues.

Putnam, Hilary. (1980) ‘What Is Innate and Why: Comments on the Debate’, in (1992) The Philosophy of Mind: Classical Problems, Contemporary Issues.


Wednesday, 20 April 2016

Scraps of Kant (1)



(i) The Unexperienced Soul 
(ii) The Unperceived Tree 
(iii) The Antinomies and Experience 
(iv) Freedom and Causal Necessity 
(v) Knowledge of Things-in-Themselves

The Unexperienced Soul

Immanuel Kant (1724–1804) was quite at one with David Hume (1711–1776) in that he too believed that we never actually have any experience of the self. (In Kant’s own terms, the “soul”, or the “substance of our thinking being”.) This is because the self is the means through which we experience. Thus, it can’t also be an object of experience.

Of course, we can experience the (to use Kant’s own term) “cognitions” of the soul. However, we can’t experience the soul which has (or carries out) the cognitions. Like all other substances, the “substantial itself remains unknown”. We can, however, be aware of “the actuality of the soul” through the “appearances of the internal sense”. This is part of Kant’s defence (though not proof) of the soul.

Again, none of this has anything to do with any actual experience of the soul.

The Unperceived Tree

Immanuel Kant’s following argument is very much like Bishop (George) Berkeley’s.

According to Kant, when we imagine an unperceived tree, we are, in fact, imagining it as it is perceived — even if supposedly perceived by some kind of disembodied mind. As Kant put it, in such cases we represent “to ourselves that experience actually exists apart from experience or prior to it”. Thus, when we imagine the objects of the senses existing in an ostensibly “self-subsisting” manner, we are, in fact, imagining them as they would be when experienced. That isn’t surprising, because there’s no other way to imagine the objects of the senses.

Thus, Kant argued that we are not imagining things-in-themselves.

Space and time, on the other hand, are “modes” through which we represent the external objects of the senses. As Bertrand Russell put Kant’s position, we wear spatial and temporal glasses through which we perceive the world. Thus, if we take the glasses off, space and time simply disappear. They have no actuality apart from our minds.

Appearances must be given to us in the containers we call space and time. Space and time are the vehicles of our experiences of the objects of the senses. In a sense, then, it seems like a truism to say that “objects of the senses therefore exist only in experience”. That’s because there are no experiences without the senses, and our senses themselves help determine those experiences.

The Antinomies and Experience

What are the antinomies?

They’re two opposing positions of philosophical dispute which have (to use Kant’s own words) “equally clear, evident, and irresistible proof”. Stated in another way: a proposition and its negation are both equally believable (or acceptable) in terms of rational inquiry.

Kant gives examples of such arguments with two equally weighty sides. One is whether the world had a beginning or has existed for eternity. The other is whether “matter is infinitely divisible or consists of simple parts”.

What unites these arguments is that neither of them can be solved with the help of experience. In one sense, then, this is an argument about the limits of empiricism. That said, according to the empiricism of, say, the logical positivists, these antinomies were considered to be “meaningless” precisely because they can’t be settled (or solved) by consulting experience.

As Kant himself put it, such “concepts cannot be given in any experience”. To Kant, it followed that such issues (or problems) are transcendent to us.

Kant went into further detail about experience-transcendent (or evidence-transcendent) facts or possibilities.

Through experience, we can’t know whether or not the “world” (i.e., the universe) is infinite or finite in magnitude. Similarly, infinite time can’t “be contained in experience”. Kant also wrote about the intelligibility of talking about space beyond universal space, or time before universal time. If there were a time before time, then it wouldn’t actually be a time before time because time — according to Kant — is continuous. And if there were a space beyond universal space, then it wouldn’t be beyond universal space because there can be no space beyond space itself.

Kant also questioned the validity of the notion of “empty time”. That is, time without space, and time without objects in space. Kant, instead, argued that time, space and objects-in-space are all interconnected.

So perhaps Kant believed (like Leibniz before him) that time wouldn’t pass without objects to (as it were) measure the “movement” (through disintegration and growth) of time. Similarly, on Kant’s cosmology, space without time would be nonsensical.

Freedom and Causal Necessity

“[I]f natural necessity is referred merely to appearances and freedom merely to things in themselves [...].”

This position (again) unites Kant with Hume, who also believed that necessity is something that we (as it were) impose on the world.

Necessity only belongs to appearances, not to things-in-themselves. This can also be deemed a forerunner of the logical positivist idea that necessity is a result (or product) of our conventions, not of the world itself. Of course, just as conventions belong to minds, so too do appearances.

In Kant’s view, freedom (i.e., independence from causal necessity) is only found in things-in-themselves. And the substance of the mind is itself a thing-in-itself. Therefore, the mind too is free from causal necessitation. The only things that are subject to causal necessitation are the objects of experience. Things-in-themselves are free.

Thus, Kant believed that he managed to solve a very difficult problem: the problem of determinism. In Kant’s picture, “nature and freedom” can exist together. Nature is not free. However, things-in-themselves (including the mind’s substance) are free. These different things can “be attributed to the very same thing”. That is, human beings are beings of experience, and also beings-in-themselves. The experiential side of human nature is therefore subject to causal laws, whereas the mind (or “soul”) transcends causal necessitation. We are, therefore, partly free, and partly unfree.

Kant has a particular way of expressing what he calls “the causality of reason”. Because reason is free, its cognitions and acts of will can be seen as examples of “first beginnings”. A single cognition or act of will is a “first cause”. In other words, it’s not one of the links in a causal chain. If it were a link in such a possibly infinite causal chain, then there would be no true freedom.

First beginnings guarantee persons freedom of the will and self-generated (or self-caused) cognitions. (In contemporary literature, such first beginnings are called “originations”.)

What does it mean to say that something just happens ex nihilo?

Would such originations therefore be arbitrary or even chaotic — sudden jolts in the dark of our minds? Wouldn’t they also be like the quantum fluctuations in which particles suddenly appear out of the void?

Why would such things guarantee us freedom, rather than making us the victims of chance or randomness?

Knowledge of Things-in-Themselves

Kant argued that we can’t know anything about things-in-themselves (in German, Ding an sich), yet he also argued that “we are not at liberty to abstain entirely from inquiring into them”.

So can we have knowledge of things-in-themselves or not?

Perhaps Kant meant that although we can indeed inquire into things-in-themselves, nevertheless it will be a fruitless endeavour. Or perhaps Kant’s point was psychological: we have a psychological need to inquire because “experience never satisfies reason fully”. Alternatively, although our inquiries into things-in-themselves won’t give us knowledge, we can still offer conjectures or suppositions about them. That is, we can speculate about the true nature of things-in-themselves, even though we’ll never have knowledge (in the strict sense) of them. [Schopenhauer criticised this stance. See here.]

In Kant’s scheme, then, there are questions that will press upon us despite the fact that answers to them may never be forthcoming. Kant, again, cites his earlier examples of evidence- or experience-transcendent issues such as “the duration and magnitude of the world, of freedom or of natural necessity”. However, experience (alone?) lets us down on these issues. Reason, on the other hand, shows us “the insufficiency of all physical modes of explanation”.

Can reason truly offer us more?

Again, Kant tells us that we can’t be satisfied by the appearances. The

“chain of appearances [...] has [...] no subsistence by itself [...] and consequently must point to that which contains the basis of these appearances”.

Of course, it’s reason itself which will “hope to satisfy its desire for completeness”. However, reason can’t satisfy our yearnings by giving us knowledge of things-in-themselves. So “we can never cognise these beings of understanding”, but “must assume them”. However, it is reason that “connects them” with the sensible world (and vice versa). It must follow, therefore, that although “we can never cognise these beings of understanding”, there must be some alternative way (or ways) of understanding them.


*) Many of the quoted passages above are taken from Kant’s Prolegomena to Any Future Metaphysics.