Wednesday, 6 August 2014

Problems With Naturalised Epistemology: Reasons & Causes



Causal conditions in and of themselves can't give us justifications for a belief. They can, however, determine the nature of belief. As far as justification is concerned, we need to make the causal conditions justify our beliefs. That is, we need to say why or how such particular causal conditions have contributed to our true or false beliefs. 

As Donald Davidson said, "causation is not itself under an aspect". It doesn't explain or justify anything. It only does so in conjunction with the epistemic practices of epistemologists who make sense of causal conditions and who also offer us the reasons why particular causal conditions - rather than others - determine the truth of our beliefs rather than their falsehood.

Reliabilists and externalists say that causal mechanisms in the brain hook up with stimuli that produce belief. Though do they hook up with stimuli that cause true belief? What makes the stimulations, causal mechanisms or whatever else cause true belief? Indeed do we even know how they cause any kind of belief - true or false?

The brain scientist can see what’s going on in the brain; though he can’t see the relation between the brain and the mind that makes sense of what goes on in the brain and what the brain gives it. Causes are precisely that - causes. And reasons are reasons. Reasons are in the domain of mind and causes are in the domain of the brain and world. Of course reasons themselves may be dependent on causes; though this wouldn't mean that reasons are nothing over and above causes.

The brain scientist can never see why we believe that Tony Blair is a politician. He can't even see the belief that P. Tony Blair's being (or not being) a politician lies outside the brain. However, the mind - though not the brain - can both register Tony Blair being or not being a politician and make sense of whether he is or is not one.

The brain scientist can tell us what subserves such beliefs (or even what causal mechanisms lead to this belief). However, he can't tell us why or how we come to believe that Tony Blair is a politician. Indeed we would need to tell him what the belief is and where it came from in order for him to tell us which causal mechanisms and brain states subserve such beliefs and reasoning processes. Without such prior knowledge, the brain scientist would simply be mucking about in the brain not quite knowing what it is he's trying to find or explain.

Of course causal conditions in and of themselves can't tell us everything we need to know about knowledge and knowledge-acquisition. How can they? Such conditions need to be explained and interpreted. Not only that: they need to be questioned and criticised. These causal conditions don’t just enforce themselves on the mind of the investigator. And they certainly shouldn’t enforce themselves on the minds of epistemologists. If they did, then there would be no such thing as epistemology. And there would be no such thing as empirical investigation either.

To give a simple example: which causal conditions are we talking about? There is an indefinite number. And if we choose certain causal conditions, the epistemologist can then ask:


i) Why have you chosen to concentrate on these and not the many others?

ii) Why is this causal condition relevant and the one you ignored irrelevant?

iii) How do you know that these causal conditions give us knowledge and the many others don’t?

iv) How do you know that you're looking at them in the right way?

Epistemology is different to science or empirical investigation. However, does even the most naturalistic epistemologist think otherwise? The naturalist simply says that we must rely on - or even defer to - empirical investigation or simply use such findings. How could it be any other way? If this weren't the case, then epistemology would not really have a subject matter to clarify and elaborate upon. The pure Cartesian epistemologist may survive without science. Though what sort of survival would it be? It's a kind of survival that's individualistic or subjectivist. That is, the Cartesian epistemologist relies on himself and himself alone, at least in principle. So what's to stop him making massive mistakes and barking up many wrong trees?

It's the Wittgensteinian argument: if there's no one available to tell him that he has gone wrong, then how does he know that he has gone right? The world is bigger than his own mind, no matter how great and systematic his mind is.


The Causal Relations Between Mind & World




In the Cartesian tradition, the mind's autonomy was of paramount importance. All we need to do, essentially, is get our internal workings functioning and in good order. It doesn't really matter about the external world. Or, more correctly, it doesn't matter until we've got our mental ship in order.


In epistemological externalism (as well as in reliabilism), on the contrary, what happens before the formation of a belief (or before cognitive operations as a whole) is what matters. The question is simple: 

How did I acquire this particular belief? 

So it's the required causal relation with the external world that determines whether or not a belief (or cognitive process) is justified. If it's acceptably justified, then we may have knowledge. We may not even be able to elaborate on the external causal processes that led to the formation of a belief. Though if we're in the right situation (vis-à-vis the world), then such processes, of which we may not even be aware, will themselves somehow justify our beliefs.

Of course it's the case that we require causal relations with the external world when it comes to our perceptual knowledge. That is, in a purely empiricist manner, all we really need are reliable causal contacts with the world. Though, as Kant might have said: 

There's much more to knowledge than mere causal interaction with the world. 

Our minds and brains need to do something with all the incoming data – they need to synthesise it. That is the case even with basic perceptual beliefs.

Can we rely on perceptual information alone to make general statements about the world? That is, no single experience (or even a large group of experiences) alone will tell me, say, that all swans are white.

And do we ever have a basic perceptual experience of someone not murdering someone else?

What about microscopic properties, objects or processes which are always beyond perceptual experience? What does pure experience tell us about these things?

What about numbers? They're supposed to be non-spatiotemporal abstract entities. How could I ever have perceptual experience of - or causal interaction with - numbers?

So not all beliefs and bits of knowledge are simply a question of reliable causal contacts with the concrete things that the beliefs and bits of knowledge are about. The traditional empiricist had no convincing answers to these epistemic problems. There are things about the mind-brain that work free of causal interactions. Or if there are causal processes in the mind-brain, then they're of such a kind that we don’t even know that they're happening. In that case, they have no epistemic relevance.

The internal world must have at least a little autonomy from the external world. Though perhaps we don’t necessarily need to think of it at all as the internal world. The mind-brain is indeed a part of the world. It is naturalistically acceptable. However, the mind-brain does things that other parts of the world don't do. It thinks about the world. It has representations and images of that world. It has intentionality – that is, directedness. It deals with meanings, concepts, truth, falsity and the rest.

Although the mind is part of the world, it's a special and unique part of it. 

We causally interact with the world, and that in turn sets off causal processes in the mind-brain: it doesn't follow that there'll be some kind of isomorphic and infinitely repeatable set of mind-brain relations to that world. The same causal processes bring about different beliefs in different people - or even the same person at different times. Something goes on in mind-brains that can't be accounted for in the same way that things that go on in the world can be accounted for. This is not a mystical conclusion. It may simply be a fact about the mind-brain's astonishing complexity and subtlety. Indeed it has been said that the mind-brain is the most complex thing in the entire universe.


Monday, 4 August 2014

Putnam on Semantic Holism




Hilary Putnam (in this work) isn't talking about what words mean when taken separately from what he calls ‘interpretation’. He isn't even talking about the inferential (or other) connections of words and their meanings to one another when taken separately from interpretation.

If Putnam is talking about interpretation, he's talking about how individual hearers - or even the sum of hearers or understanders - interpret the meanings of the words or sentences which they hear or read. On this account, then, a word doesn't have a meaning in glorious separation from all acts of interpretation or understanding; which is perhaps the position of Frege and so many other philosophers (perhaps Dummett too). Our interpretation of a word or a sentence will change "when we see more text" (229). Perhaps we couldn't even interpret a word or sentence unless we saw more text or heard more of what the speaker has to say. How could we interpret at all without something to help us with our interpretation? That something else, in Putnam’s words, would be ‘more text’. (It's interesting to see Putnam using Jacques Derrida's term ‘text’ rather than ‘utterance’, ‘word’ or ‘sentence’.)

Not only is the context of the text of importance when it comes to interpretation: we couldn't know what a word means unless we see that word used again and again in other contexts. Perhaps not only by the same speaker or in the same text; but also when used by other speakers and in other texts. After all, if the context of the word is the text in which it is embedded, then perhaps that text also has its context in a sum of other texts (some by the same author and some by other authors). Even if we gain access to the word or sentence’s meaning by hearing more of what the speaker says (or more of what is written in the text in which it is embedded), that greater knowledge will still be ‘finite’ and thus won't "infallibly show us what the word means" (229). There's also the possibility (or the likelihood) that the word will have been used "in some additional way that you haven’t taken account of" (229). In fact this is bound to be the case simply because our minds are finite in nature.

However, it's indeed strange that Putnam appears to be suggesting that one is required to know of every usage of a word before one can fully or accurately know that word’s meaning. Surely that can’t be right. Or, instead, we may well know its meaning; though that meaning is neither static nor determinate in that our acquiring new knowledge of other interpretations or usages of the word will have an effect on how we understand it. We don't need to know about every usage or utterance of a word in order to understand what it means. However, when we do acquire such additional knowledge, this will have an effect on how we understand the word. This, again, means that the word’s meaning is neither static in nature nor determinate. This doesn't stop us from using the word accurately or easily; nor does it stop us from understanding its meaning. It only stops us from believing that its meaning is static and determinate. Perhaps we simply don't require a word’s meaning to be static or determinate in order to communicate with ourselves and with others.

Think here of Wittgenstein’s ‘family resemblance’ argument in which he argues that different usages of the word ‘game’ don't all have an essence (as it were). Instead, each game has a family resemblance with each other game without that family resemblance needing to be grounded in an essence of games which must be shared by all games. There's no necessary and sufficient set of conditions that all games must share in order to be games or to be called a ‘game’. However, they may well share something with other games; though not with all other games. Similarly, all the uses of the word ‘game’ (or ‘liberty’ or ‘truth’ for that matter) must share something with at least some other usages of that name. Though it needn't share a determinate or static meaning with all these other usages. As long as it shares something with them - no matter how small.

This is why languages aren't static or determinate; unless they're artificial or Fregean languages! Much of what has just been said also implies that what matters is what we do with words and sentences (how we use them) – not necessarily (or only) what they mean. If that is indeed the case, then we can hardly expect a word to have the same meaning in all contexts or when used at different times and for different purposes.

This is the basic Wittgensteinian insight into words (or their meanings) and it takes us away from the Fregean position in which the sense behind words (or the Thoughts or propositions behind sentences) are both determinate and static. Perhaps only the meanings (if there are meanings at all) of the logical constants and other (logical) primitives are genuinely determinate or static. Though even here we can only define the logical constants in terms of what we can do with them (in the ‘implicit definitions’) and not in terms of their abstract meanings.

Putnam gives his own examples of this lack of determinacy or stasis when it comes to meanings. He says:

"… in American usage an armchair is a chair, but it’s not a chaise in French, it’s a fauteuil and it’s not a Stuhl in German, it’s a Sessel."

Perhaps this is more a question of linguistics or even lexicography than it is a question of semantics. However, Putnam began his career as a linguist so perhaps this connection isn't simply fortuitous. Perhaps the ‘meaning is use’ thesis entails a parallel commitment to the findings of linguistics or even to what the lexicographers say! Putnam himself says:

"One of my three majors in college was in linguistic analysis, it was the first department in the world. … it was a section of the anthropology department…" (229)

Here we don't only have a semantic holism that must incorporate linguistics: perhaps anthropology must be taken into account as well. That is, if we take our holism so far, perhaps we should take it even further into anthropology (as Wittgenstein is said to have done). And then perhaps into culture and history as a whole, as Rorty, Derrida and others have done. We can say, as the enemies of holism often do, that once we commit ourselves to holism (of whatever kind), our holism can't help but spread its wings farther and farther until, perhaps, we reach something like the Absolute of the 19th century Idealists. Well, that’s a thought at least.

More specifically, Putnam says that if one studies linguistics and/or anthropology, then "meaning holism is just forced on you" (229). In order to interpret language or words (or utterances) within a language, whether alien or our own, then one must interpret holistically or one can't interpret at all.

What's all this holism opposed to? According to Putnam:

"There are some accounts of meaning such as Fodor’s, according to which each word has one meaning which is fixed by its causal connection with a 'property', but that has nothing to do with the way words behave in a real language." (229)

Perhaps this is because Fodor is a Fregean at heart. Perhaps he too is committed to atomism simply because he believes, like Frege and Dummett, that atomism is the only way we can secure determinacy, the stasis of meaning, as well as its objectivity. That was, after all, precisely what Frege wanted from his semantics and thus from his ‘ideal’ (though artificial) language.

Though, as many philosophers have often said, this Fregean project essentially failed in its objectives; at least as far as the natural languages are concerned. Perhaps it didn't fail when it came to certain artificial languages (as Tarski would have acknowledged). But as Strawson and others have said, such artificial languages are woefully inadequate when it comes to the systematising of natural languages and also when it comes to everything that can be said or done within natural languages. What we are left with is a "mere skeleton of a language" and not a formalisation (or otherwise) of the language/s we use every day of our lives.

Again, perhaps both Dummett and Fodor have simply not accepted the lessons given to us by the late Wittgenstein when he stressed the ‘meaning is use’ thesis, language-games and the anthropological realities of the natural languages. Indeed Dummett, for one, has often stressed his antipathy towards the late Wittgenstein; especially when it came to the ‘sceptical’ results of the latter’s theories about meaning.

Popper on Causal-Logical Necessity





 
It's often said that causal connections aren't logically necessary – not even necessary causal connections. This is the central gist of Hume’s position, from which he derived so many of his arguments about causation as a whole. Popper, on the other hand, did think that causal connections are logically necessary... but not so fast! They're only logically necessary


‘in the sense that they follow deductively once we assert the appropriate natural law’. (11)

If we don't assert the appropriate natural law, or any natural law, then we can't say that they are logically necessary. This is a classic case of the principle that whether or not a premise is true (indeed, even if it's false), what follows from it will still follow deductively and validly (providing one’s inferences are valid).

Did Popper only mean ‘logically necessary’ in this amended sense? Could he have used the words ‘logically necessary’ in any other way? That is, any logical necessity that causal connections have is simply inherited from the natural laws from which they are derived or deductively inferred. We can't expect anything more from causal logical necessity than this.

Now we can give an example of a scientific law expressed in causal terms:

An event of type C has occurred. But whenever an event of type C occurs, an event of type E later occurs.

Again, this will only happen of necessity if we also assert the appropriate natural law relating cause C to effect E. Without that natural law (or its assertion/assumption), an event of type C could be followed by an event of type F, or by anything for that matter!
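Popper's point can be put as a simple deductive schema (a rough gloss in my own notation, not Popper's):

```latex
% Law L (asserted):  whenever an event of type C occurs,
%                    an event of type E later occurs.
% Premise:           Ca   (an event of type C has occurred)
% Conclusion:        Ea   (an event of type E occurs)
\frac{\forall x\,(Cx \rightarrow Ex) \qquad Ca}{Ea}
% The step from {L, Ca} to Ea is logically necessary (a valid
% modus ponens); but neither L nor Ca is itself logically necessary.
```

Drop the law L from the premises and Ea no longer follows: the necessity belongs to the inference, conditional on the asserted law, not to the events.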

However, the Humean can still express his problems with Popper’s position. Dale Jacquette writes:

"A defender of Hume on the contingency of causal connections might nevertheless object that although the inference is deductively valid, and to that extent carries necessity from assumptions to conclusion, the conclusion itself is not necessary unless the assumptions are also logically necessary, and that no scientific laws correlating causes to effects are logically necessary." (11)

This means that it doesn't matter if the move from the assumptions to the conclusion is deductively valid if the assumptions themselves aren't logically necessary (they needn't even be true for the inference to be deductively valid). After all, causal matters are about the world. They have nothing to do with deductive validity or deductive inference.

A Humean would argue that although the inference from the assumptions to the conclusion is logically necessary (in that if the assumptions are taken to be true this truth is passed on to the conclusion), the assumptions themselves aren't logically necessary.

Does this mean that the passage from event type C to event type E isn't logically necessary? Or that event type C, taken by itself, isn't logically necessary?

If event type C isn't logically necessary, then how can it pass on logical necessity to event type E?

So now we can say that not even the move from event type C to event type E is logically necessary if event type C isn't itself logically necessary.

Again, we aren't talking about formal or subject-less deductive validity or inference here, but an aspect of the world – causation!

To put the conclusion at its most basic: Hume argued that no scientific laws correlating causes to effects are logically necessary. That is the gist of Hume’s argument. Not that there is no such thing as causation or causal connection. Not even that there are no such things as causal regularities. Of course there are! No. His point is simple. There are no logically necessary scientific laws correlating causes to effects. A scientific law is required to be both universal and exceptionless. On Hume’s empiricist grounds, we have no way of observing the universal or truly knowing that something is indeed exceptionless. It follows that no causal relation - say between event type C and event type E - can be deemed to be universal and exceptionless. Thus it can't instantiate or fall under a genuine natural scientific law. That is Hume’s point against scientific and rationalist views of necessary causal relations.

Jacquette puts much of the above in the following way:

"Again, it might be questioned whether the logical necessity obtaining between the assumptions and conclusion of a valid inference about real world events necessarily qualifies or attaches to the events themselves… " (11)

What we have here are two things:

i) The logical necessity obtaining between the assumptions and conclusion of a valid inference about real world events.


and

ii) The logical necessity obtaining between events in the real world or between event type C and event type E.

Clearly they aren't the same thing.

We can even say that i) is analogous to a de dicto modality, whereas ii) is analogous to a de re modality. One kind of necessity is about the form of an inference from assumptions to a conclusion. The other kind of necessity is said to obtain between events in the actual world. There is a world of difference between the two. The latter would be metaphysical or ontological necessity, whereas the former would be strict logical necessity. It appears, then, that Popper attempted to fuse logical necessity with metaphysical necessity.
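The contrast can be glossed in modal notation (a rough sketch, not Popper's or Jacquette's own formalism), where L is the natural law, C the statement that a C-event occurs and E the statement that an E-event occurs:

```latex
% i) De dicto: necessity attaches to the inference (the dictum) as a whole.
\Box\big((L \land C) \rightarrow E\big)
% ii) De re: necessity is claimed of the worldly connection itself.
C \rightarrow \Box E
% The first is compatible with L, C and E all being contingent;
% the second attributes necessity to the events themselves.
```

On this gloss, Popper is entitled only to the first formula; the Humean objection is that nothing licenses the slide to the second.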

Saturday, 2 August 2014

Syntax as a Basis For Language & General Intelligence






Some say that syntax is the basis of both human intelligence and consciousness itself. Others say that language is the basis of both these things. Yet syntax, of course, is the basis of language.

William H. Calvin describes syntax as “structured stuff”. Syntax is essentially about structure or the juggling about of things (as in sentences and words). His examples of “structured stuff” include “multistage contingent planning [whatever that is!], chains of logic, games with arbitrary rules” and so on.

I mentioned language being syntactical earlier. According to Calvin, language “might make a child better able to perform non-language tasks that also need some structuring”.

In other words, the juggling around of words (i.e., language) helps young children juggle other things around, such as literal physical objects. Thus actions using physical objects may be seen syntactically.

For example, a child will use the same shapes to create different patterns or even to replicate real objects or animals. In this instance, shapes are like words and the final patterns or objects are like sentences.

The syntactical abilities of infants and young children are detailed by William H. Calvin. He chronicles their growth this way:


i) “In the first year, an infant is busy creating categories for the speech sounds she hears.”

ii) “By the second year, the toddler is busy picking up new words, each composed of a series of phoneme building blocks.”

iii) “In the third year, she starts picking up on those typical combinations of words we call grammar or syntax.”

iv) “She soon graduates to speaking long structured sentences.”

v) “In the fourth year, she infers a patterning to sentences and starts demanding proper endings for her bedtime stories.”

As for i), I'm not entirely sure what Calvin means by “creating categories for the speech sounds she hears”. Does that simply mean recognising who or what is making the speech sounds? That is, one such “category” will be the mother?

Interestingly enough, even though in iii) it is said that the child picks up “what we call grammar or syntax”, it's clear that even before that (in ii)) the baby has already become a syntax machine in that it is “picking up new words, each composed of a series of phoneme building blocks”. So, in this case, phonemes are juggled around to create new words; just as later words themselves are juggled around to create sentences (or at least proto-sentences). That means that just as phonemes are to words, so words are to sentences – both are equally syntactic. However, it seems possible from iii) that the child combines words without as yet “speaking the long structured sentences” of iv). That is, perhaps words are combined without thereby creating proper or meaningful sentences. This seems to be a kind of syntactic preparation for those later meaningful sentences.

It would seem to follow from all this that if a being doesn't have language, then there'll be lots of other things it can't do. Indeed that may even be true of language-less human beings.

This appears to have been the case when it came to a young boy who was deaf and who never learned a language – not even Sign Language. According to Oliver Sacks (quoted by Calvin), this boy


“seemed completely literal – unable to juggle images or hypotheses or possibilities, unable to enter an imaginative or figurative realm.... He seemed, like an animal, or an infant, to be stuck in the present, to be confined to literal and immediate perception”.

Clearly, juggling images, contemplating hypotheses and possibilities are all syntactical abilities. That is, one can juggle images one has seen in order to create new images. Similarly, known realities or facts can be juggled around to create possibilities or new hypotheses.

What's more, without syntactical skills, a being would also lack imagination and be unable to think figuratively. In the end, “like an animal, or an infant” (i.e., without syntactical thinking), a being would “be stuck in the present and confined to literal and immediate perception”.

As Calvin says, then, when a child gains syntactic skills, it ends up with the situation in which if it “[i]mprove[s] one”, it “improve[s] them all”.

All this syntactical playing or manipulation has its neurological underpinning, of course. The more the syntax engine of the brain is used, the stronger the neuronal connections which underpin such syntactical manipulations become. 

Calvin says that “prenatal connections” are strengthened or weakened “depending partly on how useful a connection has been so far in life”. The other thing is that the neurological underpinning is at its most “plastic” (or responsive) very early in life (which is why psychologists and behavioural scientists emphasise the importance of education in the early years). However, babies and young children are being well “connected” (neuronally) even if they aren't being stimulated (or educated) by their parents. In a certain sense, babies are programmed to learn.