Thursday, 3 July 2014

The Being of Man




"The reality of the external world to which science points has no psychic depth, no depth of being. It is a plastic mass of events. When scientists study Man, they want to prove that the mind, the psyche, the being of Man, is the effect of bodily existence and thus an effect of matter. They conclude that if the mind is caused by matter, then it is basically unreal, secondary, not a primary reality." - Granth


I'm not sure if there is such a consensus in science on the mind. Even in the limited domain of 'materialist' philosophers of mind, I don't think that there is such a consensus.

Granth refers to science's having "no psychic depth, no depth of being". These technical terms seem to be taken from a specific philosophical tradition, so it will be hard for people unfamiliar with that tradition to know what such locutions mean.

Granth also says that scientists (all of them?)

"want to prove that the mind, the psyche, the being of Man, is the effect of bodily existence and thus an effect of matter. They conclude that if the mind is caused by matter, then it is basically unreal, secondary, not a primary reality".
It doesn't follow that if a scientist argues, or shows, that "the mind is caused by matter", then he also believes that it's "unreal, secondary". A forest fire can be caused by a discarded cigarette, yet the fire is still perfectly real. The mind and the brain can even be treated as different domains, according to scientists, and it can still be the case that the brain, or something larger, is the "cause" of the mind. Scientists, on the whole, are no longer interested in erasing mind or consciousness from the equation. In fact, only a few scientists were ever completely inclined that way.

As for "the Being of man" - that seems to be the technical language of a specific philosophical tradition which, presumably, not all posters on this website will be aware of even if they know much philosophy. What is "the Being of man"?

Frank Ramsey's Paradox of Londoners & Their Hairs





Although Frank Ramsey's proof isn't exactly a paradox, it is worth mentioning anyway.

Ramsey set out to prove that there are at least two Londoners with exactly the same number of hairs on their heads.

How did he prove that?

Firstly, when Ramsey was writing, he noted that there were more than a million Londoners. He also noted - though God knows how - that there were fewer than a million hairs on any one Londoner's head.

So how do we move from those two truths to the truth that there are at least two Londoners who have exactly the same number of hairs on their heads?

Well, for a start, there are more Londoners than there are hairs on any single Londoner's head. That is, there are more than a million Londoners, though no Londoner has as many as a million hairs on his or her head. So what? This has been expressed in the following way:

“If there are more pigeons than pigeonholes, then at least two pigeons must share a hole.”

This means that, because there are more Londoners than there are hairs on any one individual's head, at least two Londoners must share the same number of hairs. How does that prove what it claims to prove? And doesn't it depend on how many more (than a million) Londoners there are?
For example, would the proof still work if there were only one million and one Londoners? And what about the possibility of massive coincidence, or of large proportions of Londoners having very many (or very few) hairs? Doesn't the proof depend on a certain level of statistical balance?
The argument seems to be: 

i) Because there are over a million Londoners,
ii) though only a maximum of a million hairs on any one person's head,
iii) and since that means there are at most a million possible hair-counts to share between over a million people,
iv) then two persons or more must share the same number of hairs.
Put it this way: if there are ten people in a room and only nine or fewer apples to go round, then at least two people will need to share an apple.

Put it another way: if there were exactly a million Londoners and a maximum of a million hairs on any Londoner's head, then each Londoner could have a different number of hairs, ranging from one to a million.

In theory at least, every Londoner could have a different number of hairs on his or her head. But what happens when there are more than a million Londoners but still only a maximum of one million hairs on any one Londoner's head? It can't be the case that every Londoner has a different number of hairs – even in theory. This means, then, that at least two Londoners must have the same number of hairs.
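In fact the counting can be spelled out in a few lines of code. What follows is just a minimal sketch (the population figure and hair-count ceiling are made-up round numbers, not Ramsey's data): give every person an arbitrary hair count below the ceiling and, once there are more people than possible counts, a shared count is unavoidable - however the counts happen to be distributed.

```python
import random

MAX_HAIRS = 1_000_000    # assumption: no Londoner has as many as a million hairs
POPULATION = 1_000_001   # one more Londoner than there are possible hair counts

# Give every Londoner an arbitrary hair count from 0 to MAX_HAIRS - 1.
# There are only MAX_HAIRS possible values to go round, so with
# POPULATION > MAX_HAIRS at least one value must be used twice.
hair_counts = [random.randrange(MAX_HAIRS) for _ in range(POPULATION)]

# A set keeps only distinct values; if it's smaller than the population,
# at least two Londoners share a hair count.
assert len(set(hair_counts)) < POPULATION
print("At least two Londoners have exactly the same number of hairs.")
```

Note that the assertion holds however the hair counts are spread out, so the proof doesn't depend on any "statistical balance"; and it goes through even with a population of just one million and one.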

But there's still something suspect about this proof.

In any case, this doesn't seem to be a paradox, or that complex, or even that interesting; though perhaps I've missed the profound or complex bit.

The Expectancy Paradox







One scientist has a theory that people will countenance - or even commit - cruelty if it's approved by some kind of authority figure. That theory intuitively has a lot of plausibility. However, that's not the point of the paradox.

The said scientist carried out an experiment on ten people (i.e., subjects).

The experiment is to see if the ten subjects will press a button which will deliver an electric shock if ordered to do so by an authority figure (given a suitably rational or “scientific” reason to do so).

However, unbeknownst to him, it's the scientist who's the real subject of the experiment. That is, his ten subjects aren't being tested – he is! In fact the ten subjects know what's going on and the electric shocks are fake. The scientist, on the other hand, doesn't know what's going on. He only knows about his own experiment; not the experiment upon him.

Again, this paradox isn't about the nature of meta-tests or even about why - or if - people really do commit acts of cruelty when told to do so by authority figures. This paradox is actually about the "fudge factor" or "experimenter bias effect". In other words, the scientist is testing his subjects, and the meta-testers are testing the scientist who's doing the testing.


The phrases “fudge factor” and “experimenter bias effect” actually refer to the fact that when a researcher or scientist expects to “discover” or find a certain result, he's very likely to get that result. (Needless to say, when the research or science involves anything which is in any way political in nature, this is even more likely to be the case.)

In Professor Smith's case, his "experimenter bias" leads him to expect (or want) his subjects to "turn Nazi" and be all too keen to press the electric-shock button. And it turns out that most of them do.

Professor Arse, on the other hand, expects (or wants) his subjects not to turn Nazi by being all too willing to press the electric-shock button. And yes, most of them don't!

In other words, each scientist has set up the experiment to get the results he wants (for political, psychological, scientific, etc. reasons). In other words, both scientists are fudging - if at an unconscious level.

This bias (or fudging) is brought about in this case by Professor Smith shouting at his students/subjects to press the button. In other words, he wants them to press the button in order to “prove” his theory. Professor Arse, on the other hand, whispers the command to press the button because he doesn't really want them to.

It's said that neither scientist is aware of what he's doing. However, what they're doing is bringing about a self-fulfilling prophecy. (Despite that, I find it hard to accept that they literally know nothing about what they're doing; because if that were truly the case, then surely they wouldn't be doing it.)

In any case, these experimental and political biases of the two professors were themselves the subject of an experiment by other (meta-) scientists. Yes, you guessed it: if the bias of the two scientists is proven or demonstrated by these meta-scientists, and they've also concluded that such bias is widespread (even) in science, then what of themselves? Are they also biased? Indeed was this experiment on other scientists itself a perfect example of the "experimenter bias effect" or the "fudge factor"? Are these meta-scientists as biased as (or even more biased than) the scientists they were experimenting upon? And if not, why not?

The point is: just as the object-scientists were trying to do tests on people's reactions to the orders of authority figures; so the meta-scientists were testing the object-scientists in order to elicit the reality or nature of scientific bias. In other words, are these meta-scientists being biased about the nature or reality of scientific bias?

This problem can be put in its logical form:
i) The meta-scientists concluded that much - or all - scientific research or testing involves at least some element of bias.
ii) This piece of meta-science is also a piece of scientific research and testing. Therefore it must have contained at least some element of bias.

Though does scientific bias also mean scientific invalidity? If it does, then this piece of meta-science is scientifically invalid: just as invalid as the tests on the button-pressing subjects.

In addition, if scientific bias means scientific invalidity, does that also mean that scientists, and laypersons, have no reason to believe a word of what these meta-scientists have to say about their meta-scientific test or experiment? The problem is: intuitively both the object-test and the meta-test must contain at least some truth or accuracy! Indeed they may well contain a lot of truth or accuracy. This paradox seems to lead to the result that both tests should be rejected. Yet surely that can't be the case!

The other conclusion is that we must simply learn to live with a certain degree of scientific bias; just as we may well do in all other areas of life. Sure, if the bias is spotted, then rectify it. However, it is both misguided and even illogical to expect zero bias. In fact we may even say that it's unscientific to expect (or want) zero bias in science (or anywhere else for that matter).

William Poundstone suggests making a distinction between falsehood and invalidity. He seems to mean that even if bias is a fact, it's still the case that what the test or theory says in the end is simply either true or false. Or, as he puts it,

“if the experiment is merely invalid (through careless procedure, lack of controls, etc.), its result may be true or false”. (130)

Bias (or an invalid procedure) may still lead to truth; just as a true conclusion can be the result of bad reasoning or bad science (or no science).

You can ask, however: if there is bias (or an invalid procedure), how could it possibly lead to truth? And if it did lead to truth, surely it would only do so accidentally or coincidentally.

For example, the Pythagoreans believed that the earth was a rotating globe. Though they believed that for all the wrong reasons.

However, we're talking about scientific experiments here, not Pythagorean speculation or philosophy. In a scientific sense, if an invalid procedure or bias leads to truth, that would be of little interest to science or scientists. Indeed typical scientists (if they exist) would be hard-pressed to make sense of an invalid procedure or bias leading to scientific truth. Though perhaps either they too are biased or their stance on science is philosophically naïve.
Poundstone offers two logical alternatives to this paradox.
i) We can assume that the meta-study is valid. (That is, what leads up to the results and the result itself are valid and sound/true.)
ii) Yet if it's true, then all (other) studies/tests are invalid and unsound.
iii) And if all tests are invalid, then this meta-test is also invalid and so can't produce a true result.

In other words, we're led to the conclusion that we can't see its conclusion as true – even if we wanted to.
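Put a little more formally (this is my own rendering, not Poundstone's), the first alternative has the shape of a self-undermining conditional:

```latex
% V(s): "study s is valid".  m: the meta-study itself.
% Premise 1: if the meta-study is valid, its conclusion holds - no study is valid.
% Premise 2: the meta-study is itself a study, so that conclusion applies to it.
\[
  V(m) \rightarrow \forall s\, \neg V(s), \qquad
  \forall s\, \neg V(s) \rightarrow \neg V(m)
\]
\[
  \text{hence } V(m) \rightarrow \neg V(m), \quad \text{and so } \neg V(m).
\]
```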
Alternatively:

i) We can assume that the meta-test is invalid.
ii) However, even though the meta-test can be seen as invalid, its result can still be seen as true.
iii) And, by inference, if the object-test can also be seen as invalid like the meta-test, then its results can also be seen as true (like the meta-test).

Here, then, we have two cases of invalid procedures leading to true results. Though, as I said earlier, what's the point of having scientifically true results (or theories) alongside scientifically invalid or unsound procedures/experiments/tests? And as I also said, scientists will surely claim that you can't pair invalid or biased procedures/tests with true results... or can you?

Monday, 16 June 2014

Brain Languages/Grammars?






This is the kind of language often used by mind-as-computer-programme philosophers:

grammar : “the set of brain programmes by which sentences are generated...” - J.Z. Young
Immediately it can be seen that grammar is here taken to be non-conscious or non-cognitive. That is, J.Z. Young isn't talking about the grammar which kids learn at school (or don't learn at school). He's talking about grammar that is, as it were, built into the brain. Or grammar that's a result of the evolution of the human brain.

So firstly we have programmes in the plural; and they determine the grammar of the sentences which we speak (often from a very young age, which isn't surprising considering what's just been said). In other words, we speak in the way we do because of "programmes" which are somehow quite literally in the brain. Our brains literally make us speak in the way we do.

The point is that brain-grammar is not always the same as, say, the English grammar found in books and mouths. Nonetheless, it's surely the case that English grammar has replicated brain-grammar because, after all, they're often, though not always, the same.

Despite all that, some scientists and philosophers argue that nothing purely in the brain (as it were) can be deemed a grammar. The main argument is that grammar, like language itself, is a social phenomenon and also a product of persons, not just of brains.

The mix-up may have come from computer-programme philosophers conflating codes and grammars. Yes, there may well be things which can be deemed codes within the brain; though not grammars. (Can't a similar - or the same - argument be made for rejecting the notion of codes within the brain? After all, codes are also social and the products of persons, not brains.)

The basic argument is that codes don't have grammars. Therefore brains don't have grammars. However, couldn't a brain-code be the basis for a mind-grammar? That is, we firstly have codes in the brain; then those codes generate spoken- and thought-grammars.

It's argued that codes don't need grammars quite simply because they don't even need words. And words, of course, come along with grammars. (Perhaps even 'Stop!' has a hidden or elliptical grammar of sorts.)

So, on this argument, what is a code if it doesn't have a grammar?

Codes can be seen as ciphers. A code can - or does - encode what something else (say a text) means or says. The code encodes the basic shape (as it were) of the text. There is a translation process from the text to the code. The code, no doubt, is simpler than the text.

For example, a code for the sentence “I'm going to kill you” could quite simply be a loud bang on a table. Of course the loud bang doesn't really contain that information. It only does so on the assumption that the interpreter knows what a loud bang means within a certain context. In other words, the code is given a meaning by persons. It only has meaning to those who know the cipher – the decipherers.

One well-known code is Morse code. Now you can put any text into Morse code – at least in principle. Of course only decipherers can decipher the cipher.

The machine that implements the Morse code doesn't understand the messages it's delivering. And neither does the machinery involved in an exchange of phone calls know what's being said. Still, that machinery is delivering the message from person A to person B. Everything is contained within the electricity and the machinery – except the understanding.
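To make the point vivid, here's a toy Morse encoder and decoder (my own throwaway illustration; the table covers only a handful of letters). Everything it does is blind symbol-substitution: the "meaning" of the output exists only for the person who reads it.

```python
# A fragment of the Morse table - just enough letters for the demonstration.
MORSE = {"S": "...", "O": "---", "T": "-", "P": ".--."}
REVERSE = {pattern: letter for letter, pattern in MORSE.items()}

def encode(text: str) -> str:
    """Swap each letter for its dot-dash pattern (patterns separated by spaces)."""
    return " ".join(MORSE[letter] for letter in text.upper())

def decode(signal: str) -> str:
    """Swap each dot-dash pattern back for its letter. Pure lookup, no understanding."""
    return "".join(REVERSE[pattern] for pattern in signal.split())

message = "SOS"
wire = encode(message)            # "... --- ..."
print(wire, "->", decode(wire))   # the machinery shuffles shapes; only a reader grasps the distress call
```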

*) People often talk about anthropomorphism when it comes to animals; though not when it comes to computers. Yet anthropomorphism towards computers may be the cause of many philosophical and even technical problems.

For example, computer scientists talk about programmes "communicating" with the computer. Roy Harris says that this is equivalent to saying that "I communicate with the light bulb by switching it on". Yes, something physical and indeed causal has happened – but there has been no genuine communication. (Wouldn't that depend on the semantics of the word 'communication'?) For a start, the light bulb and the switch aren't persons. And even if they could use language (like a computer), they would still not be persons engaged in social interaction within given contexts.

As a result of this anthropomorphism about computers, computer scientists call FORTRAN, BASIC, etc. "languages" – though, as already argued, there are no genuine languages and no genuine communication involved.

We could say, like John Searle, that it's “all syntax and no semantics”. These computer “languages” are nothing more than sets of rule-governed symbols which generate certain operations within a computer. It can be said that it's the shape, not the meaning/semantics, of those symbols that generates the operations of the computer. Again, there's no genuine semantics or “aboutness”.

Roy Harris puts this all-syntax-though-no-semantics argument (he also argues against Searle's position) by talking about symbols "designating" things. Symbols can only designate things if they have semantics or aboutness (intentionality). Or rather, they only develop a semantics through human interpretation and, before that, programme design. In and of themselves, computer symbols have no semantics – only as-if semantics (as Searle puts it).

In parentheses I mentioned that Roy Harris isn't happy with Searle's position.

Firstly he says that Searle argues that

“brain programmes are inadequate to explain mental events because the programmes themselves are definable in purely syntactic terms”.
He says that this position is “question-begging” because the same problems would manifest themselves if “programmes were defined in semantic terms too”.

For a start, I don't think Searle is talking about “mental events” generically; just those involving language or those that are about the world. Some mental events may not need a semantics.

To put this another way: a grammar isn't "just any set of rules or procedures". And if a grammar isn't simply a set of rules, then a language can't be either. After all, language is (partly) parasitical on grammar.

Basically, there's more to both grammar and language than symbols and rules. If this weren't the case, then computers would be genuine language-speakers. Not only that: analysing the symbols alone, even when encoded electrically (as in a telephone wire), would be enough to make us understand what's being communicated through those wires. We wouldn't actually need to hear the conversation. We would only need to "read" the electrical currents going through the wire.

*) You can see how far the idea of language(s) in the brain has gone when neuroscientists and even philosophers attempt to map brain (or neural) happenings onto spoken language or onto grammar/parts of speech.

For example, take the sentence

“John hit Bill.”
 
It's quite possible that every time someone utters these words a particular and fully-specifiable neural event occurs – a firing pattern Y in brain area B, for example. (For reasons of holism, as well as externalism, it's unlikely that another person's utterance of that exact sentence will result in exactly the same neural pattern. Indeed even the same person uttering the same sentence at a different time may result in a slightly different neural pattern.)

In addition, say that the same person utters

“John didn't hit Bill.”

In that case, the negation of the earlier sentence may well result in the previous neural pattern being reversed. (Although it's hard to understand what that could mean – neurologically.)

Would all that mean that the neuroscientist had discovered the brain-grammar of negation or of the word 'not'? Would the brain be instantiating the logical/grammatical Rule of Negation? No. Why? Because that grammar could only be interpreted by the neuroscientist. And if it could only be interpreted by the neuroscientist, then it couldn't be the brain's own grammar at all.

It's often said that "correlation doesn't equal causation". Simply because X and Y often occur together, that doesn't mean that X caused Y or that Y caused X. They simply occur together. (I rise at the same time every morning as John Smith. However, he doesn't cause me to rise at that time and I don't cause him to rise at that time. There are connections, sure; just not causal ones.) Similarly, there's a correlation between the uttered sentence “John hit Bill” and a particular neuronal pattern. However, the neural firing doesn't mean “John hit Bill” and it can't even be read as 'saying' the words “John hit Bill”. There is both causation and correlation in this case (unlike my rising at the same time as John Smith). However, causation and correlation still don't mean that the neural happenings mean “John hit Bill”. If this were the case, according to Roy Harris, then

“the regular correlation between the states of the electrical device which ensures that the red, amber and green lights of a traffic signal come in the right order and the observable behavior of the traffic when they do so proved that the traffic signal must have internalised rules of the Highway Code” (513).

The basic point is that brains don't understand or even use these neuronal firings – persons do. And when it comes to traffic lights, the colours only have meaning to the persons who interpret or translate the colours. The brain on its own has no grammar or even a language – only persons do. Likewise, traffic lights don't understand signals or colours – pedestrians and car-drivers do.

Roy Harris puts all this in terms of the nature of persons and consequently the social reality of language. And here he sounds very much like the “ordinary language” philosophers who downplayed the previous tradition which believed that language is simply a set of rules or grammars. Here we must include what is called pragmatics and the ideas of philosophers such as J.L. Austin who talked about such things as "performatives” and whatnot. Or as Roy Harris puts it:

“For it is the decision to utter particular words at a particular time in a particular context which is the hallmark of human linguistic ability. And this requires situational judgements, communicative intentions, and self-awareness – all of which are properties of the human being, not of the human brain.” (510)

Saturday, 7 June 2014

Causation & Necessity in Hume’s Treatise




i) Introduction
ii) Necessity and Empiricism
iii) Custom and Worldly Necessity
iv) Necessity: A Cartesian Detour

In his A Treatise of Human Nature, David Hume gives us two definitions of cause. The first definition refers to external objects:

"We may define a cause to be ‘An object precedent and contiguous to another, and where all the objects resembling the former are placed in like relations of precedency and contiguity to those objects that resemble the latter.'”

The second definition is psychological in character:

"A cause is an object precedent and contiguous to another, and so united with it that the idea of the one determines the mind to form the idea of the other, and the impression of the one to form a more lively idea of the other."

It will become clear later that Hume thinks that the first definition above is parasitical on the second.

Necessity and Empiricism

Hume was an empiricist (i.e., a believer that all our knowledge ultimately comes from experience). He therefore looked for the source of necessity in what he called our (sense) “impressions” (i.e., in our experience). He believed that it was commonly thought that there were necessary connections between causes (of a certain kind) and effects (of a certain kind). He asked us: From where do we get this idea of necessary connection? His conclusion was that we don’t have such an “idea”. (In Hume’s philosophy, “ideas” are essentially “copies” of “impressions”.) More accurately, we don’t receive any sense impressions of necessary connections between causes and effects from which we derive ideas. According to Hume, all we see is that “the object we call cause precedes the other we call effect”. That is, we don’t see a third thing that necessarily connects the cause with the effect. Again, according to Hume, there is no third relation “between cause and effect”.

So what accounts for our belief in the necessary connection between causes and effects?

Hume goes into detail as to why we have (literally) no idea of a necessary connection between cause and effect. He elaborates with an empiricist critique of the very idea of necessity in the external world.

Firstly he says that “reason alone can never give rise to any original idea”. This is Hume’s way of saying that the rationalist doesn't see - literally see - necessity through his use of “pure reason” (to use Immanuel Kant’s words). We have no innate ideas of necessity or necessary connections either. Again, all ideas are mere copies of (sense) impressions. So if we have the idea of “efficacy”, then that “idea must be derived from experience”.

All this is part of Hume’s dismissal of the 17th century rationalism of, amongst others, Descartes, Spinoza and Leibniz. (In fact, by the time Hume wrote his Treatise, he felt confident enough to write that the principle of innate ideas - a rationalist favourite - had already been refuted.)

Custom and Worldly Necessity

Again, according to Hume, the idea of necessary connection “arises from the repetition of [two objects’] union”. However, that repetition “neither discovers nor causes anything in the objects”. The necessities are “consequently qualities of perceptions, not of objects”. More precisely, Hume says that necessity “is something that exists in the mind, not in objects”. Even the necessity of arithmetic and geometry is to be found in the mind. Hume wrote:

"[T]he necessity, which makes two times two equal four, or three angles of a triangle equal to two right ones, lies only in the act of the understanding, by which we consider and compare these ideas…"

So Hume believed that the source of necessity (vis-à-vis cause and effect) is to be found in human minds. That is, it is through the customary experience of a cause being followed by a particular effect that we come to believe in a necessary connection between the two. Or:

"[A]fter frequent repetition, [we] find that upon the appearance of one of the objects the mind is determined by custom to consider its usual attendant [i.e., effect]…"

It might be thought that our ideas of necessity or necessary connection come instead from our awareness of our own internal mental acts or volitions. Hume reported this suggestion:

"Some have asserted that we feel an energy or power in our own mind; and that, having in this manner acquired the idea of power, we transfer that quality to matter…The motions of our body…obey the will; nor do we seek any further to acquire a just notion of force or power."

Despite all the above, Hume believed that the external problem of necessity (or “power”) is simply replicated in terms of minds. And as with matter (or necessarily connected objects) “a [mental] cause has no more a discoverable connection with its effects than any material cause has with its proper effect”. Hume continued:

"In short, the actions of the mind are…the same with those of matter."

Again, all we perceive are mental “constant conjunctions”. Hume talked about “internal impression” rather than external impression. And, as with external causes and effects, “we should in vain hope to attain an idea of force by consulting our own minds”.

Where do we think we get the idea of necessity - or of a necessary connection - from?

Necessity: A Cartesian Detour

Hume was at one with the Cartesians, who believed that matter "is endowed with no efficacy". However, we do indeed perceive motion and causation; therefore, according to Hume, the Cartesians concluded that "the power that produces [efficacy] must lie in the Deity". But just as Hume believed that there is neither necessity in the world nor in the mind (though there is, in the mind, a false perception of necessity), so he also concluded that, since we have no empirical impressions of necessity in the Supreme Being, necessity can't be found there either.

It's not causation that Hume denied, only necessary causation. When Hume emphasised the psychological nature of necessity, he wasn't thereby denying causation or saying that it's mind-dependent.

Kant actually defended Hume against charges of idealism:

"The question was not whether the concept of cause was right, useful, and even indispensable for our knowledge of nature, for this Hume had never doubted; but whether that concept could be thought by reason a priori [therefore necessary]."

Hume himself was very clear about this. He wrote:

"[T]hat the operations of nature are independent of our thought and reasoning, I allow it…"

However, Hume continued thus:

"But if we go any further, and ascribe a power or necessary connection to these [operations], this is what we can never observe in them…"

It is the mind that projects necessity into the external empirical world.

So we see that certain objects are constantly conjoined as cause and effect. Then we assume that this relation is necessary. But, as Hume argues, we can never have any idea of this necessary connection or power between them.

There is a host of other words used in everyday language which showed Hume that we believe in the necessary connection of certain causes with certain effects: "efficacy", "agency", "power", "force", "energy" and so on. But, according to Hume, these terms are all closely connected to what we deem to be necessity, and so they don't really explain the concept of a necessary connection.