Monday, 16 June 2014

Brain Languages/Grammars?

This is the kind of language often used by mind-as-computer-programme philosophers:

grammar: “the set of brain programmes by which sentences are generated...” - J.Z. Young
Immediately it can be seen that grammar is taken to be non-conscious or non-cognitive. That is, J.Z. Young isn't talking about the grammar which kids learn (or don't learn) at school. He's talking about grammar that is, as it were, built into the brain. Or grammar that's a result of the evolution of the human brain.

So firstly we have programmes in the plural; and they determine the grammar of the sentences which we speak (often from a very young age, which isn't surprising considering what's just been said). In other words, we speak in the way we do because of the “programmes” which are, somehow, quite literally in the brain. Our brains literally make us speak the way we do.

The point is that brain-grammar isn't always the same as, say, the English grammar found in books and mouths. Nonetheless, it's surely the case that English grammar has replicated brain-grammar because, after all, the two are often, though not always, the same.

Despite all that, some scientists and philosophers argue that nothing purely in the brain (as it were) can be deemed a grammar. The main argument is that grammar, like language itself, is a social phenomenon and also a product of persons, not just of brains.

The mix-up may have come from computer-programme philosophers conflating codes and grammars. Yes, there may well be things which can be deemed codes within the brain; though not grammars. (Can't a similar - or the same - argument be made for rejecting the notion of codes within the brain? After all, codes are also social and the products of persons, not brains.)

The basic argument is that codes don't have grammars. Therefore brains don't have grammars. However, couldn't a brain-code be the basis for a mind-grammar? That is, we firstly have codes in the brain; then those codes generate spoken- and thought-grammars.

It's argued that codes don't need grammars quite simply because they don't even need words. And words, of course, come along with grammars. (Perhaps even 'Stop!' has a hidden or elliptical grammar of sorts.)

So, on this argument, what is a code if it doesn't have a grammar?

Codes can be seen as ciphers. A code can - or does - encode what something else (say a text) means or says. The code encodes the basic shape (as it were) of the text it encodes. There is a translation process from the text to the code. The code, no doubt, is simpler than the text.

For example, a code for the sentence “I'm going to kill you” could quite simply be a loud bang on a table. Of course the loud bang doesn't really contain that information. It only does so on the assumption that the interpreter knows what a loud bang means within a certain context. In other words, the code is given a meaning by persons. It only has meaning to those who know the cipher – the decipherers.

One well-known code is Morse code. Now you can put any text into Morse code – at least in principle. Of course only decipherers can decipher the cipher.
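A minimal sketch in Python (my own illustration; the partial Morse table and the function names are hypothetical choices, not anything from the writers discussed here) makes the point vivid: encoding and decoding is nothing but a lookup over symbol shapes.

# A deliberately partial Morse table; real Morse covers the whole alphabet.
MORSE = {
    'A': '.-', 'E': '.', 'H': '....', 'I': '..',
    'L': '.-..', 'O': '---', 'T': '-', ' ': '/',
}

def encode(text):
    # Swap each character for its cipher, symbol by symbol.
    return ' '.join(MORSE[ch] for ch in text.upper() if ch in MORSE)

def decode(morse):
    # The same lookup run backwards; nothing here "knows" what the message means.
    reverse = {v: k for k, v in MORSE.items()}
    return ''.join(reverse.get(token, '?') for token in morse.split(' '))

print(encode("HELLO"))    # .... . .-.. .-.. ---
print(decode(".... .."))  # HI

The programme shuffles dots and dashes; the deciphering that matters - knowing what the message means - happens only in the person reading the output.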

The machine that implements the Morse code doesn't understand the messages it's delivering. And neither does the machinery involved in a phone call know what's being said. Still, that machinery is delivering the message from person A to person B. Everything is contained within the electricity and the machinery – except the understanding.

*) People often talk about anthropomorphism when it comes to animals; though not when it comes to computers. Yet anthropomorphism towards computers may be the cause of many philosophical and even technical problems.

For example, computer scientists talk about programmes “communicating” with the computer. Roy Harris says that this is equivalent to saying that “I communicate with the light bulb by switching it on”. Yes, something physical and indeed causal has happened – but there has been no genuine communication. (Wouldn't that depend on the semantics of the word 'communication'?) For a start, the light bulb, and the switch, aren't persons. And even if they could use language (like a computer), they would still not be persons engaged in social interaction within given contexts.

As a result of this anthropomorphism about computers, computer scientists call FORTRAN, BASIC, etc. “languages”. Though, as already argued, there's no genuine language and no genuine communication involved.

We could say, like John Searle, that it's “all syntax and no semantics”. These computer “languages” are nothing more than sets of rule-governed symbols which generate certain operations within a computer. It can be said that it's the shape, not the meaning/semantics, of those symbols that generates the operations of the computer. Again, there's no genuine semantics or “aboutness”.
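As a rough illustration (the toy “language” below is invented for this post, not any real programming language), here is what purely shape-driven, rule-governed symbol manipulation looks like: the machine matches character strings and fires operations, and never consults any meaning.

# The three "instructions" are invented strings; the machine matches their
# shape (the literal characters) and triggers an operation.
RULES = {
    'INC': lambda state: state + 1,
    'DEC': lambda state: state - 1,
    'DBL': lambda state: state * 2,
}

def run(program, state=0):
    # Dispatch purely on each token's shape; rename 'INC' to 'XYZ' throughout
    # and the behaviour is unchanged.
    for token in program:
        state = RULES[token](state)
    return state

print(run(['INC', 'INC', 'DBL']))  # prints 4

Rename the tokens to any other strings and update the table, and the behaviour is identical – which is just the point that it's the shape, not the meaning, doing the work.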

Roy Harris puts this all-syntax-though-no-semantics argument (he also argues against Searle's position) by talking about symbols “designating” things. Symbols can only designate things if they have semantics or aboutness (intentionality). Or they only develop a semantics through human interpretation and, before that, programme design. In and of themselves, computer symbols have no semantics – only as-if semantics (as Searle puts it).

In parentheses I mentioned that Roy Harris isn't happy with Searle's position.

Firstly he says that Searle argues that

“brain programmes are inadequate to explain mental events because the programmes themselves are definable in purely syntactic terms”.
He says that this position is “question-begging” because the same problems would manifest themselves if “programmes were defined in semantic terms too”.

For a start, I don't think Searle is talking about “mental events” generically; just those involving language or those that are about the world. Some mental events may not need a semantics.

To put this another way: a grammar isn't “just any set of rules or procedures”. And if a grammar isn't simply a set of rules, then a language can't be either. After all, language is (partly) parasitical on grammar.

Basically, there's more to both grammar and language than symbols and rules. If this weren't the case, then computers would be genuine language-speakers. Not only that: analyzing symbols alone, even when encoded electrically (as in a telephone wire), would be enough to make us understand what's being communicated through those telephone wires. We wouldn't actually need to hear the conversation. We would only need to “read” the electrical currents going through the wire.

*) You can see how far the idea of language/s in the brain has gone when neuroscientists and even philosophers attempt to map brain (or neural) happenings onto spoken language or onto grammar/parts of speech.

For example, take the sentence

“John hit Bill.”
 
It's quite possible that every time someone utters these words a particular and fully-specifiable neural event occurs. A firing pattern Y in brain area B, for example. (Due to reasons of holism, as well as externalism, it's unlikely that another person's utterance of that exact sentence will result in exactly the same neural pattern. Indeed even the same person uttering the same sentence at a different time may result in a slightly different neural pattern.)

In addition, say that the same person utters

“John didn't hit Bill.”

In that case, the negation of the earlier sentence may well result in the previous neural pattern being reversed. (Although it's hard to understand what that could mean – neurologically.)

Would all that mean that the neuroscientist had discovered the brain-grammar of negation or of the word 'not'? Would the brain be instantiating the logical/grammatical Rule of Negation? No. Why? Because that grammar could only be interpreted by the neuroscientist. And if it could only be interpreted by the neuroscientist, then it couldn't be a grammar at all.

It's often said that "correlation doesn't equal causation". Simply because X and Y often occur together, that doesn't mean that X caused Y or that Y caused X. They simply occur together. (I rise at the same time every morning as John Smith. However, he doesn't cause me to rise at that time and I don't cause him to rise at that time. There are connections, sure; just not causal ones.) Similarly, there's a correlation between the uttered sentence “John hit Bill” and a particular neuronal pattern. However, the neural firing doesn't mean “John hit Bill” and it can't even be read as 'saying' the words “John hit Bill”. There is both causation and correlation in this case (unlike my rising at the same time as John Smith). However, causation and correlation still don't mean that the neural happenings mean “John hit Bill”. If this were the case, according to Roy Harris, then

“the regular correlation between the states of the electrical device which ensures that the red, amber and green lights of a traffic signal come in the right order and the observable behavior of the traffic when they do so proved that the traffic signal must have internalised rules of the Highway Code” (513).

The basic point is that brains don't understand or even use these neuronal firings – persons do. And when it comes to traffic lights, the colours only have meaning to the persons who interpret or translate the colours. The brain on its own has no grammar or even a language – only persons do. Likewise, traffic lights don't understand signals or colours – pedestrians and car-drivers do.

Roy Harris puts all this in terms of the nature of persons and consequently the social reality of language. And here he sounds very much like the “ordinary language” philosophers who downplayed the previous tradition which believed that language is simply a set of rules or grammars. Here we must include what is called pragmatics and the ideas of philosophers such as J.L. Austin, who talked about such things as “performatives” and whatnot. Or as Roy Harris puts it:

“For it is the decision to utter particular words at a particular time in a particular context which is the hallmark of human linguistic ability. And this requires situational judgements, communicative intentions, and self-awareness – all of which are properties of the human being, not of the human brain.” (510)
