Friday, 6 November 2015

Thoughts on the Logic of Nineteenth-Century Mathematics


 
[Written circa 2005.]
 
One 19th-century view of a proposition is that it's simply the attribution of a quality to a subject. For example, the quality of being blind can be attributed to an individual man. Of course if we think so strongly in terms of qualities and subjects, then clearly we're committed to the ancient ontological distinction between qualities and subjects (or properties and objects).

The ‘new analytic’ moved away from this model. On that view, a proposition relates two classes of objects, rather than a quality and a subject. Thus the notion of classes was brought into logic.

Thus we have these questions: Does class X belong to class Y? Or alternatively: How many subclasses belong to class Z? This thinking in terms of classes (rather than subjects and qualities) clarified and helped codify what many 19th century logicians were trying to do.
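
Put in modern set-theoretic notation (which the 19th-century logicians themselves didn't use; this is only an illustration), the two questions look like this:

```latex
% "All X are Y" read as class inclusion, and the count of subclasses of a finite class Z.
\[
\text{``All } X \text{ are } Y\text{''} \quad\Longleftrightarrow\quad X \subseteq Y
\]
\[
\#\{\, W : W \subseteq Z \,\} \;=\; 2^{|Z|} \qquad \text{(for a finite class } Z\text{)}
\]
```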

At that time logic was seen as being purer than both geometry and mathematics. This was the case, so logicians thought, because logic didn't concern itself with such things as space or quantities. Now, the mathematicians came to think, mathematics needn't refer to anything like space or quantity either. These are simply accretions, or ways of simplifying mathematics and geometry. In fact mathematics, like logic before it, is essentially about nothing. Or, more precisely, it's not about the world and not even about the necessary features of the world. It's completely non-empirical. (Though clearly maths can be applied to the world.)

Thus mathematics was seen as the “science of order”. Maths isn't about things or processes: it's about the relations between objects and processes. To put this bluntly: mathematics is the analysis of implications. That is, what implies what, and why does this imply that? It isn't concerned with the things which imply one another, but with the implications themselves.

What is the nature of implication? Why does X imply Y? And if you're only concerned with implications (including inference, entailment, and consequence), then you're not concerned with truth. Truths, essentially, are exclusively about the empirical world (as the logical positivists later stressed). And because mathematics isn't concerned with the empirical world, then it's not concerned with truth either.

This parallels Wittgenstein’s distinction between truth and correctness. Put simply, something is correct if it abides by certain conventions. However, something may be true regardless of conventions. For example, there may be truths about how certain conventions have got things wrong.

Mathematics is a conventional phenomenon; therefore it's concerned with correctness and not truth.

Platonists believe that mathematics is concerned with truth because they have a view of mathematical objects which parallels the empiricist view of the relation of reference between names/statements/etc. and the world of concrete objects. To Plato, the objects mathematics is concerned with are abstract and non-spatiotemporal. However, there was still some kind of correspondence-relation between the statements of mathematics and the abstract objects to which they referred. The fact that these objects are non-empirical didn’t mean that Plato jettisoned the old notion of correspondence. In Plato’s book, mathematical statements need to correspond with his ‘ideal objects’, whether that's the perfect circle or whatever.

Correspondence is a completely wrong way of looking at mathematics. Those who think in terms of correspondence are seeing mathematics through the eyes of someone who thinks that it somehow matches or parallels empirical reality. What does matter in mathematics is what follows from certain postulates.

Thus in a sense it doesn't really matter what these postulates are or what their nature is: what matters are the things we derive from them. Indeed if one set of postulates generates more fruitful theorems than another, then, to that extent, they're better postulates. The truth of these postulates is completely irrelevant to mathematicians. If they're good tools with which we can derive a superabundance of theorems, then they're good postulates. However, they still aren't true postulates. They aren't even correct postulates. What is derived from them can be either correct or incorrect; though not true or false.

It follows from these facts that it doesn’t really matter which postulates (or axioms) a geometrical or mathematical system uses. Thus there can be alternative mathematical and geometrical systems. The nature of the postulates or axioms will determine the nature of the systems derived from them. If these postulates and axioms needn't be true, then we may have an indefinite number of geometrical and mathematical systems on our hands. If truth isn't the issue, then only correctness matters. And a system is correct if it contains no contradictions.

Thus we're concerned with validity, not truth. A conclusion can be validly derived and still be false. Validity is, therefore, system-relative. Something is valid if it doesn't create a contradiction within a given system. Truth, on the other hand, is purported to transcend all systems. Something is either true or false. Full stop. A statement doesn't need a system to validate (or justify) its truth. One can only be correct relative to a system; whereas one can utter a truth outside all systems.
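
A minimal worked example (my own, not drawn from the 19th-century texts) makes the point that a conclusion can be validly derived and still be false:

```latex
% A valid derivation whose first premise and conclusion are both false.
% Validity concerns only whether the conclusion follows from the premises.
\begin{align*}
&\text{P1: All cats are reptiles.}          &&\forall x\,(Cx \rightarrow Rx)\\
&\text{P2: Tom is a cat.}                   &&Ct\\
&\text{C:\ \ Therefore, Tom is a reptile.}  &&Rt
\end{align*}
```

The step from P1 and P2 to C is correct relative to the rules of the system (universal instantiation plus modus ponens), whatever the truth-values of the individual sentences.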

Proof also became important in the new mathematics. Here too truth isn't relevant: we can have a proof that doesn't depend on truth. What makes something a proof depends on the system to which it belongs - that is, on the nature of the other parts of the mathematical or logical system, not on how it stands on its own. Proof is relative to the system in which it works as a proof. A proof utilises the very parts of the system within which it proves something. If it were a truth about the system, on the other hand, it wouldn’t need to have any relation to that system. What is true is true regardless of its relation to the system. A proof, on the other hand, has a relation to the system in which it provides a proof.

Thursday, 29 October 2015

The Jungle of Rival Conceptual Schemes


[Written circa 2005.]

Tony le Mesma: What we all do, we are all channellers. We channel from within to without.
Alan Partridge: I’m going to pin you down here. Can you be more specific?
Tony le Mesma: I am a man who harnesses the harmony that is within us all.
Alan Partridge: That’s more vague… I want you to be more specific.
Tony le Mesma: We have within us a consciousness which is only partially realised.
Alan Partridge: I think I know what you’re saying. Are you saying that if I, Alan Partridge, harnessed the harmony or spirits within me. And somehow channelled the energy up some kind of conduit of consciousness. A cloud…of…I’m sorry, I’ve absolutely no idea what I’m talking about. I’m completely lost.

- From Knowing Me, Knowing You, with Alan Partridge


                                   ****************************************************

Can two competing conceptual schemes (CSs) both be true? Is truth defined internally to each particular CS?

William P. Alston [1979] said that it may be “a piece of outrageous imperialism” to reject the CSs (or Wittgensteinian “language-games”) of what he called “different spheres of discourse”. It's easy to find problems with such a tolerant pluralism because it seems to allow anything to be uttered (even things against tolerance and liberalism); so long as it belongs to a particular “discourse” and is (presumably) internally coherent. Wittgenstein, for example, was particularly keen on religious language-games.

There's a problem with religious discourse (or language-games). If a religion has its own discourse (which may well be self-contained, internally consistent and coherent), does that automatically mean that it does (or can) utter truths? Some coherentists will say, “Yes”. Thus one religion (or quantum physicist!) may say: Both A and not-A. Or, more likely, what if one religion says P and another religion says not-P? (For example, “God is always one” and “God is three in one”.)

Interestingly enough Alston gives various – acceptable? - examples of alternative discourses. They include “common sense talk about the physical environment”; “talk about personal agents”; “moral discourse”; “religious discourse”; “scientific theorising”; and “experiential reports”. The problem is that all these examples are not of a piece. Some are mutually contradictory (or potentially so). Others aren’t (necessarily) contradictory.

For example, “common sense talk”, “scientific theorising” and “experiential reports” needn't always contradict one another. (They may if you're an eliminativist; though not necessarily if you're simply some kind of reductionist.)

Here is Quine squaring two CSs:

“Here we have two competing conceptual schemes, a phenomenalistic one and a physicalist one. Which should prevail? Each has its own advantages…Each, I suggest, deserves to be developed. Each may be said, indeed, to be more fundamental, though in different senses: the one is epistemologically, the other physically, fundamental.” [1948]

It's an extremely tricky problem to determine whether or not alternative CSs contradict one another. Take “common sense talk” and “scientific theorising”. (It must be remembered that the former sometimes includes bits of the latter.) This example is indeed problematic. Of course certain examples of “religious discourse” clash violently with “scientific theorising”. Sometimes (or many times) certain examples of religious discourse clash with “common sense talk” too. Indeed quantum-physics talk and certain examples of “religious discourse” are of a piece when it comes to “common sense talk”: they both often clash with common sense.

We can, however, be liberal and tolerant – though only up to a point.

For example, there are no immediate problems with Nelson Goodman’s position in this intelligent passage:


“…many different world-versions are of independent interest and importance, without any requirement or presumption of reducibility to a single base. The pluralist, far from being anti-scientific, accepts the sciences at full value. His typical adversary is the monopolistic materialist or physicalist who maintains that one system, physics, is pre-eminent and all-inclusive, such that every other version must eventually be reduced to it or rejected as false or meaningless…The pluralists’ acceptance of versions other than physics implies no relaxation of rigor but a recognition that standards different from yet no less exacting than those plied in science are appropriate for appraising what is conveyed in perceptual or pictorial or literary versions.” [1978]

Elsewhere Goodman says that “we should take seriously the metaphors that artists use to restructure our worlds”.

However, Goodman wasn't saying that “anything goes”. He wasn't taking a position similar to Paul Feyerabend’s. (Did Feyerabend think that fascism goes?) Goodman was a mitigated pluralist. Indeed Goodman himself would have accepted reductionism within certain limits.

Thus artists can be physicalists too. And Goodmanian pluralism needn't – and didn't – countenance Feyerabend’s voodoo or Alston’s Wittgensteinian position on religion.

There’s a further problem with Wittgensteinian liberalism towards CSs. It means that there can be no scientific criticism (even after taking on board Goodman’s animadversions against “monopolistic materialism”) of religious language-games (or any language-games for that matter). No philosophical criticism either. Religion, or voodoo, is let off the hook. The Wittgensteinian view (taken further, perhaps, by Peter Winch and D.Z. Phillips) leaves all language-games beyond criticism not only from science, but from logic and philosophy too. That's because nearly all such criticisms will evidently be coming from domains outside the religious language-games themselves.

Surely this is far too convenient. What if truth is truth in all domains and actually transcends them? What if truth doesn’t change its aspect simply because it's internal to different CSs?

Can a mystic, therefore, come along and say the following? -


On my view (derived from Wittgenstein), which is wide, coherent and consistent, cats are made of cheese.

This is why many people still admire and sympathise with the spirit (though not every letter) of logical positivism (as did Quine).

Again, we can accept different CSs (even competing CSs) as long as we aren’t too liberal and look out for contradictions between them. Hilary Putnam himself writes:


“…what is in one sense the ‘same’ world…can be described as consisting of ‘tables and chairs’…in one version and as consisting of space-time regions, particles and fields, etc. in other versions.” [1987]

Here’s Goodman again:


“Consider…the statement ‘The sun always moves’ and ‘The sun never moves’ which, though equally true, are at odds with each other…Rather, we are inclined to regard the two strings of words not as complete statements with truth-values of their own but as elliptical for some such statements as ‘Under frame of reference A, the sun always moves’ and ‘Under frame of reference B, the sun never moves’ – statements that may both be true of the same world.

“…If I ask about the world, you can offer to tell me how it is under one or more frames of reference; but if I insist that you tell me how it is apart from all frames, what can you say?…”
[1978]

Having sympathised with Nelson Goodman earlier, I’m not altogether happy with his position above.

For example, we could simply ask whether we're talking about the sun itself (and its movements) or the “frames of reference”. When we talk about one, we're not talking about the other. The sun always moves regardless of frames of reference. However, according to “frame of reference B” it “never moves”. This isn't to say that we can escape from “frames of reference” or “modes of presentation” (or even Fregean "senses"); though a distinction can be made nevertheless.

There are also simple cases of mutually-supporting “frames of reference”.

For example, with the naked eye we obviously can’t see particles or fields. With the naked mind we don’t talk about “space-time regions”. Within another CS we do observe particles with scientific equipment and so too with fields (if they are actually observed and aren't "posits"). With a scientific hat on, we talk of “space-time regions”. There’s no problem here, unless one is an eliminativist of some kind.

Alston himself does give an account of what may be incompatible language-games (or CSs); unlike the examples cited earlier, which he took to be mutually autonomous. He wrote:


“The ontologies of different language-games do not all fit into any single scheme. There is no place in physical space for minds, sense-data, or God. Agency cannot be located in the interstices of the physiological causal network.” [1979]

In the above Alston appears to be saying that some CSs are wrong/false - or at least potentially wrong/false. Contrary to the views of Peter Winch and D. Z. Phillips, these CSs don't – or can't – exist in splendid isolation. They contradict each other. Eliminative materialists, for example, deny the existence of propositional attitudes. Therefore eliminative materialism is at war with folk psychology. Atheists deny the existence of God. Davidson denied sensory data. Some determinists reject Alston’s “agency”. And so on. Things aren’t, after all, so cosy.

Liberalism and pluralism have their limits. Few of us are pure liberals. Many of us are mitigated liberals (or mitigated pluralists).

So whereas Alston showed us his pluralism earlier on, he now shows us the limits of his own pluralism and tolerance. Thus if he can have his limits, other people can have their limits too. Consequently, the limits of other people may include limits on Alston’s pluralism (e.g. towards “religious discourse”); whereas Alston himself hints at the unacceptability of certain CSs (e.g. determinism, eliminative materialism, relativism, etc.).

Thus with all these limitations (from Alston and from everyone else), why talk about the “outrageous imperialism” of those who have a strong problem with certain CSs? After all, Alston also has a strong problem with certain CSs. So too, ironically, do conceptual relativists.

Let's go into a little detail here.

Relativists do indeed criticise other CSs.

For example, they don’t have much time for what they call “scientism” and “physicalism”. Yet according to their own standards, they aren't allowed to criticise alternative CSs. Scientism and materialism may be examples of CSs according to which the notion of truth is defined internally. They may be fairly self-contained or enclosed universes. Thus how can another CS criticise them if that very CS (i.e., relativism) is committed to the autonomy of even contradictory CSs? The conceptual relativist (or plain relativist) who appears (on the surface) to be committed to the autonomy of CSs (or language-games), must himself transcend his own CS and therefore take a bird’s-eye view of “scientism”, etc. - and he doesn’t like what he sees. Thus why should the fictional bogeyman - the Scientistic Philosopher - accept the relativist’s criticisms if, according to the relativist's own CS, truth isn't external to CSs? We could even say that the relativist is adopting an almost Nagelian metaphysical realism, without realising that he’s digging his own grave by doing so.

When we push absolute pluralists (towards CSs) far enough we always find their limits.

Pluralism is either self-defeating or self-contradictory. Just as in politics, pluralists can't tolerate anti-pluralists and mustn't be tolerant to the intolerant. Another way of putting this is that pluralism isn't (or shouldn't be) an absolute.

The more we analyse the reality of CS pluralism and tolerance the more we see that it doesn’t in fact exist. Therefore tolerance and pluralism may not be the virtues they seemed to be. Of course we're not talking politics here. In politics, pluralism and tolerance may indeed be commendable. However, an eliminativist won't use his fists to enforce his CS on those who use, say, religious discourse. Thus the parallel with politics isn't entirely valid. Tolerance and pluralism in politics may be virtuous; whereas in philosophy they may be vices.

Thus William P. Alston's talk of “outrageous imperialism” (outside the domain of politics) may be misplaced. It may even be a Rylian “category mistake”.

References

Alston, William P. (1979) 'Yes, Virginia, There is a Real World'.
Goodman, Nelson. (1978) Ways of Worldmaking.
Putnam, Hilary. (1987) 'Pragmatic Realism', in his The Many Faces of Realism.
Quine, W.V.O. (1948) 'On What There Is'.

Wednesday, 28 October 2015

Chalmers' Naturalistic Dualism vs. Dennett's Third-person Absolutism


 

"Dualists" and the so-called "mysterians" aren't the only people who believe that Daniel Dennett is a "scientistic philosopher" – Dennett thinks that about himself!

Dennett refers to his own overriding philosophical position as "third-person absolutism".

So what does a third-person absolutist believe?

According to David Chalmers, Dennett believes that “what is not externally verifiable cannot be real” [2010]. To be more explicit: there's a fundamental connection between any x being real (or existing) and whether or not we can “externally verify” that x. Thus, if we can't externally verify x, it doesn't exist. It's not real.

This is a position one might have expected from the logical positivists of the 1930s and 1940s. In addition, the words “third-person absolutism” are a good way of putting the stance of certain forms of behaviourism in the 1920s, 30s, 40s and 50s.

Dennett's third-person absolutism even deals with first-person phenomenological descriptions (which may water down his radical position). It allows and encourages “a third-person perspective on one's first-person perspective” [1997]. Indeed if we follow the logic of Dennett's behaviourism to its end, can we accept a first-person perspective at all? (That is, even if that perspective is accounted for in third-person terms.) In other words, is there a first-person perspective on anything in Dennett's scientific book?

This position not only seems extreme: it also has the flavour of a diktat (if a normative scientific diktat). It's not unlike Karl Popper's falsificationism or the logical positivists' various principles of verification.

Dennett explains his third-person absolutism when he states the following:

"I wouldn't know what I was thinking about if I couldn't identify them by their functional differentia." [1996]

It's not surprising, then, that Dennett has asked David Chalmers


“to provide 'independent' evidence (presumably behavioral or functional evidence) for the 'postulation' of experience”. [2010]

Dennett's language here is ringing with scientific cliches. He talks of “independent evidence” and the “postulation of experience”. I wouldn't ordinarily call the use of such scientific terms cliches. However, when it comes to discussing consciousness (or experience), the word cliche is surely apt. Or to put that another way: 


If we were talking about research into genetics or black holes, such words as “independent evidence” and “postulation” would certainly be acceptable.

Thus the idea that consciousness is “postulated” is very strange. And that's why Chalmers says that consciousness “is a phenomenon to be explained in its own right”. Then again, if consciousness is behaviour plus functionality, then we do indeed have “independent evidence” for consciousness. Or as Chalmers expresses Dennett's position:

“[H]e thinks that the only sense in which people are conscious is a sense in which consciousness is defined as reportability, as a reactive disposition, or as some other functional concept.” [2010]

Of course this is simply to beg the question against consciousness being a phenomenon to be explained in its own right.

Chalmers goes further into Dennett's behaviourism (a word which Chalmers doesn't use here) when he says that (in Dennett's Consciousness Explained), “heterophenomenology” (like Quine's “overt behaviour”) is deemed to be “the central source of data” [1997]. He then says that 


“the only 'seemings' that need explaining are dispositions to react and report” [2002].

Thus Chalmers believes that it's an “unargued assumption that such reports are all that need explaining” [1997]. To top that: Dennett himself is quoted as saying that 


"'if something more than functions needs explaining, then materialism cannot explain it'". 

Chalmers, with added irony, says that he “would not disagree” with Dennett's account of materialism's possible failings. Dennett does see that possibility as a genuine threat to materialism, and that's precisely why he fights the conclusion and comes out with so many “counterintuitive” (his own word!) positions.

Yes, Dennett sees consciousness, qualia and experience as a challenge to materialism. Other philosophers don't see such things that way. As for Chalmers, he does see such things that way. And that's why he writes the following:

“What's controversial about my own view is not so much that I defend the existence of qualia, but that I argue that they are nonphysical.” [1998]

It can also be said that functionalism serves Dennett's third-person absolutism: his third-person absolutism doesn't serve his functionalism. In other words, seeing things exclusively in terms of functions makes one's third-person absolutism purer and more complete (or “absolute”). Functionalism is a means to a third-person (therefore scientific) end.

Chalmers himself spots one problem with third-person functionalism when he says that “the idea that function is all we have access to at the personal level” is “false”. I would say: obviously false.

Chalmers as a Functionalist

One can get a measure of how complete Dennett's functionalism is (vis-à-vis consciousness) when Chalmers cites some examples of mental states which Dennett has given a functionalist explanation of. Chalmers writes:

“... it is far from obvious that even all the items on Dennett's list - 'feelings of foreboding', 'fantasies', 'delight and dismay' - are purely functional matters... One's 'ability to be moved to tears' and 'blithe disregard of perceptual details' are striking phenomena, but they are far from the most obvious phenomena that I (at least) find when I introspect.” [2010]

Prima facie, it does seem amazing that Dennett sees such things as solely functional matters. Indeed it's hard to understand what it could mean to say that the “ability to be moved to tears” is a purely functional matter. (What's with the word "ability"?)

The strange thing, however, is that Chalmers himself can be classed as a functionalist. Or at least as a mitigated functionalist. The problem is that Chalmers also believes that other things need to be added to the functionalist accounts of mind and consciousness.

For example, Chalmers says that he doesn't

“think that consciousness can be logically deduced from either structure or function, but it is still closely correlated with these things”. [1998]

Clearly that means that something non-functional (i.e., consciousness, qualia or experience) needs to be added into the functionalist pot.

Chalmers even goes so far as to say that he holds that “what matters is the functional organization”. Thus, if a


“silicon system was set up so that its components interacted just like my neurons, it would be conscious just like me”. [1998]

Moreover, he states that

“[a]ny two physically identical systems in the actual world will have the same state of consciousness, as a matter of natural law”. [1998]

Type-A & Type-B Materialism

Confusion is often created because what Chalmers calls “type-B materialists" don't deny that consciousness exists (as Dennett does). However, that's simply because 


“the term 'consciousness' is defined as something like 'reportability' or some other functional capacity”. 

In other words, to such a materialist, saying that consciousness exists is simply another way of saying that reportability, discrimination, internal access, etc. exist. What's more, according to Chalmers, type-B materialists also believe that

“there is no interesting fact about the mind, conceptually distinct from the functional facts, that needs to be accommodated in our theories”. [1997]

Chalmers' “type-A materialists", on the other hand, believe that


“there is not even a distinct question of consciousness: once we know about the functions that a system performs, we thereby know everything interesting there is to know”. [1997]

Dennett fits this latter description. Nonetheless, many fellow materialists claim that Dennett doesn't actually say that consciousness doesn't exist. Though, as just stated, that's because he believes that reportability, discrimination, etc. exist. (Though isn't this like someone saying that “God exists” and then it turns out that what he means by the word “God” is "The ghost who lives at the bottom of my garden"?)

Chalmers himself has Dennett down as a type-A materialist. Again, Chalmers doesn't claim that Dennett denies consciousness outright. Dennett simply states that the sum of mind-brain functions and behaviour is what constitutes consciousness. Of course this, to many, still amounts to a complete denial of consciousness.

Chalmers' Naturalistic Dualism



One is tempted to think that the physicalists who class Chalmers as a “dualist” are effectively indulging in an ad hominem attack. In other words, it's almost a term of abuse. Nonetheless, Chalmers classes himself as a “dualist”; or, more accurately, his position is one of "naturalistic dualism".

Why “naturalist”?

Because Chalmers believes that mental states and consciousness itself are caused by physical systems.

So why “dualist”?

Because Chalmers believes that mental states - or consciousness generally - are ontologically distinct and also irreducible to the physical.


References

Chalmers, David. (2010) The Character of Consciousness

--- (1997) 'Moving Forward on the Problem of Consciousness'
--- (1998) 'An interview with David Chalmers', by David Chrucky
--- (2002) 'Consciousness and its Place in Nature'
Dennett, Daniel. (1991) Consciousness Explained 


Sunday, 18 October 2015

Paul Churchland on Non-Propositional Animal Thought?

Patricia and Paul Churchland
The philosopher Peter Geach hinted at the possibility of non-linguistic concepts when he wrote that the “ability to express a judgement in words thus presupposes a number of capacities previously acquired” [1958]. This is reminiscent of Paul Churchland's position. Churchland says, in greater detail, that


“how to formulate, manipulate, and store a rich fabric of propositional attitudes is itself something that is learned…” [1981]

Again, elsewhere in the same paper:


“…language use is something that is learned, by a brain already capable of vigorous cognitive activity…language use appears as an extremely peripheral activity, as a species-specific mode of social interaction which is mastered thanks to the versatility and power of a more basic mode of activity. Why accept, then, a theory of activity that models its elements on the elements of human language?”

The first quote above (from Churchland) would be given an immediate reply by a follower of Jerry Fodor. He'd say that the formulation, manipulation and storage of “a rich fabric of propositional attitudes” can be accounted for by something linguistic or at least something language-like: the language of thought. Thus we don’t escape from language here. Indeed, with more relevance to the issue of animals' non-linguistic concepts, Fodor says that the cognitive activity of animals could also be “linguaformal”.


Although Churchland may accept that the LOT could account for our learning (in the first place) how to “formulate, manipulate, and store a rich fabric of propositional attitudes”, his answer is that our learning to manipulate propositional attitudes is actually based on non-linguistic brain phenomena. To him, it's a question of the following:


“…a set or configuration of complex states…figurative ‘solids’ within a four- or five-dimensional phase space. The laws of the theory govern the interaction [“formulation”?], motion, and transformation [“manipulation”?] of these ‘solid’ states within that space…”

The point of bringing in Churchland is that - and we needn't accept his whole conceptual scheme - if he supplies us with possibilities/actualities of non-linguistic “cognitive activity”, then clearly this can be co-opted to show the same for non-linguistic concepts. (It’s a shame that Churchland himself doesn’t tackle concepts here.)


Fodor muddies the water by claiming that animal “cognitive activity” may also be “linguaformal”. The problem is that Fodor’s use of the word lingua (in what appears to be, inferentially, his acceptance of an animal Language of Thought) may be a use of a word that's so vague (vis-à-vis animal, not human, LOT) that it doesn’t satisfy any of the usual criteria for being a language.


Like Churchland, however, Fodor doesn’t say much about animals and non-linguistic concepts.


Now take Churchland’s reference to non-linguistic “representations” which can also be co-opted (to some extent) in order to talk about non-linguistic concepts. Here's Churchland talking about representations:

“Any competent golfer has a detailed representation [concept] (perhaps in his cerebellum…) of a golf swing. It is a motor representation [concept]…The same golfer will also have a discursive representation of a golf swing (perhaps in his language cortex…)...” [1989]

And later:


“A creature competent to make reliable colour discriminations has there developed a representation of the range of familiar colours, a representation that appears to consist in a specific configuration of weighted synaptic connections…This recognition depends upon the creature possessing a prior representation…This distributed representation is not remotely propositional or discursive…It…makes possible…discrimination, recognition, imagination…”

What's been said above makes sense from an evolutionary perspective. At the level of species, there must be a cognitive continuum between animal and human thought. At the level of individuals, there's also a continuum between what can be called proto-thought and linguistic thought (or verbal expression).


In the above, Churchland provides some of the scientific and philosophical details for such an argument; although, again, he doesn't tackle the subject of animal thought explicitly.

References

Churchland, Paul. (1981) ‘Eliminative Materialism and the Propositional Attitudes’
- (1989) ‘Knowing Qualia: A Reply to Jackson’
Fodor, Jerry. (1987) ‘Why There Still Has to Be a Language of Thought’
Geach, Peter. (1958) Mental Acts: Their Content and Their Objects
 


Friday, 16 October 2015

My Dog Believes that x is F


According to Fred Dretske, concepts can be very rudimentary indeed. He writes:

“One can…see armadillos without seeing that they are armadillos, but perhaps one must…see that they are (say) animals of some sort…in seeing an object [one must] see that it is an object of some sort. To be aware of a thing is at least to be aware that it is…how shall we say it?…a thing. Something or other.” [Dretske, 1993]

In the above, Dretske is getting down to bottom-line quasi-Kantian concepts. Dretske appears to argue that animals don’t even have such basic concepts. My position is that if we insist on seeing concepts as exclusively sentential or linguistic (in any vague sense), then we will, by definition, exclude animals from concept-use. I think that such a position is both arbitrary and unnecessary.

Here is Dretske making a distinction between awareness of facts and awareness of things:

“Consciousness of facts implies a deployment of concepts. If S is aware that x is F, then S has the concept F and uses (applies) it [to x]… awareness of things (x) requires no fact-awareness (that x is F, for any F) of those things…there is no reasonably specific property F which is such that an awareness of a thing which is F requires fact-awareness that it is F.”

Perhaps all that the above amounts to is that an animal (or animals) hasn’t got our own concept of x (e.g., [human]). However, it may have its own. Our concept [human] may be dependent on sentential constructions; whereas an animal’s concept [C] may be based on (or partly made up of) mental images [see Brian Loar, 1990].

How can an animal be aware of a thing (x) without having concepts which help it differentiate or individuate that thing? How can it be aware of x without the application of concepts to x? Doesn’t individuation and the application of concepts also entail (or at least imply) Dretske’s “fact-awareness”?

Of course “that P” isn't,  strictly speaking, applicable to animals if it's seen exclusively as a linguistic formulation (or a strict logical schema). Though need concepts and factual awareness be linguistic (or sentential) in form? Indeed couldn’t there be a non-linguistic version of  - or alternative to - that P? Can we really distinguish “awareness of things” from “awareness of facts”? Perhaps things are, in a sense, bundled facts; or at least they are determined or delineated factually or conceptually.

My parents’ dog is called Joe. He's aware of a thing, Paul Murphy, by being aware (non-linguistically) that P. (i.e., that x, in front of him, is Paul Murphy.) Of course he doesn’t know me as 'Paul Murphy'.

Dretske almost seems to be saying that unless one knows that P (or a non-formal, though linguistic, version of it), then one has no “fact-awareness”. This is reminiscent of Descartes’ exclusion of animals from rationality and even thought because they didn’t speak French; or, more fairly, because they didn’t speak any human language. This is Descartes himself on that subject:

“…it has never yet been observed that any animal has arrived at such a degree of perfection as to make use of a true language; that is to say, as to be able to indicate to us by the voice…anything which could be referred to thought alone, rather than to a movement of mere nature; for the word is the sole sign and the only certain mark of the presence of thought…” - Letter to Henry More (1649)

Dretske himself claims as much when he says that the

“cat can smell, and thus be aware of, burning toast as well as the cook, but only the cook will be aware that the toast is burning”.

Yes, that last clause is indeed correct. Who would doubt it? The cat won't sub-vocally think to itself “the toast is burning”. Though it may have its own concept [B] for burning, its own concept [T] for toast, and even its own molecular concept  [BT] for burning toast.

The implication also is that the cat is aware that the toast is burning; though not “fact-aware” that the toast is burning. Of course the cat isn’t aware that the toast is burning in the sense of thinking “the toast is burning” in English, French or in any language (even its own). However, it might be fact-aware of something.

That can be explained in this way.

The cat isn’t aware that “the toast is burning”; though it is aware that the toast is burning. We can say that it's aware of the truth-condition (i.e., burning toast); though not that truth-condition's expression in a natural language (i.e., "the toast is burning").

Despite all the above, Dretske himself says (at the beginning of his paper) that he doesn’t “know about animals”. He may not know about animals in the plural; though he seems to know about the cat in his own example.

If mental-image-based concepts can “pick out a kind” (Loar, 1990), then perhaps some animals don't only pick out an x: they may also believe that x is F (or that P). In a sense, an animal needs to think that x is F in order to distinguish x from, say, y. Of course the belief that x is F isn't sentential, linguistic or even formal or quasi-linguistic. Why should it be? Why must believing that x is F be a (as it were) linguistic by-product?

Take Joe again.

I'm x to Joe. I'm an individuated object to him. That much, surely, must be uncontroversial. How does Joe individuate and differentiate me from, say, y (say, another person or even an inanimate table)? It must do so by believing that x is F (or x, not y, is F). That is, it believes that the object in front of it is Paul Murphy. However, Joe is also applying a concept [P] to a reoccurrence of x. The concept [P] is being applied to x. Therefore, to Joe, x is F.

The concept [P], or [Paul Murphy], may be an agglomeration of atomic concepts: [a certain smell], [certain clothes], [a certain voice], etc., as well as - perhaps more importantly - a concept-image (i.e., a mental image used as the basis for a concept).

Thus x is F (for Joe the dog) means that it applies (or has already applied) the image-concept [I], plus various atomic concepts (including smells, dress, etc.), to x. x, in this case, is a re-occurrence of the object my family - though not the dog - calls 'Paul Murphy'. Therefore the variable x isn't identical to the dog-concept of x. The concept [P] is applied - or belongs - to the object x. For Joe, therefore, x is (indeed) F.
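
The bare structure of that claim can be sketched in a toy model. The following snippet is entirely my own illustration (the feature names and the overlap threshold are arbitrary assumptions); it only shows what it might mean to treat a molecular concept as a bundle of atomic concepts plus a concept-image, applied to a re-occurring object:

```python
# Toy model (my own illustration, not Churchland's, Loar's or Dretske's):
# a "molecular" dog-concept treated as a bundle of atomic concepts plus an
# image-like template. Applying the concept to a re-occurring object x is
# modelled, very crudely, as sufficient feature overlap.

from dataclasses import dataclass

@dataclass
class DogConcept:
    label: str                  # our label for the concept, not the dog's
    atomic_features: set        # e.g. a certain smell, a certain voice
    image_template: str         # stand-in for the stored "concept-image"

    def applies_to(self, observed_features: set, observed_image: str) -> bool:
        # "x is F" for the dog: the concept-image matches and enough of the
        # atomic concepts recur in the observed object.
        overlap = len(self.atomic_features & observed_features)
        return observed_image == self.image_template and overlap >= 2

# The concept [P]: an agglomeration of atomic concepts plus a concept-image.
concept_p = DogConcept(
    label="Paul Murphy",
    atomic_features={"smell:paul", "voice:paul", "clothes:paul"},
    image_template="image:paul",
)

# A re-occurrence of the object x in front of the dog.
observed = {"smell:paul", "voice:paul", "two-legged"}
print(concept_p.applies_to(observed, "image:paul"))  # True: for Joe, x is F
```

Nothing hangs on these details; the point is only that "applying [P] to x" can be given a structural reading that doesn't presuppose sentences.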

Even language-users may use (or have) such non-linguistic (or non-sentential) concepts. Either that, or non-linguistic mental phenomena may be the basis of linguistic concepts.

x is F is conceptual. It needn't be based - or reliant - upon other sentential formulations such as “You know Tony sometimes lies?” or “That guy is a dissembler”. However, the concept (or belief) needn't be purely abstract either. There’s nothing more real (to me) than my mental images of Tony and also of Tony lying. However, lying itself, admittedly, is necessarily linguistic. Thus Joe (or any animal) can't think that Tony (or anyone or anything else for that matter) is a liar. Lying comes with language use (though not deception - animals practise deception).

Thus I’m not saying anything in the above that's very surprising. There are some human concepts that animals not only don’t have; but which they could never have. And not just beliefs/concepts such as “4 + 4 = 8”: far more mundane and basic ones too.

What of a semi-complex belief/concept such as [forests are full of trees]?

My dog-translation of the verbal locution “forests are full of trees” wouldn’t of course be an exact translation in the way “I love you” can be precisely translated into French. So what? Animals aren’t fellow human beings like the French. A dog’s version of “forests are full of trees” may come close (if only in a few respects) to our concept or belief. In any case, why would we require (or need) an exact equivalent (or translation) anyway? I don’t need one to argue my case. The point is that the dog’s version (or alternative) would still be conceptual. It would also be (by inference) a belief. It could even be (in a sense) an example (or version) of a predicate attached to a noun. Of course we'd need non-linguistic interpretations or equivalents of predicates and nouns. That’s not a problem: we have them in philosophy. The predicate “are full of trees” could become some kind of property or attribute of the subject-term “forests”. The noun “forest” could become some kind of non-linguistic subject (or object). Indeed Frege (for one) believed that predicates are concepts; just as others think that concepts are universals.

Now we’ve reached bedrock.

Thought Before Language?

Something (or some things) must come before our linguistic expressions: both as a species and as individuals. Our linguistic expressions didn’t occur ex nihilo.

Here’s Paul Churchland making related points:

“…language use is something that is learned, by a brain already capable of vigorous cognitive activity; language use is acquired as only one among a great variety of learned manipulative skills; and it is mastered by a brain that evolution has shaped for a great many functions, language using being only the very latest and perhaps the least of them. Against the background of these facts, language use appears as an extremely peripheral activity, as a species-specific mode of social interaction which is mastered thanks to the versatility and power of a more basic mode of activity. Why accept, then, a theory of cognitive activity that models its elements on the elements of human language?” [1981]

I believe that certain animals have beliefs and concepts even though I also believe that language and language-based concepts utterly shape human thought and experience.

Joe’s x is F could be even more abstract. It could be x (Paul Murphy) is an F (a human being). I needn't stress again that F (i.e., the dog-concept [H]) needn't be our concept [human being]. Though Joe may have some concept (or concepts) of a human being or of what we call human beings. It may notice that we only have two legs; that we don’t sniff each other’s backsides; and that we smell a certain way (i.e., unlike him). Even these atomic concepts of the molecular concept [human being] would need to be “translated” into dog-concepts.

Joe “picks out a kind” [Loar, 1990] - the kind human beings call human beings. More correctly, Joe picks out a particular (say, Jim) and sees that he belongs to the kind we call (though the dog doesn’t) human beings. At no stage of the game am I saying that Joe’s x is F is anything like our x is F. It's structural; though not linguistic or sentential. It may even have a similar structure in some broad sense.

In any case, how close must Joe’s x is F be to our x is F? More relevantly, why do we demand an exact parallel with our x is F in order to allow attributions of concepts and beliefs to dogs and other animals?

Many 18th-century Europeans didn’t think that central-African tribesmen had rationality (or even concepts, etc.) simply because they didn’t speak English or French and were a different colour. (Think here of the earlier Descartes quote.) Of course 18th-century central-African tribesmen were far more intellectually advanced than any animal; though my point still holds. The Europeans demanded exactitude from the tribesmen in order to ascribe rationality or even thought to them. Thus it’s no wonder that some contemporary philosophers are looking for exact equivalences between themselves and other animals in order to allow animals the privilege of having or using concepts and beliefs. It is (dare I say) a kind of chauvinism similar in spirit to that which the functionalists (of mind) detected in the 1960s when they argued that minds (qua minds) needn't share our (human) neurobiology.

Finally, I hope the reader doesn't get the wrong impression from all the above. I don’t think that animals are as clever as human beings... not even dolphins or apes. And I don’t want to say (here) whether or not these conclusions will have any impact on other issues related to animals, let alone on the issue of animal rights.

My claim is very simple: some animals do have concepts and beliefs.

References

Dretske, Fred. (1993) 'Conscious Experience'.
Loar, Brian. (1990) 'Phenomenal States'.