Friday, 16 October 2015
My Dog Believes that x is F
According to Fred Dretske, concepts can be very rudimentary indeed. He writes:
“[One] can…see armadillos without seeing that they are armadillos, but
perhaps one must…see that they are (say) animals of some sort…in
seeing an object [one must] see that it is an object of some sort. To
be aware of a thing is at least to be aware that it is…how shall we
say it?…a thing. Something or other.” [Dretske, 1993]
In the above, Dretske is getting down to bottom-line quasi-Kantian
concepts. Dretske appears to argue that animals don’t even have such
basic concepts. My position is that if we insist on seeing concepts as
exclusively sentential or linguistic (in any vague sense), then we
will, by definition, exclude animals from concept-use. I think that
such a position is both arbitrary and unnecessary.
Here is Dretske making a distinction between awareness of facts and
awareness of things:
“[Awareness] of facts implies a deployment of concepts. If S is aware that
x is F, then S has the concept F and uses (applies) it
[to x]… awareness of things (x) requires no
fact-awareness (that x is F, for any F) of those
things…there is no reasonably specific property F which is
such that an awareness of a thing which is F requires
fact-awareness that it is F.”
Perhaps all that the above amounts to is that an animal (or animals)
hasn’t got our own concept of x (e.g., [human]). However, it may
have its own. The concept, say, [human] may be dependent on sentential
constructions; whereas an animal’s concept [C] may be based on (or
partly made-up of) mental images [see Brian Loar, 1990].
How can an animal be aware of a thing (x) without having concepts
which help it differentiate or individuate that thing? How can it be aware of x
without the application of concepts to x?
Doesn’t individuation and the application of concepts also entail (or at least imply) Dretske’s “fact-awareness”?
Of course “that P” isn't, strictly speaking, applicable to animals if
it's seen exclusively as a linguistic formulation (or a strict
logical schema). Though need concepts and factual awareness be
linguistic (or sentential) in form? Indeed couldn’t there be a
non-linguistic version of - or alternative to - that P? Can we really
distinguish “awareness of things” from “awareness of facts”?
Perhaps things are, in a sense, bundled facts; or at least they are
determined or delineated factually or conceptually.
My parents’ dog is called Joe. He's aware of a thing, Paul Murphy, by
being aware (non-linguistically) that P. (i.e., that x,
in front of him, is Paul Murphy.) Of course he doesn’t know me as “Paul Murphy”.
Dretske almost seems to be saying that unless one knows that P (or a
non-formal, though linguistic, version of it), then one has no
“fact-awareness”. This is reminiscent of Descartes’ exclusion
of animals from rationality and even thought because they didn’t speak
French; or, more fairly, because they didn’t speak any human
language. This is Descartes himself on that subject:
“It has never yet been observed that any animal has arrived at such a
degree of perfection as to make use of a true language; that is to
say, as to be able to indicate to us by the voice…anything which
could be referred to thought alone, rather than to a movement of mere
nature; for the word is the sole sign and the only certain mark of
the presence of thought…” - Letter to Henry More (1649)
Dretske himself claims as much when he says that the cat
“can smell, and thus be aware of, burning toast as well as the cook,
but only the cook will be aware that the toast is burning”.
Surely that last clause is indeed correct. Who would doubt it? The cat won't
sub-vocally think to itself “the toast is burning”. Though it may
have its own concept [B] for burning, its own concept [T] for
toast, and even its own molecular concept [BT] for burning toast.
The implication also is that the cat is aware that the toast is burning;
though not “fact-aware” that the toast is burning. Of course the
cat isn’t aware that the toast is burning in the sense of thinking
“the toast is burning” in English, French or in any language
(even its own). However, it might be fact-aware of something.
This can be explained in the following way.
The cat isn’t aware that “the toast is burning”; though it is aware
that the toast is burning. We can say that it's
aware of the truth-condition (i.e., burning toast); though not that truth-condition's expression in a natural
language (i.e., "the toast is burning").
Despite all the above, Dretske himself says (at the beginning of his paper)
that he doesn’t “know about animals”. He may not know about
animals in the plural; though he seems to know about the cat in his own example.
If mental-image-based concepts can “pick out a kind” (Loar, 1990),
then perhaps some animals can not only pick out an x. They
may also believe that x is F (or that P).
In a sense, an animal needs to think that x is F in order to
distinguish x from, say, y. Of course the belief that x
is F isn't sentential, linguistic or even formal or
quasi-linguistic. Why should it be? Why is it necessary, in order to believe
that x is F, that the belief be a (as it were) linguistic by-product?
I am an x to Joe. I'm an individuated object to him. That much,
surely, must be uncontroversial. How does Joe individuate and
differentiate me from, say, y (say, another person or even an
inanimate table)? He must do so by believing that x is F (or
that x, not y, is F). That is, he believes that the
object in front of him is Paul Murphy. However, Joe is also applying a
concept [P] to a reoccurrence of x. The
concept [P] is being applied to x. Therefore, to Joe,
x is F.
The concept [P], or [Paul Murphy], may be an agglomeration of
atomic concepts: [a certain smell], [certain clothes], [a certain
voice], etc., as well as - perhaps more importantly - a concept-image
(i.e., a mental image used as the basis for a concept).
That x is F (for Joe the dog) means that he applies (or has already
applied) the image-concept [I], plus various atomic concepts (including smells, dress, etc.) to x. x,
in this case, is a re-occurrence of the object my family - though not
the dog - calls 'Paul Murphy'. Therefore the variable x isn't
identical to the dog-concept of x. The concept [P] is
applied - or belongs - to object x. For Joe, therefore, x
is (indeed) F.
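The agglomeration picture above can be given a toy rendering. In the sketch below (entirely my own illustration, not Dretske's or Loar's model; the feature names and the matching threshold are arbitrary assumptions), a molecular concept like [P] is treated as a bundle of non-linguistic atomic concepts which "applies" to a percept when enough of its components are matched:

```python
# A toy model (my own assumption, purely illustrative): Joe's molecular
# concept [P] as a bundle of non-linguistic atomic concepts, applied to
# a percept by feature-matching rather than by any sentence.

ATOMIC_P = {"smell:paul", "voice:paul", "gait:paul"}  # atomic concepts composing [P]

def concept_applies(concept: set, percept: set, threshold: int = 2) -> bool:
    """The concept applies to x when enough of its atomic
    components are matched in the percept."""
    return len(concept & percept) >= threshold

# A recurrence of x (the object the family - though not the dog - calls 'Paul Murphy'):
percept_x = {"smell:paul", "voice:paul", "two-legged"}
print(concept_applies(ATOMIC_P, percept_x))  # → True: for Joe, x is F
```

Nothing in the sketch depends on words: the "application" of [P] is just sufficient overlap between stored and incoming features, which is one way of cashing out a non-sentential belief that x is F.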
Even language-users may use (or have) such non-linguistic (or
non-sentential) concepts. Either that, or non-linguistic mental phenomena may be the basis of linguistic concepts.
The belief that x is F is conceptual.
It needn't be based - or reliant - upon other sentential
formulations such as “You know Tony sometimes lies?” or “That
guy is a dissembler”. However, the concept (or belief) needn't be
purely abstract either. There’s nothing more real (to me) than my
mental images of Tony and also of Tony lying. However,
lying itself, admittedly, is necessarily linguistic. Thus Joe (or any
animal) can't think that Tony (or anyone or anything else for
that matter) is a liar. Lying comes with language use (though not
deception – animals practice deception).
I’m not saying anything in the above that's very surprising. There
are some human concepts that animals not only don’t have, but
could never have. And not just beliefs/concepts such as “4 + 4
= 8”: far more mundane and basic ones too.
Think of a semi-complex belief/concept such as [forests are full of
trees]. The dog-translation of the verbal locution “forests are full of trees”
wouldn’t of course be an exact translation in the way “I love
you” can be precisely translated into French. So what? Animals
aren’t fellow human beings like the French. A dog’s version of
“forests are full of trees” may come close (if only in a few
respects) to our concept or belief. In any case, why would we require
(or need) an exact equivalent (or translation) anyway? I don’t need
one to argue my case. The point is that the dog’s version (or
alternative) would still be conceptual. It would also be (by
inference) a belief. It could even be (in a sense) an example (or
version) of a predicate attached to a noun. Of course we'd need
non-linguistic interpretations or equivalents of predicates and
nouns. That’s not a problem: we have them in philosophy. The
predicate “are full of trees” could become some kind of property
or attribute of the subject-term “forests”. The noun
“forest” could become some kind of non-linguistic subject (or
object). Indeed Frege (for one) believed that predicates are
concepts; just as others think that concepts are universals.
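The Fregean point can be illustrated schematically. In the toy sketch below (my own illustration, not the author's or Frege's formalism; the numeric threshold is an arbitrary assumption), the predicate "is full of trees" is rendered as a function from objects to truth-values - a concept under which a non-linguistic subject may or may not fall:

```python
# A hedged illustration of a Frege-style concept: the predicate treated
# as a function from objects to truth-values, with no sentence required.

def full_of_trees(region: dict) -> bool:
    # The threshold of 100 is an arbitrary assumption for illustration.
    return region.get("trees", 0) > 100

forest = {"trees": 5000}
meadow = {"trees": 3}
print(full_of_trees(forest))  # → True: this object falls under the concept
print(full_of_trees(meadow))  # → False: this one doesn't
```

The design point is only that predication can be modelled as property-possession checked against an object, which is the non-linguistic "equivalent of predicates and nouns" the paragraph above asks for.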
Now we’ve reached bedrock.
Something (or some things) must come before our linguistic expressions: both as a species and as individuals.
Our linguistic expressions didn’t occur ex nihilo.
Here is Paul Churchland making related points:
“[L]anguage use is something that is learned, by a brain already capable of
vigorous cognitive activity; language use is acquired as only one
among a great variety of learned manipulative skills; and it is
mastered by a brain that evolution has shaped for a great many
functions, language using being only the very latest and perhaps the
least of them. Against the background of these facts, language use
appears as an extremely peripheral activity, as a species-specific
mode of social interaction which is mastered thanks to the
versatility and power of a more basic mode of activity. Why accept,
then, a theory of cognitive activity that models its elements on the
elements of human language?”
I believe that certain animals have beliefs and concepts even though I
also believe that language and language-based concepts utterly shape
human thought and experience.
That x is F could be even more abstract. It could be that x (Paul
Murphy) is an F (a human being). Need I stress again that F
(i.e., the dog-concept [H]) needn't be our concept [human being]?
Though Joe may have some concept (or concepts) of a human being or of
what we call human beings. He may notice that we only have two
legs; that we don’t sniff each other’s backsides; and that we
smell a certain way (i.e., unlike him). Even these atomic concepts of
the molecular concept [human being] would need to be “translated”
into canine equivalents. Nonetheless, Joe still “picks out a kind” [Loar, 1990]
- the kind human beings call human beings. More correctly, Joe
picks out a particular (say, Jim) and sees that he belongs to the
kind we call (though it doesn’t) human beings. At no stage of the
game am I saying that Joe’s x is F is anything like our x
is F. It's structural; though not linguistic or sentential. It
may even have a similar structure in some broad sense.
any case, how close must Joe’s x is F be to our x is F?
More relevantly, why do we demand an exact parallel with our x is
F in order to allow attributions of concepts and beliefs to dogs and other animals?
Many 18th-century Europeans didn’t think that central-African tribesmen
had rationality (or even concepts, etc.) simply because they didn’t
speak English or French and were a different colour. (Think here of
the earlier Descartes quote.) Of course 18th-century central-African
tribesmen were far more intellectually advanced than any animal;
though my point still holds. The Europeans demanded exactitude from
the tribesmen in order to ascribe rationality or even thought to them. Thus it’s no
wonder that some contemporary philosophers are looking for exact
equivalences between themselves and other animals in order to allow
animals the privilege of having or using concepts and beliefs. It is (dare I
say) a kind of chauvinism similar in spirit to that which the
functionalists (of mind) detected in the 1960s when they argued that minds (qua minds) needn't share our physical make-up.
I hope the reader doesn't get the wrong impression from all the
above. I don’t think that animals are as clever as
human beings... not even dolphins or apes. And I don’t want to say
(here) whether or not these conclusions will have any impact on other issues related to animals, let alone on the issue of animal rights.
My claim is very simple: some animals do have concepts and beliefs.