Tuesday 5 January 2021

Do Animals Have Concepts?



There’s been a long-running and controversial debate as to whether or not animals are conscious (or have experiences). I believe that it’s most certainly the case that animals are conscious — or at least the “higher animals” are. Of course that claim can’t be proved. Then again, I can’t prove that my friends are conscious either. So there are no proofs — or even conclusive demonstrations — when it comes to whether any given biological animal — including a human being — is conscious or not.

There’s also the debate as to whether or not animals have beliefs, are intelligent, are capable of thought, etc. This often — mainly — depends on definitions. That is, if the word “belief” (or “thought”) is defined in one way, then animals will be deemed to have beliefs (or to think). If, on the other hand, such a word is defined in another way, then animals won’t be deemed to have beliefs (or to think).

This piece is more basic than all that. It asks this question:
Do animals deploy — or have — concepts?
But what, exactly, are concepts?

What are Concepts?

There are a few philosophical accounts of concepts. That said, in what follows it doesn’t matter (too much) which account is accepted.


It seems that all — or at least most — accounts of concepts involve recognising a level of abstraction. That is, concepts are seen as general notions or abstract ideas which occur in thought, speech or in the mind.

In more detail, concepts are often seen — by philosophers at least — to be mental representations which exist in the mind and brain — either in (as it were) storage (i.e., in the brain) or actually deployed in mental activity (or cognition). In other words, a being’s concepts (like beliefs) needn’t be “used” or deployed at all — they can simply be stored in the brain. However, they do need to be deployable in principle.

In philosophy, concepts have also been seen in purely abstract terms. That’s in the sense that concepts are seen to have no direct relation to mentality or to biological brains — except for the fact that brains (or minds) can gain access to them. (In this sense, concepts are like numbers in mathematical Platonism.) One very specific — but important — take on this is to see concepts as Fregean senses.

Concepts can also be seen as abilities (or skills) in that they’re measured — or defined — in terms of the behaviour of the animals which are believed to have — or deploy — them. This is clearly related to a behaviourist account of concepts in that, like intelligence, concepts must be manifested in abilities or in behaviour generally. This means that concepts aren’t something hidden in the brain (or in the mind). In other words, concepts aren’t either ghosts in the machine or abstract objects (as with Fregean senses). If this particular position had been entirely adhered to, then the title of this piece should have been: ‘Do Animals Deploy Concepts?’.

More relevantly to what follows, concepts are said to be the fundamental constituents of beliefs, thoughts, languages, etc. It’s this aspect of concepts which problematises the issue when it comes to animals. That’s because concept-use seems to involve a high level of cognitive activity and mental development.

Concepts and Language

The British philosopher Michael Tye argues that

“[h]aving the concept F requires, on some accounts, having the ability to use the linguistic term ‘F’ correctly”.
That depends on what precisely the concept F is taken to be. If the concept is [infinity], then, yes, we would need “to use the linguistic term”. However, this isn’t the case if, say, a dog’s concept is [cat], and [cat] isn’t — at all — tied to the word “cat”.

Of course in the above I’m using the word “cat” within square brackets simply because it’s shorthand for whatever constitutes a dog’s concept [cat]. So perhaps I should symbolise it [C] instead. The problem with this is that readers won’t know what object the dog’s concept is about. Therefore I use the English word within the brackets.

In any case, if Tye had simply said
Having concepts requires the ability to use linguistic terms correctly.
then he would have been, I believe, wrong.

Following on from the first quote above, Tye says:
“On other accounts, concept possession requires the ability to represent in thought and belief that something falls under the concept.”
There’s no problem with the above just as long as the words “thought” and “belief”, as well as the phrase “falls under a concept”, aren’t taken sententially (or linguistically). And there’s no obvious — or immediate — reason why they should be. A dog must surely “think” that the rattling dog-chain “means” that it will be going for a walk. It will “believe” that a walk will be forthcoming. But, of course, none of this is expressed — or thought about — linguistically by the dog. And, precisely because of that fact, some (or even many) philosophers believe that it’s wrong to use words like “think”, “belief” and “means” when discussing the mental reality of dogs and other animals.

That said, the dog’s chain must surely “fall under the concept” [dog chain] for the dog. Or, again, instead of using the English words “dog chain” (which will be unknown to the dog), the chain will fall under the dog’s concept [C]; where C (again) simply symbolises whatever constitutes the dog’s concept.

On the words “think”, “means”, “belief”, etc. again.

Some — or even many — philosophers are sceptical about animal thought and belief. Take this unelaborated description of a monkey’s behaviour:
“[W]hile we may be prepared to say that it knows [that it’s safe up a tree], we may be less happy to say that the monkey thinks that it is safe.”
Prima facie, how can the monkey know without thinking?

This writer also includes reasoning, believing, reflecting, calculating and deliberating as examples of thought. I believe that monkeys do all these things. Admittedly, at first I hesitated over the word “calculating”. However, that was only because I over-intellectualised the term by thinking in terms of abstract mathematical calculations. (Since the main topic here is concepts, I can’t go into these broader areas of animal and human thought.)

An interesting question remains, however. Is it an animal’s lack of concepts that excludes it from all these cognitive states? Or is it that animals have no concepts because they can’t think?

This is where the notion of non-conceptual content is helpful. That is, animals may have non-conceptual mental states which still, nonetheless, “contain” what philosophers call (mental) content.

Non-Conceptual Content


In the context of animals, what’s motivating the idea of non-conceptual content is that animals may have “experiences” without necessarily deploying any concepts. Or as the British philosopher Martin Davies writes:
“[T]he experiences of […] certain creatures, who, arguably are not deployers of concepts at all.”
Davies continues:
“[A] creature that does not attain the full glory of conceptualised mentation, yet which enjoys conscious experience with non-conceptual content.”
All the above depends on which animal we’re talking about and what Davies means by the word “concept”. Indeed it may be difficult to fuse experience and non-conceptual content together in the first place (at least on a Kantian reading). If Davies is talking about floor lice, then what he says may well be correct. However, if he’s talking about dogs, monkeys, etc., then I’m not so certain. (The intermediary animal cases are, as ever, vague.)

Is the simple and obvious fact that animals don’t have a language (or a human language) causing this bias against animals having — or using — concepts? Davies doesn’t say. If one takes Jerry Fodor’s “language-infested” view of “mentation”, then one would probably agree with Davies. On the other hand, if one is a non-Fodorean, one may question the linguistic bias of Davies’s position. (As the Canadian philosopher Paul Churchland does — see ‘Paul Churchland on Non-Propositional Animal Thought?’.)

An even more explicit example of this linguistic bias can be seen in the work of British philosopher Christopher Peacocke. He writes:
“The representational content of a perceptual experience has to be given by a proposition, or set of propositions, which specifies the way the experience represents the world to be.”
Peacocke (in the above) is distinguishing representational content (which is propositionally specifiable) from pure, unconceptualised sensations. And again, later, Peacocke displays his linguistic (or sentential) bias when he writes the following:
“The content of an experience is to be distinguished from the content of a judgement caused by the experience.”
Not only do we have a (sorta) dualism of “sensations” (the “contents of experience”) and “judgement” (which is “caused by the experience”), we also have the specifically intellectualist — and probably linguistic — position on “judgement”. What Peacocke means by the word “judgement” is the application of a “proposition or set of propositions” to a “perceptual experience”.

So if “representational content” is “given by a proposition”, then the implication is that the same is true of concepts. Indeed Peacocke states his own position explicitly here:
“[W]e need a threefold distinction [of experience] between sensation, perception, and judgement.”
Peacocke (in one of his notes) quotes another philosopher to back up his case:
“… ‘[S]ensation, taken by itself, implies neither the conception nor the belief of any external object… Perception implies an immediate conviction and belief of something external’…”

An Evolutionary Argument


If animals (or certain types of animals) are non-conceptual creatures, then we human beings too (from an evolutionary perspective) might well have started off (or indeed did start off) as non-conceptual creatures. That is, pure phenomenal consciousness is common to both humans and animals. However, for later humans, phenomenal consciousness (or sensations) began to be conceptualised. (See Churchland again.)

There is a dualism (yes, another one) here between phenomenal consciousness and conceptual consciousness. If we’re being good evolutionary theorists in accepting that we share phenomenal consciousness with animals, then why can’t we be equally good evolutionary theorists by accepting that — some — animals have concepts too?

Why should concepts be sentence-shaped? (The English philosopher P.F. Strawson once complained of facts — or bits of the world — being “sentence-shaped objects”.) Take the following two (quoted) positions on perceptual content:
(1) “perceptual content is the same kind of content as the content of judgement and belief”
and, alternatively,
(2) “perceptual content is a distinct kind of content, different from belief content”.
Passage (1) is very Davidsonian in that judgements/beliefs and perceptual content are seen as one and the same. Passage (2), on the other hand, gives us “uninterpreted” mental content, separate (we may say) from “all schemes and science”. Of course we shouldn’t really use the word “science” or even “schemes”. Instead, we can simply say: separate from all concepts.

Ned Block’s Phenomenal Consciousness


The American philosopher Ned Block also makes his own distinction between “representation” and “intentional representation” (note 4, 1995). He argues that an animal has an experience that is “representational”. However, that experience is not an “intentional representation”. This is how Block makes his distinction:
(1) intentional representation = “representation under concepts”
(2) representation = “representation without any concepts”
Block’s mistake here seems obvious. It can be found in his own (quoted) words in the following:
The animal in question “doesn’t possess the concept of a donut or a torus”.
We can accept that. However, the animal may “represent space as being filled in a donut-like way”. Again, that’s acceptable. So, yes, this animal won’t have our concept [donut] or our concept [torus]. (It certainly won’t have our word “donut”.) However, it may have its own concept [C] of the donut and likewise its own concept [C] of the torus. That is why Block allows the animal its own representations. (The animal “represents space as being filled in a donut-like way without any concepts”.) However, the animal’s experience has “representational content without intentional content”. Apparently, the animal has representations because its experience is of something “donut-like”. However, it isn’t intentional — or conceptual — simply because it doesn’t have our concept [donut]. Yet surely it may have its own concept of the thing we call a “donut”.

Block’s position seems wrong. It displays both a linguistic bias and a linguistic basis for all concepts. It therefore excludes (by definition!) all animals from having conceptual content for the experience in question. Yet, logically, this stance would mean that a fellow human being without the concept [donut] (or the word “donut”) would only have a representation of the donut, not an intentional representation of it.

To give more (Kantian) detail:

Even someone who does have the concept [donut] must have previously experienced a donut under other concepts. (That is, before he applied the concept [donut] to donuts.) And not just the basic Kantian concept [object] or [thing]. (These are atomic concepts which are the building blocks of later concepts.) In other words, before the object we call a “donut” fell under the concept [donut], and after it fell under the concept [object] or [thing], other concepts would have been applied to — or would have “belonged to” — the donut. For example, perhaps the concepts [white thing], [round thing], [small round thing], etc. Even an animal without the concept [white thing] etc. would still have its own non-linguistic (or non-human) alternatives to our concepts of a donut.

There’s also a problem with Block’s use of the term “representation”. Can’t a being only represent something as something? That is, the concept [C] is a representation of something. Therefore one needs a concept (not necessarily linguistic) of that something; as well as a concept of that thing as a something.

The problem here may be accounted for by what Block himself says (again in note 4). He states that “phenomenal-consciousness isn’t an intentional property”. I agree. He also says that “P-conscious content cannot be reduced to or identified with intentional content”. Again, I agree. Block also qualifies these distinctions by saying that “intentional differences can make a P-conscious difference”. He also says that “P-consciousness is often representational”. However, Block is still hinting at something which is problematic when it comes to animals: that PC (phenomenal consciousness) can always exist without intentional or representational (that is, conceptual) content. The distinctions he makes are possibly real and worthwhile. However, PC is like a finger which can’t exist without a hand. And the hand, in this case, is conceptual content (or concepts). Of course a finger is distinct from a hand; though, as yet, I haven’t seen a functioning finger without a hand.
