The American philosopher Daniel Dennett, who died in 2024, embraced all sorts of different fakes. He embraced fake wine, fake Cézanne paintings, fake intelligence, fake human beings and fake consciousness. Dennett did so because he judged such fakes behaviourally and in terms of our responses to them. So just as the Turing test judges a machine’s intelligence by its ability to answer certain questions in a certain amount of time, the test of a fake wine or fake Cézanne is its ability to “fool experts”. Importantly, it simply doesn’t matter what all these fakes are made out of: what matters is what they can do.

[Opening note: It’s worth pointing out that Daniel Dennett spent a lot of time considering conscious robots (i.e., rather than the broader notion of artificial consciousness) in his ‘Cog: Steps Towards Consciousness in Robots’. In fact, over half of that paper is spent on the subject. Dennett believed that artificial consciousness will be a stronger possibility if we tie it to three-dimensional robots: that is, to what robots can do, to their artificial limbs and organs, and to their histories. However, all that is ignored in my own essay.]
Protecting the Human Mind From Science and AI

Daniel Dennett set the historical scene by telling his readers that
“over the centuries, every *other* phenomenon of initially ‘supernatural’ mysteriousness has succumbed to an uncontroversial explanation within the commodious folds of physical science”.
Dennett believed that the same holds true for consciousness. (Alternatively put, perhaps it will, one day, hold true for consciousness.) After all, “Why should consciousness be any exception?” Moreover:
“Why should the brain be the only complex physical object in the universe to have an interface with another realm of being?”
Dennett then got all psychological — and why not? — when he stated that he
“suspect[s] that dualism would never be seriously considered if there weren’t such a strong undercurrent of desire to protect the mind from science, by supposing it composed of a stuff that is in principle uninvestigable by the methods of the physical science”.
Yes, Dennett argued that this view of consciousness has a lot to do with Cartesian dualism, which, in turn, had a lot to do with the Christian notion of the human soul. Yet as soon as we divorce consciousness from both dualism and the soul, Dennett’s question — “Why should consciousness be any exception?” — becomes all the more powerful.
On a similarly personal, up-to-date and everyday level, this is something I came across in various discussions about chatbots and AI voices. I too detected a strong undercurrent of desire to protect the human mind and person not from science, but from AI. (The parallel isn’t exact here, but it’s still in the same ballpark.)
Let’s start with chatbots.
If a powerful AI can (to use a popular word from YouTube) “destroy” any essay, article or paper (or all essays, articles and papers) written by a human person, then where would that leave human authority and creativity? Say that a Large Language Model tackles a single paper, or even 100 papers in a row. It will do so by tackling literally all of the arguments, concepts, data, etc. contained within them. How should philosophers react to this? How does their work retain its worth and meaning under such an AI barrage?
More broadly, an LLM’s epic critiques could even erode philosophical foundations and leave only mathematics and logic on safe ground.
Relevantly, the idea is that the sanctity of human intelligence has to be protected from LLMs.
Now for AI voices.
A while ago, I decided to transfer my Medium essays to YouTube. I used decent AI voices to narrate them. (I did this for various reasons.) However, on YouTube, almost all the comments were negative, and nearly all focused exclusively on my use of AI voices. Not a single critic commented on the quality of the AI voices. Instead, the critics all had a problem with my use of AI itself. Indeed, even if the voices were perfect, I suspect that they’d still have had a problem with me using AI voices.
In these cases, the idea is that the sanctity of the human voice has to be protected from AI fake voices.
Now let’s discuss Daniel Dennett on different types of fake.
Dennett on Fake Wine
In the following passage, Dennett, in a loose sense, made the Turing test more mundane and concrete. He wrote:
“Perfect imitation Chateau Plonque is exactly as good a wine as the real thing, counterfeit though it is [ ] if it is really indistinguishable by experts.”
It can be said here that Chateau Plonque being “exactly as good a wine as the real thing” isn’t the same as saying it’s identical to the real thing. It’s not identical in terms of its chemical and physical makeup. However, in terms of the behavioural responses of wine experts, it is identical. These experts can’t tell it apart from what it imitates. In other words, wine experts behave and respond in the same way as they would if they drank the genuine article.
What about verification (as in Dennett’s verificationism)?
Again, without a chemical analysis, there’s no way of knowing that Chateau Plonque isn’t the real thing. It looks the same. And the wine experts act the same in response to its taste. (According to Dennett, if there’s nothing to enable us to find out about different qualia, then there aren’t different qualia.)
As hinted at a moment ago: what of the physical makeup of this fake wine? In the passage above, Dennett ignored it. In other words, to him, it didn’t matter that the two wines differed in terms of their makeup.
A Fake Cézanne and a Fake Film
Dennett said something similar about a “fake Cézanne”. In this case, instead of the responses of wine experts, we have the responses of art critics. The physical constitutions of the fake and the genuine article weren’t even discussed by Dennett. To him, they were irrelevant.
Now for Dennett’s other example: a Disney cartoon. According to Dennett,
“Walt Disney might once have proclaimed that his studio of animators could create a film so realistic that no one would be able to tell that it was a cartoon, not a live action film.”
Yet again, Dennett stressed behavioural responses in that “no one would be able to tell that it was a cartoon” (i.e., just as no one could tell it was a fake wine or a fake Cézanne). Similarly, there’s no reason why such a realistic film can’t be made in the future. That said, Dennett did acknowledge that “what Disney couldn’t do in fact, animators still cannot do”… Yes, you can hear the “but” coming: “but perhaps only for the time being”.
Yet even in the future it may still be the case that “[p]erhaps no cartoon could be a great film, but they are certainly real films — and some are indeed good films”. (Here Dennett would have needed to say why a cartoon film couldn’t be a great film. Perhaps the reasons for that aren’t really relevant in this precise context.)
Fake Human Beings
Dennett then got more personal with his final example. He continued:
“[A]n atom-for-atom duplicate of a human being, an artificial counterfeit of you, let us say, might not *legally* be you, and hence might not be entitled to your belongings, or deserve your punishments, but the suggestion that such a being would not be a feeling, conscious, alive *person* as genuine as any born of woman is preposterous nonsense [ ].”
Why should an “atom-for-atom duplicate of a human being” not be that very human being? Is it because there are now two such human beings? Why should that matter? Alternatively, is it because there was “something, I know not what”, which was left out during the duplication? In other words, even though everything physical and structural was duplicated, something was still left out…
What was left out? The soul? The immaterial mind?
Dennett’s bottom line is that this duplicate simply must “be a feeling, conscious, alive person as genuine as any born of woman”. Thus, in order to question this conclusion, one would need to embrace the soul or the immaterial mind. (At least Dennett himself hinted at this.) And that, of course, would bring about more philosophical problems than those brought about by accepting that this duplicate is not only a human being, but the same human being!
Practicalities
Dennett on the Possibility of Fake Brains
Most of the problems with artificial wines, artificial Cézanne paintings, and artificial intelligence are practical and technological in nature. In terms of the brain, Dennett told his readers that “[t]here is no reason at all to believe that some one part of the brain is utterly irreplaceable by prosthesis”. Yet Dennett did “allow that some crudity, some loss of function, is to be expected in most substitutions of the simple for the complex”… And if one part of the brain is substitutable, then why not all parts of the brain? Why not the entire brain in one go?
Dennett took this positive view largely because other artificial parts of the body are already on the market, and have been used many times. Take artificial heart valves. They
“work really very well, but they are orders of magnitude simpler than organic heart valves born of woman or sow”. Then we have artificial ears and eyes “that will do a serviceable (if crude) job of substituting for lost perceptual organs”.
Of course, heart valves, ears and eyes are much simpler than brains. Yet Dennett had already stressed crudity and some loss of function. This raises the question as to what needs to be kept (i.e., what structures, and what functions) in order for an eye or ear to still be an eye or ear, or for a brain to still be a brain. (We can look to the animal kingdom for clues about that.)
In terms of the brain and consciousness, we may get some “crude, cheesy, second-rate, artificial consciousness”. Yet, Dennett added, we “still win”. Again, Dennett didn’t run away with himself. He admitted that “it is not a foregone conclusion that even this modest goal is reachable”. Still, Dennett raised a fundamental philosophical question for the sceptics:
“[Do you have] a defensible reason for claiming that no conscious robot will ever be created [?]”
In the context of Dennett’s own words and arguments (as well as the words and arguments of many other people), it does seem incredible that anyone would believe that no conscious robot will ever be created.
Daniel Dennett and John Searle
Dennett similarly reined himself in when he stated that
“it could turn out after all that organic materials were needed to make a conscious robot”.
This is a similar position to that which the American philosopher John Searle advanced (i.e., despite the general view of his position on this subject).
Searle’s position on artificial consciousness was primarily based on what he called the “causal powers” of the biological (human) brain.
This is Searle himself on this subject:
“Some people suppose that I am claiming that it is in principle impossible for silicon chips to duplicate the causal powers of the brain. That is not my argument. [ ] It is a factual question, not to be settled on purely philosophical or a priori grounds, whether or not the causal powers of neurons can be duplicated in some other material, such as silicon chips, vacuum tubes, transistors, beer cans, or some quite unknown chemical substances.”
At the end of the quote above, Searle moved on from talking about causal powers to hinting at his Chinese room argument(s). In Searle’s own words:
“The point of my argument is that you cannot duplicate the causal powers of the brain solely in virtue of instantiating a computer program, because the computer program has to be defined purely formally.”
[These passages can be found in Searle’s paper ‘Minds and Brains Without Programs’ — not to be confused with his well-known ‘Minds, Brains and Programs’.]
The problem here is to determine what exactly Searle meant by the words “causal powers”. Readers will also need to know about the precise relation between such causal powers and consciousness.
One hint seems to be that the brain’s causal powers are over and above what is computable and/or programmable. Alternatively, perhaps it’s just an argument that, given present-day technology, these complex causal powers aren’t replicable.
In any case, Searle never argued that artificial consciousness is outrightly impossible. Instead, he stressed the obvious fact that, so far, consciousness has been tied to organic materials. Searle did go further than Dennett in stressing the importance of causal powers and, correspondingly, in suggesting that the relevant causal powers may occur only in brains.