Is there intelligence without consciousness and consciousness without intelligence?
Not all the critics of Strong Artificial Intelligence need to take an absolute position against it. Some critics may simply have a problem with the particular arguments of a particular philosopher or AI theorist. In other words, such people can even (at least in theory) accept Strong AI and simply reject someone else’s arguments for it.
(Bear in mind that the term “Strong AI” is interpreted in at least four very different ways — see here.)
Thus not all critics of Strong AI need believe that non-biological systems “will never be genuinely intelligent” or even that they’ll never have consciousness or minds.
So someone can still adopt a critical position on Strong AI which has little or nothing to do with any biological-artificial dichotomy.
Let’s put it this way. There are many natural/biological entities which don’t display intelligence (though this will almost entirely depend on definitions) and which don’t have (or display) any form of consciousness. On the other hand, there are many non-biological (or artificial) things which do display (or instantiate) intelligence. This means that there’s no necessary (or absolute) link between the natural/biological and intelligence or between the artificial and a lack of intelligence…
And the same may even be true of mind, consciousness and/or experience.
All the above means that one needn’t adopt a “biocentric” position. That is, one needn’t have a problem with intelligent non-biological systems at all. That said, it must now be pointed out that there’s often an unwarranted leap made from artificial intelligence to artificial minds, and certainly to artificial consciousness. Yet the questioning of these leaps needn’t be directly connected to any bias toward “carbon-based” or biological systems.
In addition, if a system displays intelligence or “acts intelligently”, then one can also argue — and many have done so — that, by definition, it must also actually be intelligent.
And all that brings us to intelligence itself — as dealt with in a more abstract manner.
Intelligence
Many people conflate intelligence with consciousness, mind and/or experience. This means (again) that one can be critical of some claims of Strong AI and not have any problem at all with admitting that computers, robots, etc. are intelligent. The problem is when consciousness, mind and experience are added into the pot. Alternatively, the problem may be when theorists (such as Roger Penrose) deem intelligence to actually require consciousness, mind and/or experience.
So it can be easily and strongly argued that computers (or non-biological complex systems) are already intelligent.
Yet when it comes to intelligence (unlike experience, understanding, consciousness, etc.), perhaps there can be no appearance-reality distinction. That is, if a complex system displays intelligence (or acts intelligently), then it must be intelligent. However, the same may not also be true of consciousness, experience, understanding and even the mind itself. (Bear in mind here that the word “mind” has as many definitions as the word “consciousness” — yet that point is usually only made about the latter, not the former.)
For example, a complex system (such as a computer or zombie) may act as if it is conscious; though it may not actually be conscious. Indeed the Australian philosopher David Chalmers has spent his entire career making this point. (See Chalmers on zombies here.)
So, as it is, one doesn’t need to be that sceptical about the intelligence of complex non-biological systems. That said, one can (still) be sceptical about computer minds and computer consciousness.
Let’s move back to intelligence.
If a computer beats human beings at chess, then it is intelligent. Full stop… Surely that’s the case? And that means that someone can adopt a behaviourist position on intelligence; though perhaps not also on mind or consciousness.
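To make that behaviourist point concrete, here is a minimal sketch in Python. (It is purely illustrative and is not how any real chess engine works; the game, names and numbers are all my own.) The programme plays the toy game of Nim perfectly by mechanical search alone. Nothing in it is conscious; yet, behaviourally speaking, it plays as well as any possible player could.

```python
# A purely illustrative sketch: "intelligent" play arising from a
# mechanical search procedure (ordinary minimax). The game is Nim:
# players alternately remove 1 to 3 stones; whoever takes the last
# stone wins. Nothing here is conscious, yet the programme reliably
# finds a winning strategy whenever one exists.

def best_move(stones: int, to_move: int) -> tuple[int, int]:
    """Return (move, winner): the stones to take now, and the player
    (0 or 1) who wins under perfect play from this position."""
    if stones == 0:
        # The player to move faces an empty pile: the other player
        # took the last stone and has already won.
        return 0, 1 - to_move
    for move in range(1, min(3, stones) + 1):
        _, winner = best_move(stones - move, 1 - to_move)
        if winner == to_move:        # a winning reply exists
            return move, to_move
    return 1, 1 - to_move            # every line loses; play on anyway

move, winner = best_move(10, 0)
print(f"From 10 stones, player 0 should take {move}; "
      f"perfect play favours player {winner}.")
```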
Of course I would need to defend the position that there is, in fact, an intelligence-consciousness dichotomy. And this position is also complicated by the simple fact that many people define these words in very different ways.
Programming
Some people also argue that because (as it’s often put) a “computer is simply programmed to be intelligent”, it can’t be “genuinely intelligent” at all. Yet that doesn’t follow. Or, more correctly, it doesn’t automatically follow.
Human beings (especially very young children) are also programmed! That is, human beings — not only young children — are fed a language, information, facts, etc.; and they then use all these things in many different ways. Sure, human persons also show a certain degree of flexibility — even at a very young age. That said, so too do some — even many — computers.
Of course many philosophers and scientists also question agency (or “free will”) when it comes to human beings too! That is, they doubt that human beings are genuinely autonomous or free from “determining causes”.
In any case, there are computers which can correct themselves. There are also computers which can go in directions which go beyond the programmes which run them or which they “follow”.
In terms of winning games of chess against human persons or solving mathematical problems which people haven’t solved: isn’t this an example of going beyond the programming? That is, even if such achievements are “a result of the programming”, aren’t they still examples of going beyond the programming? As stated, when human beings go beyond their “programming” (however loosely this word is defined or whether I need to use scare quotes), isn’t that going beyond also a result of the previous programming?
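The same point can be made with a toy example. Below is a hedged sketch in Python (the “corridor” environment, the reward and all the parameters are my own inventions, chosen only for illustration) of a standard technique called tabular Q-learning. Only a generic learning rule is programmed; the successful policy the programme ends up with is written nowhere in its code. It is acquired through interaction.

```python
import random

# A toy illustration of "going beyond the programme". The agent lives
# in a corridor of states 0..4 and is rewarded only for reaching state 4.
# Nothing below encodes the winning policy ("always step right"); only
# a generic update rule (tabular Q-learning) is programmed.

N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                       # step left, step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

def greedy(state: int) -> int:
    # Pick the highest-valued action, breaking ties at random.
    best = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])

for episode in range(200):
    state = 0
    while state != GOAL:
        action = random.choice(ACTIONS) if random.random() < epsilon else greedy(state)
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        target = reward + gamma * max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (target - q[(state, action)])
        state = next_state

# The learned policy was never written into the programme.
print({s: greedy(s) for s in range(GOAL)})   # expected: {0: 1, 1: 1, 2: 1, 3: 1}
```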
Embodied and Embedded Computers
Experience and consciousness have been mentioned.
All this isn’t only a question of computers having (or not having) consciousness or experience. It’s also about the importance of experience when it comes to intelligence. More accurately, it’s about how experience may be necessary in order to have (what Roger Penrose calls) “genuine intelligence”; rather than the possibility that intelligence must always come along with experience or consciousness.
This is something which people involved in AI (including Marvin Minsky) have noted since the 1960s.
One important problem was mainly down to computers not being embodied or embedded within environments. However, computers can be embodied within robots, which can in turn be embedded within physical environments. Indeed some computer-robots also have “artificial organs” which function as “sensory receptors”. (Do I need to use scare quotes here?)
People may now ask if they are real sensory receptors. That is, isn’t it the case that, in order for sensory receptors to be sensory receptors, they would also need to be linked to real experiences or to consciousness itself? Not really. Many scientists cite data showing that even single-celled organisms have sensory receptors. That said, they don’t also believe that such organisms instantiate consciousness or have (humanlike?) experiences.
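As a purely illustrative analogue, here is a short Python sketch of sensing-and-responding without anything one would want to call consciousness. (The gradient function and the step logic are invented; this is a cartoon of bacterial chemotaxis, not a model of any real organism.)

```python
# A toy "sensory receptor" with no consciousness attached: the agent
# compares two successive readings of a chemical gradient and keeps or
# reverses its direction accordingly, roughly as a chemotaxing
# bacterium does. Everything here is invented for illustration.

def gradient(x: float) -> float:
    """Concentration of some attractant, peaking at x = 10."""
    return -abs(x - 10.0)

position, direction = 0.0, 1.0
last_reading = gradient(position)

for _ in range(40):
    position += direction            # move one step
    reading = gradient(position)     # the "receptor" samples the environment
    if reading < last_reading:       # things got worse: reverse direction
        direction = -direction
    last_reading = reading

print(f"Settled near x = {position:.1f}")  # hovers around the peak at 10
```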