The critics of Strong AI don't need to take an absolute position against it. They may simply have a problem with, for example, the particular arguments of a particular philosopher or commentator. In other words, they can (at least in theory) accept Strong AI while rejecting someone else's arguments for it.
Thus critics of Strong AI don't need to believe that non-biological systems “will never be genuinely intelligent” or even that they'll never have consciousness or minds.
So you can still adopt a position which doesn't depend on any biological-artificial dichotomy.
Let's put it this way. There are many natural/biological things which don't display intelligence (though that will depend on definitions) and which don't have (or display) any form of consciousness. On the other hand, there are many non-biological (or "artificial") things which do display (or have) intelligence. This means that there's no necessary or absolute link between the natural/biological and intelligence or between the artificial and a lack of intelligence. The same may even be true of mind, consciousness and experience.
This means that no one needs to adopt a dualist position. That is, one needn't have a problem with intelligent non-biological systems at all. There is, however, an unwarranted leap that's often made from artificial intelligence to artificial minds, and certainly to artificial consciousness. Yet the questioning of these leaps isn't directly connected to any bias towards “carbon-based” or biological systems.
In addition, if a system displays intelligence or "acts intelligently", then one could also argue that, by definition, it must also actually be intelligent. And that brings us to intelligence as dealt with in a more abstract way.
Many people conflate intelligence with consciousness, mind, experience and subjectivity. Thus one can be critical of some claims in AI and not have any problem at all with admitting that computers, robots, etc. are intelligent. The problem is when consciousness, mind and experience are added to the pot. Alternatively, the problem may be when people deem intelligence to be equal/identical to consciousness, mind, experience and subjectivity.
This means that it can be argued that computers (or non-biological complex systems) are already intelligent.
When it comes to intelligence (unlike experience, understanding, consciousness, etc.), there can be no appearance-reality distinction. That is, if a complex system displays intelligence (or acts intelligently), then it must be intelligent. However, that may not also be true of consciousness, experience, understanding and even mind. For example, a complex system may act as if it is conscious; though it may not be so.
Thus, as it is, one doesn't need to be that sceptical (if that's the right word) about the intelligence of complex non-biological systems. However, one can (still) be sceptical about computer minds and computer consciousness.
Let's move back to intelligence.
If a computer beats human beings at chess, then it is intelligent. Full stop. Surely? This means that you can adopt a "behaviourist" position on intelligence; though not on mind or consciousness. In other words, there may be more to a mind than simply displaying intelligence or even being intelligent. As for consciousness or experience, that's something else entirely.
Of course I would need to defend this position that there is, in fact, an intelligence-mind (or intelligence-consciousness) dichotomy. And this position is also complicated by the simple fact that many people define these words in different ways.
Some people also argue that because “a computer is programmed to be intelligent” (as it's often put), then it can't be “genuinely intelligent”. But that doesn't follow. Or, more correctly, it doesn't automatically follow.
Human beings (especially very young human beings) are also programmed - if in a loose sense! They're fed a language and information; and then they use both. Sure, they show a certain degree of flexibility - even at a very young age. Then again, so too do some computers.
To return to human “flexibility”: of course, many philosophers question agency (or "free will") when it comes to human beings too. They also doubt that human beings are genuinely autonomous.
In any case, there are computers which can correct themselves. There are also computers which can move in directions beyond the programmes which run them.
Take winning games of chess against human beings (or solving mathematical problems which people haven't solved): isn't this an example of going beyond the programming? That is, even if such achievements are “a result of the programming”, aren't they still going beyond it? As stated, when human beings go beyond their “programming” (however loosely that word is defined, and whether or not the scare quotes are needed), isn't that going beyond also a result of previous programming?
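The point about self-correction can be illustrated with a toy sketch (a hypothetical example of my own, not drawn from any real chess engine or AI system): a program whose eventual behaviour, a learned threshold, appears nowhere in its source code, but emerges from feedback.

```python
import random

def learn_threshold(samples, steps=1000, seed=0):
    # A trivial self-correcting program: it starts with an arbitrary
    # guess and nudges that guess whenever feedback says it was wrong.
    rng = random.Random(seed)
    guess = 0.0
    for _ in range(steps):
        x, label = rng.choice(samples)
        prediction = x > guess
        if prediction != label:
            # Move the guess towards the sampled point that exposed the error.
            guess += 0.01 if x > guess else -0.01
    return guess

# Data labelled by a rule (x > 0.5) that the program is never told.
data = [(x / 100, x / 100 > 0.5) for x in range(101)]
learned = learn_threshold(data)
# The learner should settle near 0.5, a value absent from its own code.
```

The learned threshold is, in one sense, “a result of the programming”; yet the programme's text never contains it. That is the (limited) sense in which such systems can be said to go beyond what was explicitly written.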
Embodied and Embedded Computers
Experience and consciousness have been mentioned. However, this isn't only a question of computers having (or not having) consciousness or experience. It's also about the importance of experience when it comes to (genuine?) intelligence. More accurately, it's about how experience may be necessary in order to have (genuine?) intelligence; rather than the possibility that intelligence must always come along with experience.
This is something which people involved in AI (including Marvin Minsky) have noted since the 1960s. One important problem was mainly down to computers not being embodied and then embedded within environments. However, computers within robots are both embodied and embedded within environments. Indeed some computer-robots also have “artificial organs” which function as "sensory receptors". However, would they be real sensory receptors? That is, isn't it the case that in order for sensory receptors to be sensory receptors, they would also need to be linked to real experiences or to consciousness itself?