Friday, 8 June 2018

My Letter to Philosophy Now - 'Heidegger's Ways of Being'

Dear Editor,

In the Philosophy Now piece, 'Heidegger's Ways of Being', Andrew Royle claims that Martin Heidegger offered us a “direct refutation of Rene Descartes' solitary introspection”. Is that really the case?

Descartes' “global scepticism” was an epistemological exercise. It had little – or nothing – to do with ontology. It was about how Descartes – or how we – could know, and then philosophically demonstrate, that (to use Andrew Royle's own words) “the world and other people actually exist”. It wasn't even that Descartes didn't believe that the world and other people existed. Descartes' enterprise was about his knowledge of other people's existence. Indeed Descartes' initial scepticism is also what's called “methodological scepticism” (or “methodological doubt”). That is, it was supposed to be a sure route to knowledge. It was a philosophical method which was designed to show us that knowledge of the world and other people is possible.

As for the Heideggerian grammar of the word 'I'.

Say, for argument's sake, that the use of the word 'I' also (as Royle puts it) “necessarily refers to... 'you' or an 'other'”. How did Descartes know that all the people he'd experienced weren't also the simulations of an “evil demon”? Thus such simulations (or mental distortions) might have also grounded Descartes' use of the word 'I'.

To put that another way. If the Matrix and “brain-in-a-vat” (Hilary Putnam) scenarios are possible, then it's equally possible that the simulations we have of other people may ground our use of the word 'I'. Indeed one can even say that a Heideggerian notion of “social Being” (or Dasein) can exist alongside Cartesian scepticism – even if the Matrix and brain-in-a-vat scenarios are possible. (Putnam, of course, argues that his own scenario isn't possible – and for loosely Heideggerian reasons!)

As for solipsism. To quote Andrew Royle himself:

“Although Heidegger's argument works to abate Descartes' solipsism... Whilst the 'I' (or 'ego') was indubitably alone for Descartes...”

In everyday-life terms, Descartes would have left his doubts well behind after he'd solved (or thought he'd solved) the “sceptical problem”. (Just as Hume forgot his own scepticism when playing billiards.)

This means that Descartes most certainly wasn't a solipsist. (Though it can of course be said that he was a “methodological solipsist” for the duration of the Cogito.)

A genuine solipsist is someone who does indeed have an ontological position on what Royle calls the “I” or “ego”. What's more, a solipsist feels the reality of his solipsism throughout his life. (Or at least he does so for as long as he's a solipsist or thinks about his philosophical predicament.) Descartes, on the other hand, took a journey from his radical scepticism to a sure knowledge (or so he believed) of the world and other people. Now that's very far from being solipsism.

Not only that: solipsism has ontological and ethical implications. However, that isn't really the case when it comes to Descartes' scepticism. Having said that, it's indeed the case that certain political and sociological theorists have interpreted Descartes' scepticism as a 17th-century philosophical expression of “bourgeois individualism”. Yet even if that were true, Descartes never made this explicit. With Heidegger and solipsists, on the other hand, their ontological and ethical positions are indeed made explicit.

Paul Austin Murphy.

Saturday, 2 June 2018

James Ladyman on Structural Realism

In his Understanding Philosophy of Science, James Ladyman says that “structural realism” was “introduced” by the philosopher John Worrall.

This position - within the philosophy of science (though mainly within the philosophy of physics) - has it that structures are fundamental. What's more, structures are real (hence the word “realism”).

At the heart of structural realism is the idea that physics essentially deals with structures, not with “things” or entities. More importantly, it is these structures which are retained in physics; not the things which physics posits. That is, such structures can be passed on from an old theory to a new theory (i.e., when both theories are - ostensibly - about the same phenomenon or problem).

Thus structural realism is a realism about structure, not about things, conditions or “empirical content”. As Ladyman puts Worrall's position:

... we should not accept full blown scientific realism, which asserts that the nature of things is correctly described by the metaphysical and physical content of our best theories. Rather, we should adopt the structural realist emphasis on the mathematical or structural content of our theories.”

More relevantly:

Since there is (says Worrall) retention of structure across theory change, structural realism both (a) avoids the force of the pessimistic meta-induction (by not committing us to beliefs in the theory's description of the furniture of the world), and (b) does not make the success of science... seem miraculous...”

Thus structural realism has an argument against the well-known position of “pessimistic meta-induction”: i.e., such pessimism only applies to “empirical content” and “theory”, not to structure. That is, structure is retained “across theory change” and thus total inductive pessimism is unjustified.

So three questions arise here:

i) What is structure?
ii) Is structure really retained “across theory change”?
iii) And (this is related to i) above) how is structure distinguished from “empirical content” and “theory”?

Henri Poincaré's Structuralism

James Ladyman traces structural realism back to Max Planck and Henri Poincaré. For example, Ladyman quotes Planck stating the following:

... 'Thus the physical world has become progressively more and more abstract; purely formal mathematical operations play a growing part.'...”

Indeed Planck's position is one which most physicists would uphold; even if they wouldn't use Planck's precise wording. Of course this is simply an indirect acknowledgment that there would be no physics without “mathematical operations”; or, more broadly, without mathematics itself.

Thus, in a broad sense, the structural realism position is that it's all about the maths. Or, at the least, it's all about the mathematical structures noted by physicists.

Ladyman also tells us that Poincaré

talks of how the redundant theories of the past capture the 'true relations' between the 'real objects which Nature will hide for ever from our eyes'...”

So here Poincaré's response to the pessimistic meta-induction is to argue that “true relations” are retained from theory to theory; even if the things or phenomena mentioned in the theories aren't. As can also be seen, Poincaré used different jargon from that of contemporary structural realists in that he talked of “true relations” rather than “structures”. Then again, structural realists also make extensive use of the word “relations”. After all, it's the structures of the physical world which account for these relations; and, in a sense, they're also constituted by such relations.

Nonetheless, Poincaré does seem to depart from contemporary structural realism when he talked about “real objects”. As can be shown, structural realists (especially ontic structural realists) dispense entirely with objects or things - “real” or otherwise. Or at least they believe that “every thing must go”. Despite that, since Poincaré qualifies his reference to “real objects” with the clause that “Nature will hide [these real objects] for ever from our eyes”, then it can be said that Poincaré – effectively - did indeed dispense with objects/things too. That is, when Poincaré used the phrase “for ever from our eyes”, presumably he wasn't only talking about literal visual (or observational) contact with real objects. He must have also meant any kind of contact with them – including (as it were) theoretical contact. Thus Poincaré's real objects were little more than Kantian noumena and therefore of little use in physics. Then again, since Poincaré was also a Kantian, noumena might well have had a role to play in his metaphysics and physics.

So was Poincaré a Kantian and a structural realist at one and the same time?

Just as Poincaré used the words “true relations” instead of the word “structure”, so the philosopher of science Howard Stein uses the word “Forms”. That is, Stein says (as quoted by Ladyman) that

our science comes closest to comprehending 'the real', not in its account of 'substances' and their kinds, but in its account of the 'Forms' which phenomena 'imitate' (for 'Forms' read 'theoretical structures', for 'imitate', 'are represented by')”.

Clearly Stein is attempting to tie contemporary structural realism to a long philosophical - and indeed Platonic - tradition. He does so with his use of the words “Forms” (with a Platonic capital 'F') and “imitate”. Then again, he also rejects the equally venerable (i.e., in the history of philosophy) “substances” and “kinds”.

Having said that, this very same passage can be read as expressing the position that Forms (or “theoretical structures”) are actually imitating (or “representing”) “substances and their kinds”. So, as with Kantian noumena, it's not that substances don't exist: it's that our only access to them is through theoretical structures: i.e., through the mathematical structures and models of physics. If this reading of Stein is correct, then that makes his position almost eliminativist. As with Kant's noumena, Poincaré's “real objects” and ontic structural realism's “things”, aren't Stein's “substances” also (to use Wittgenstein's words) “idle wheels in the mechanism”? What purpose do they serve? Do they serve as an abstract Kantian “grounding” or as a Lockean “I know not what”? Or, to quote Wittgenstein again, perhaps it's best to conclude: “Whereof one cannot speak, thereof one must be silent.”

Examples from Physics

Maxwell and Fresnel

James Ladyman cites John Worrall's example of the structural elements of Augustin-Jean Fresnel's theory (of light waves) passing over to James Clerk Maxwell's later theory. Ladyman quotes Worrall thus:

... 'There was an important element of continuity in the shift from Fresnel to Maxwell.'... ”

More relevantly, this

'was much more than a simple question of carrying over the successful empirical content into the new theory'...”

However, neither was it just a case of “carrying over the full theoretical content or full theoretical mechanisms”.

Thus, if it's not just a case of “empirical content” and “theoretical content” being “carried over”, then what else was also carried over? The answer to this is: structure. That is, Fresnel's theory shares a certain structure with Maxwell's later theory. Or as Worrall himself puts it:

... 'There was continuity or accumulation in the shift, but the continuity is one of form or structure, not of content.'...”

Clearly Worrall doesn't see Fresnel's and Maxwell's theories as only being (what's often called) “empirically equivalent”. He states that it's not (only) “empirical contents” which are passed on. That must mean that the two theories are also theoretically “under-determined” by the empirical content.

This means that Fresnel's and Maxwell's theories are neither empirically nor theoretically identical. So does that mean that these two theories are structurally (or formally) identical instead? Worrall may not also believe in complete structural identity between these two (or any) separate theories. However, he clearly does believe that structural identity is more important (or more substantive) than any empirical or theoretical identity.

Thus it follows from all this that we'll now need to know how it is, precisely, that structure is distinguished from both empirical and theoretical content when it comes to the theories of Fresnel and Maxwell – and indeed when it comes to any comparative theories in physics.
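Worrall's stock illustration of this point (well known in the literature, though not quoted in Ladyman's passage above) is Fresnel's equations for the relative amplitudes of reflected and refracted polarised light, which survived intact when the elastic ether gave way to the electromagnetic field:

```latex
% Fresnel's amplitude ratios for light polarised in the plane of
% incidence and perpendicular to it (i = angle of incidence,
% r = angle of refraction). Fresnel derived these from an elastic-ether
% theory; they were retained unchanged in Maxwell's electromagnetic theory.
\frac{R}{I} = \frac{\tan(i - r)}{\tan(i + r)}, \qquad
\frac{R'}{I'} = \frac{\sin(i - r)}{\sin(i + r)}
```

The ontology changed completely – mechanical vibrations in an ether versus oscillating electromagnetic fields – yet these equations (i.e., the structure) carried over.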

Newton & Quantum Mechanics

Worrall also attributes a structuralist position (if not an explicit acceptance of structuralism) to Isaac Newton. Worrall describes Newton's structuralist reality (if not his position) in the following manner:

... 'On the structural realist view, what Newton really discovered are the relationships between phenomena expressed in the mathematical equations of his theory.'... ”

In certain respects, this is certainly true. For example, it's often and justifiably stated that quantum mechanics wouldn't so much as exist without its mathematical descriptions and predictions. John Horgan, for one, states that

mathematics helps physicists define what is otherwise undefinable. A quark is a purely mathematical construct. It has no meaning apart from its mathematical definition. The properties of quarks – charm, colour, strangeness – are mathematical properties that have no analogue in the macroscopic world we inhabit.”

Isn't all the above just as true of much of Newton's work? However, it's certainly the case that Newton wasn't an eliminativist when it came to things/objects (or when it came to Poincaré's “real objects”). Despite that, it was still “mathematical equations” which captured the things or phenomena Newton was accounting for in his theories.
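To make the Newtonian case concrete: on the structural realist reading, what survives of Newton is, above all, the relation expressed by his law of universal gravitation:

```latex
% Newton's law of universal gravitation: a relation between phenomena
% (masses, distance, force) rather than a claim about what gravity
% intrinsically is.
F = \frac{G m_1 m_2}{r^2}
```

Newton himself famously “feigned no hypotheses” about gravity's intrinsic nature; the equation specifies only how the quantities relate – which is precisely the structural realist's point.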

The question (as with quantum mechanics) is:

Is there any remainder after the mathematics (or mathematical structure) is taken away?

What's left? Kantian noumena or, well, literally nothing? Of course it's hard to defend an eliminativist position when it comes to Newton and the concrete things he was talking about (e.g., stars, the moon, etc.). However, eliminativism seems much more appealing and justified when it comes to the micro world of quantum mechanics. In this realm, everything really does seem to be mathematical. Quite simply, there are no genuine equivalents to the moon, stars and even gravity (at least Newtonian gravity) in the quantum realm. In other words, our only access to the micro world is through mathematics. Clearly, that can't also be said of the world as described by Newton.

Friday, 18 May 2018

Searle & Patricia Churchland on Dualism in AI and Cognitive Science

John Searle accuses many of those who accuse him of being a “dualist” of being... dualists. Bearing in mind the philosophical ideas discussed in the following, his stance isn't a surprise.

Searle's basic position on this is:

i) If Strong AI proponents, computationalists or functionalists, etc. ignore or play down the physical biology of brains; and, instead, focus exclusively on syntax, computations and functions (the form/role rather than the physical embodiment),
ii) then that will surely lead to some kind of dualism in which non-physical abstractions basically play the role of Descartes' non-physical and “non-extended” mind.

Or to use Searle's own words:

"If mental operations consist of computational operations on formal symbols, it follows that they have no interesting connection with the brain, and the only connection would be that the brain just happens to be one of the indefinitely many types of machines capable of instantiating the program. This form of dualism is not the traditional Cartesian variety that claims that there are two sorts of substances, but it is Cartesian in the sense that it insists that what is specifically mental about the brain has no intrinsic connection with the actual properties of the brain. This underlying dualism is masked from us by the fact that AI literature contains frequent fulminations against 'dualism'.”

Searle is noting the radical disjunction created between the actual physical reality of biological brains and how these philosophers and scientists explain and account for mind, consciousness and understanding.

So Searle doesn't believe that only biological brains can give rise to minds, consciousness and understanding. Searle's position is that, at present, only biological brains do give rise to minds, consciousness and understanding. Searle is emphasising an empirical fact; though he's not denying the logical and metaphysical possibility that other things can bring forth mind, consciousness and understanding.

Searle is arguing that the biological brain is played down or even ignored by those in AI, cognitive science generally and many in the philosophy of mind. And when put that bluntly, it seems like an almost perfect description of dualism. Or, at the very least, it seems like a stance (or position) which would help advance a (non-Cartesian) dualist philosophy of mind.

Yet because those people just referred to (who're involved in artificial intelligence, cognitive science generally and the philosophy of mind) aren't committed to what used to be called a “Cartesian ego” (they don't even mention it), then the charge of “dualism” seems – superficially! - to be unwarranted. However, someone can be a dualist without being a Cartesian dualist. Or, more accurately, someone can be a dualist without that someone positing some kind of non-material substance formerly known as the Cartesian ego. However, just as the Cartesian ego is non-material, non-extended (or non-spatial) and perhaps also abstract; so too are the computations and the (as Searle puts it) “computational operations on formal symbols” which are much loved by those involved in AI, cognitive science and whatnot.

Churchland on Functionalism as Dualism

Unlike Searle, Patricia Churchland doesn't actually use the word “dualist” for her opponents; though she does say the following:

Many philosophers who are materialists to the extent that they doubt the existence of soul-stuff nonetheless believe that psychology ought to be essentially autonomous from neuroscience, and that neuroscience will not contribute significantly to our understanding of perception, language use, thinking, problem solving, and (more generally) cognition.”

Put in Churchland's way, it seems like an extreme position. Basically, how could “materialists” (when it comes to the philosophy of mind and psychology) possibly ignore the brain? 

It's one thing to say that 

“psychology is distinct from neuroscience”. 

It's another thing to say that psychology is 

“autonomous from neuroscience” 

and that 

“neuroscience will not contribute significantly to our understanding” of cognition. 

Sure, the division of labour idea is a good thing. However, to see the “autonomous” in “autonomous science” as being about complete and total independence is surely a bad idea. In fact it's almost like a physicist stressing the independence of physics from mathematics.

Churchland thinks that biology matters. In this she has the support of many others. 

For example, the Nobel laureate Gerald Edelman says that the mind

can only be understood from a biological standpoint, not through physics or computer science or other approaches that ignore the structure of the brain”.

In addition, you perhaps wouldn't ordinarily see Patricia Churchland and John Searle as being bedfellows; though in this issue they are. So it's worth quoting a long passage from Searle which neatly sums up some of the problems with non-biological theories of mind. He writes:

I believe we are now at a point where we can address this problem as a biological problem [of consciousness] like any other. For decades research has been impeded by two mistaken views: first, that consciousness is just a special sort of computer program, a special software in the hardware of the brain; and second that consciousness was just a matter of information processing. The right sort of information processing -- or on some views any sort of information processing --- would be sufficient to guarantee consciousness..... it is important to remind ourselves how profoundly anti-biological these views are. On these views brains do not really matter. We just happen to be implemented in brains, but any hardware that could carry the program or process the information would do just as well. I believe, on the contrary, that understanding the nature of consciousness crucially requires understanding how brain processes cause and realize consciousness.. ” 

In a sense, then, if one says that biology matters, one is also saying that functions aren't everything (though not that functions are nothing). Indeed Churchland takes this position to its logical conclusion when she more or less argues that in order to build an artificial brain one would not only need to replicate its functions: one would also need to replicate everything physical about it.

Here again she has the backup of Searle. He writes:

Perhaps when we understand how brains do that, we can build conscious artifacts using some non-biological materials that duplicate, and not merely simulate, the causal powers that brains have. But first we need to understand how brains do it.”

Of course it can now be said that we can have an artificial mind without having an artificial brain. Nonetheless, isn't it precisely this position which many dispute (perhaps Churchland does too)?

In any case, to use Churchland's own words on this subject, she says that

it may be that if we had a complete cognitive neurobiology we would find that to build a computer with the same capacities as the human brain, we had to use as structural elements things that behaved very like neurons”.

Churchland continues by saying that

the artificial units would have to have both action potentials and graded potentials, and a full repertoire of synaptic modifiability, dendritic growth, and so forth”.

It gets even less promising for functionalism when Churchland says that

for all we know now, to mimic nervous plasticity efficiently, we might have to mimic very closely even certain subcellular structures”.

Put that way, Churchland makes it sound as if an artificial mind (if not artificial intelligence) is still a pipe-dream.

Readers may also have noted that Churchland was only talking about the biology of neurons, not the biology of the brain as a whole. However, wouldn't the replication of the brain (as a whole) make this whole artificial-mind endeavour even more complex and difficult?

In any case, Churchland sums up this immense problem by saying that

we simply do not know at what level of organisation one can assume that the physical implementation can vary but the capacities will remain the same”.

That's an argument which says that it's wrong to accept the implementation-function “binary opposition” (to use a phrase from Jacques Derrida) in the first place. Though that's not to say - and Churchland doesn't say - that it's wrong to concentrate on functions or cognition generally. It's just wrong to completely ignore the “physical implementation”. Or, as Churchland says at the beginning of one paper, it's wrong to “ignore neuroscience” and focus entirely on function.

Churchland puts the icing on the cake herself by stressing function. Or, more correctly, she stresses the functional levels which are often ignored by functionalists.

Take the cell or neuron. Churchland writes that

even at the level of cellular research, one can view the cell as being the functional unit with a certain input-output profile, as having a specifiable dynamics, and as having a structural implementation in certain proteins and other subcellular structures”.

Basically, what's being said here is that in many ways what happens at the macro level of the mind-brain (in terms of inputs and outputs) also has an analogue at the cellular level. In other words, functionalists are concentrating on the higher levels at the expense of the lower levels.

Another way of putting this is to say what Churchland herself argues: that neuroscientists aren't ignoring functions at all. They are, instead, tackling biological functions, rather than abstract cognitive functions.
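Churchland's claim that the cell itself can be viewed as a functional unit “with a certain input-output profile” can be given a toy illustration. The leaky integrate-and-fire model below is a standard textbook simplification (my example, not Churchland's): it captures a neuron's input-output profile and dynamics while abstracting away exactly the protein-level, subcellular implementation she warns may matter.

```python
# Toy leaky integrate-and-fire neuron: a cell viewed as a functional
# unit with an input-output profile (current in, spike train out).
# A standard textbook simplification, not Churchland's own model; it
# deliberately omits the subcellular detail she says may be essential.

def simulate_lif(input_current, threshold=1.0, leak=0.1, dt=1.0):
    """Return a list of 0/1 spikes, one per time step."""
    v = 0.0          # membrane potential (a graded potential)
    spikes = []
    for i_in in input_current:
        v += dt * (i_in - leak * v)   # integrate input, leak charge
        if v >= threshold:            # 'action potential' fires
            spikes.append(1)
            v = 0.0                   # reset after the spike
        else:
            spikes.append(0)
    return spikes

# Constant drive above the leak equilibrium yields a regular spike train.
out = simulate_lif([0.3] * 20)
```

On Churchland's view, of course, the open question is precisely whether such an abstraction – which ignores synaptic modifiability, dendritic growth and subcellular structure – preserves the capacities that matter.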

Thursday, 17 May 2018

Searle on Artificial Intelligence (AI) and the Brain's Causal Powers

John Searle's position on artificial consciousness and artificial understanding is primarily based on what he calls the “causal powers” of the biological (human) brain.

This is Searle himself on this subject:

Some people suppose that I am claiming that it is in principle impossible for silicon chips to duplicate the causal powers of the brain. That is not my argument... It is a factual question, not to be settled on purely philosophical or a priori grounds, whether or not the causal powers of neurons can be duplicated in some other material, such a silicon chips, vacuum tubes, transistors, beer cans, or some quite unknown chemical substances. The point of my argument is that you cannot duplicate the causal powers of the brain solely in virtue of instantiating a computer program, because the computer program has to be defined purely formally.”

[This passage can be found in Searle's paper 'Minds and Brains Without Programs' – not to be confused with his well-known 'Minds, Brains and Programs'.]

It does seem quite incredible that a fair few of those involved in artificial intelligence (AI) and cognitive science generally completely downplay - or even ignore - brains and biology. This is especially the case when one bears in mind that biological brains are the only things - at present! - which display consciousness and experience. Nonetheless, when it comes to intelligence, it can be said that computers already do indeed display intelligence or even that they are intelligent. (Though this too is rejected by many people.)

So Searle simply believes that there has to be some kind of strong link between biology and mind, consciousness and understanding in the simple sense that - at this moment in time - only biological systems have minds, consciousness and understanding. (Of course the question “How do we know this?” can always be asked here.) Thus there's also a strong link between biological brains and complex (to use Searle's words) “causal powers”. However, this doesn't automatically mean that mind, consciousness and understanding must necessarily be tied to biology. It just means that, at this moment in time, it is so tied. And that tight and strong link between biology and consciousness, mind and understanding is itself linked to the requisite complex causal powers which are only instantiated - so far! - by complex biological brains.

Thus Searle's talk about causal powers refers to the argument that a certain level of complexity is required to bring about the causal powers which are required for mind, consciousness and understanding (as well as for “intentionality” and semantics, in Searle's writings).

To repeat. Searle never argues that biological brains are the only things capable - in principle - of bringing about minds, consciousness and understanding. He says that biological brains are the only things known which are complex enough to do so. That means that it really is all about the biological, physical and causal complexity of brains.

Causal Powers?

The problem here is to determine what exactly Searle means by the words “causal powers”. We also need to know about the precise relation between such causal powers and consciousness, understanding and, indeed, intelligence.

One hint seems to be that the brain's causal powers are over and above what is computable and/or programmable. Alternatively, perhaps it's just an argument that, at the present moment in time (in terms of technology), these complex causal powers are not programmable or computable.

Indeed at the end of the quote above, Searle moves on from talking about causal powers to a hint at his Chinese Room argument/s. So to repeat that passage:

The point of my argument is that you cannot duplicate the causal powers of the brain solely in virtue of instantiating a computer program, because the computer program has to be defined purely formally.”

The argument here is that something physical is required in addition to an abstract “computer program” and the computations/algorithms/rules/etc. contained within it. And that something physical also happens to be biological – i.e., the brain. In other words, computer programmes are “purely formal”. Brains, on the other hand, are both biological and physical. Thus even if programmes or computations capture the “form” or syntax (as it were) of a brain's computations and even of its physical structure/s, they still don't capture its biological physicality. That is, they don't replicate the brain's causal powers.

However, if programmes were instantiated in non-biological physical constructions, then we could - at least in principle - replicate (or capture) both the forms/syntax/computational nature of the biological brain and also its causal powers. It's just that no physical computer (which runs abstract programmes) at present does replicate (or capture) the complex causal powers of the biological brain.
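Searle's claim that a program is “defined purely formally” can be illustrated with a deliberately crude sketch in the spirit of his Chinese Room (the rule book below is an invented placeholder, not Searle's own example). The program pairs symbols by shape alone; nothing in it has access to what – if anything – the symbols mean.

```python
# A purely formal symbol manipulator in the spirit of the Chinese Room:
# it maps input strings to output strings by shape alone. The rule book
# is an invented placeholder; nothing in the program 'knows' Chinese,
# or indeed that the symbols mean anything at all.

RULE_BOOK = {
    "你好吗": "我很好",      # hypothetical question -> canned reply
    "你叫什么": "我叫房间",   # hypothetical question -> canned reply
}

def chinese_room(symbols: str) -> str:
    """Return whatever the rule book pairs with the input symbols."""
    return RULE_BOOK.get(symbols, "不明白")  # default: 'don't understand'

reply = chinese_room("你好吗")
```

The point is not that this particular program is too simple; Searle's argument is that no amount of extra rules changes its purely formal character.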

Saturday, 12 May 2018

Strong AI: Does Intelligence Always Come Along with Consciousness or Mind?

The critics of Strong AI don't need to take an absolute position against it. They may simply have a problem with, for example, the particular arguments of a particular philosopher or commentator. In other words, they can also (at least in theory) accept Strong AI and reject someone else's arguments for it.

Thus critics of Strong AI don't need to believe that non-biological systems “will never be genuinely intelligent” or even that they'll never have consciousness or minds.

So you can still adopt a position which has nothing strongly to do with any biological-artificial dichotomy.

Let's put it this way. There are many natural/biological things which don't display intelligence (though that will depend on definitions) and which don't have (or display) any form of consciousness. On the other hand, there are many non-biological (or "artificial") things which do display (or have) intelligence. This means that there's no necessary or absolute link between the natural/biological and intelligence or between the artificial and a lack of intelligence. The same may even be true of mind, consciousness and experience.

This means that no one needs to adopt a dualist position. That is, one needn't have a problem with intelligent non-biological systems at all. However, there's also an unwarranted leap that's often made from artificial intelligence to artificial minds and certainly to artificial consciousness. However, the questioning of these leaps isn't directly connected to any bias towards “carbon-based” or biological systems.

In addition, if a system displays intelligence or "acts intelligently", then one could also argue that, by definition, it must also actually be intelligent. And that brings us to intelligence as dealt with in a more abstract way.


Many people conflate intelligence with consciousness, mind, experience and subjectivity. Thus one can be critical of some claims in AI and not have any problem at all with admitting that computers, robots, etc. are intelligent. The problem is when consciousness, mind and experience are added to the pot. Alternatively, the problem may be when people deem intelligence to be equal/identical to consciousness, mind, experience and subjectivity.

This means that it can be argued that computers (or non-biological complex systems) are already intelligent.

When it comes to intelligence (unlike experience, understanding, consciousness, etc.), there can be no appearance-reality distinction. That is, if a complex system displays intelligence (or acts intelligently), then it must be intelligent. However, that may not also be true of consciousness, experience, understanding and even mind. For example, a complex system may act as if it is conscious; though it may not be so.

Thus, as it is, one doesn't need to be that sceptical (if that's the right word) about the intelligence of complex non-biological systems. However, one can (still) be sceptical about computer minds and computer consciousness.

Let's move back to intelligence.

If a computer beats human beings at chess, then it is intelligent. Full stop. Surely? This means that you can adopt a "behaviourist" position on intelligence; though not on mind or consciousness. In other words, there may be more to a mind than simply displaying intelligence or even being intelligent. As for consciousness or experience, that's something else entirely.

Of course I would need to defend this position that there is, in fact, an intelligence-mind(consciousness) dichotomy. And this position is also complicated by the simple fact that many people define these words in different ways.


Some people also argue that because “a computer is programmed to be intelligent” (as it's often put), then it can't be “genuinely intelligent”. But that doesn't follow. Or, more correctly, it doesn't automatically follow.

Human beings (especially very young human beings) are also programmed - if in a loose sense! They're fed a language and information; and then they use both. Sure, they show a certain degree of flexibility - even at a very young age. Then again, so too do some computers.

In terms of the human “flexibility” again. Of course many philosophers question agency (or "free will") when it comes to human beings too. They also doubt that human beings are genuinely autonomous.

In any case, there are computers which can correct themselves. There are also computers which can go in directions which go beyond the programmes which run them.

Take winning games of chess against human beings (or solving mathematical problems which people haven't solved): isn't this an example of going beyond the programming? That is, even if such achievements are “a result of the programming”, don't they still go beyond it? Similarly, when human beings go beyond their “programming” (however loosely that word is defined), isn't that going beyond also a result of the previous programming?
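The idea that a system's later behaviour can outrun what was explicitly written into it can be given a minimal (and purely illustrative) sketch: a program whose author writes only an update rule, never the final strategy. The “game” and its payoffs below are invented for illustration.

```python
# A minimal learner: the programmer specifies only an update rule,
# not the final behaviour. After training, the preferred action was
# never written into the source code; it emerged from feedback.
# The 'game' and its payoffs are invented purely for illustration.

def train(payoffs, rounds=100):
    """Learn action values from observed payoffs; return the values."""
    values = {action: 0.0 for action in payoffs}
    for _ in range(rounds):
        for action, payoff in payoffs.items():
            # Incremental update: nudge the estimate toward the payoff.
            values[action] += 0.1 * (payoff - values[action])
    return values

# The source code nowhere says 'prefer b'; that preference is learned.
learned = train({"a": 0.2, "b": 1.0, "c": 0.5})
best = max(learned, key=learned.get)
```

Whether this counts as genuinely going beyond the programming – rather than just being a more indirect result of it – is, of course, exactly the question at issue.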

Embodied and Embedded Computers

Experience and consciousness have been mentioned. However, this isn't only a question of computers having (or not having) consciousness or experience. It's also about the importance of experience when it comes to (genuine?) intelligence. More accurately, it's about how experience may be necessary in order to have (genuine?) intelligence; rather than the possibility that intelligence must always come along with experience.

This is something which people involved in AI (including Marvin Minsky) have noted since the 1960s. One important problem was mainly down to computers not being embodied and then embedded within environments. However, computers within robots are both embodied and embedded within environments. Indeed some computer-robots also have “artificial organs” which function as "sensory receptors". However, would they be real sensory receptors? That is, isn't it the case that, in order for sensory receptors to be sensory receptors, they would also need to be linked to real experiences or to consciousness itself?