Monday 16 September 2019

Physicist Lee Smolin on Intrinsic Essence... and (?) Panpsychism




i) Introduction
ii) Panpsychism
iii) Substance or Intrinsic Essence?
iv) Relationalism

Lee Smolin is a theoretical physicist with many philosophical interests and inclinations (from Democritus to Leibniz). This interplay between science and philosophy runs throughout Smolin's writings.

So let Smolin lay his cards on the table. He writes:

"[T]here are questions that science cannot answer now but that are so clearly meaningful that sometime in the future, it is hoped, science will evolve language, concepts, and experimental techniques to address them."

The question is:

Does Smolin believe that "science cannot address" what he calls "intrinsic essence" - and thereby consciousness and qualia?

It would seem that Smolin does believe that. One may speak for Smolin here and say that although science will progress in the future, the "hard problem" (to use David Chalmers' term) will remain beyond it. So this isn't even an issue of insolubilia versus incompletability (i.e., the position which states that even though science's problems are soluble in principle, science will never be complete). No, this is a position that embraces insolubility regardless of completability.

Bearing in mind all the above, Smolin explicitly places his intrinsic essences outside the ambit of science. Or at least he does so when it comes to qualia and consciousness. He writes:

"The problem of qualia, or consciousness, seems unanswerable by science because it's an aspect of the world that is not encompassed when we describe all the physical interactions among particles."

Smolin goes further:

"Neuronal processes are subject to description by physics and chemistry, but no amount of detailed description in those terms will answer the questions as to what qualia are like or explain why we perceive them."

Firstly, do all scientific accounts only "describe all the physical interactions among particles"? No: cognitive science (or linguistics, psychology, artificial intelligence, philosophy and neuroscience) isn't concerned with interactions among particles. Smolin seems to be demanding either a reductive science or no science at all. Nonetheless, on Smolin's behalf it can be said that various sciences do indeed deal with these aspects of the mind and brain: they just don't deal with qualia and consciousness. Some philosophers and scientists would beg to differ with that too.

Again, must a science of mind and brain be entirely devoted to "neuronal processes" and therefore to "physics and chemistry"? Well, there is one science that does deal with these things: neuroscience. However, do neuroscientists also deal with qualia or consciousness? Many would say 'no'. Indeed, (almost) by definition, 'no'.

As it is, current science probably can't answer "what qualia are like" or "why we perceive them". Of course one trick here is to deny the existence of qualia and consciousness outright (as some philosophers and scientists have done). Alternatively, one can explain consciousness (though not qualia) in terms that are amenable to third-person (scientific) evaluation and tests (as Daniel Dennett does).

(Just a pedantic note here on Smolin's wording. If one believes in qualia, then we don't "perceive them". Instead, they are “part of us”, as Roger Scruton argues.)

Panpsychism

It's quite a surprise that Smolin never mentions panpsychism. He doesn't even engage with the possibility that the intrinsic essence of the mind might be tied to the intrinsic essence of inanimate objects. That is, Smolin says that rocks and atoms have an "internal aspect", which is their "intrinsic essence". And we also have consciousness, which he tells us "is an aspect of the intrinsic essence of brains". As just stated, Smolin doesn't tie the intrinsic essence of the brain to the intrinsic essence of a rock or atom, apart from saying that they all have intrinsic essences. The panpsychist, of course, goes one step further than Smolin: the intrinsic essence of the brain is one and the same thing as the intrinsic essence of a rock (often talked about in terms of “phenomenal properties”). That means, of course, that the rock is conscious (or has phenomenal properties) too, if to a markedly lesser degree than the human brain.

It has just been said that Smolin doesn't tackle (or even mention) panpsychism, despite the fact that some of his positions seem very close to that philosophical position. So even though Smolin hints at insolubility for certain philosophical and scientific (hard) problems, he still has a problem with off-the-wall metaphysical theories.

So does Smolin apply the following stern points to panpsychism itself? He writes:

"It is easy to make stuff up, and the bookshelves are full of metaphysical proposals. But we want real knowledge, which means there must be a way to confirm a proposed answer. This limits us to science. If there's another route to reliable knowledge of the world besides science, I'm unlikely to take it..."

This seems like a strong expression of Smolin's naturalism. Yet, as hinted at in this piece, Smolin's commitment to intrinsic essences (as well as what he says about qualia and consciousness) seems to clash with his naturalism. Nonetheless, there are indeed philosophers who are panpsychists (as well as philosophers who accept qualia) who also class themselves as naturalists.

To be more specific: are there ways to “confirm” panpsychism or intrinsic essence? In addition, if such things are beyond science, then they can't be (to use Smolin's own words) "another route to reliable knowledge of the world". Then again, panpsychists don't generally make epistemological claims. They make (speculative) metaphysical claims instead. So perhaps this is how both Smolin and panpsychists escape this dilemma.

Substance or Intrinsic Essence?

Smolin, as a theoretical physicist, puts his case very bluntly:

“What is the substance of the world? We think of matter as simple and inert, but we don't know anything about what matter really is. We know only how matter interacts.”

This passage partly reflects one written by the philosopher David Chalmers:

“[G]iven that physics ultimately deals in structural and dynamical properties, it seems that all physics will ever entail is more structure and dynamics, which will never entail the existence of experience.”

Certain things can be drawn out of the passage above. It can be said (as Smolin more or less does) that if “matter [is] simple and inert”, then Smolin sees matter as substance. Yet matter is constituted by fermions, which are far from being “simple and inert”. That's unless Smolin is simply putting the position of the layperson.

Where we have substance, we also have Smolin's intrinsic essence. Smolin, however, says that all we have access to is “how matter interacts”. That ties in with Smolin's relationalism. It also ties in with ontic structural realism and, perhaps more importantly, with what many physicists themselves believe.

To return to Smolin's quote above: he seems to argue that we only know that which “interacts”. That is, the world only becomes concrete (as it were) when there are interactions. Thus we can now say that Smolin's substances (or things) are the vehicles of those interactions. Yet we “don't know anything about” substances or things. Does that mean that substances (or things) “must go”? Not necessarily. Take Kant's warnings about talking of noumena: despite those warnings, Kant still said much about noumena. That is, the fact that noumena are outside experience means that they must be accommodated by reason instead. Can the same be said about Smolin's substances or intrinsic essences?

Indeed Smolin seems to fuse (or conflate) the terms “substance” and “intrinsic essence”.

Smolin believes that the substance of any x is also “the essence” of x. Or, more concretely, the substance of matter is “the essence of matter”. Thus Smolin moves from substance to intrinsic essence.

Are we playing games with technical terms here? Not necessarily. We can say that a substance is the essence of a given x. And x's essence can also be fleshed out in terms of its intrinsic properties. We can also see the terms “substance” and “intrinsic essence” as synonyms. Having said that, in analytic metaphysics it's no doubt the case that these terms can be firmly disentangled. The question is whether that analytic disentanglement gets us anywhere.

For example, Smolin's use of the word “substance” bears virtually no relation to Aristotle's use of that term. (Aristotle is the man when it comes to substance.) Indeed Smolin's substance doesn't even bear much of a similarity with the technical “substratum” either. But perhaps these technical nuances don't matter that much to this specific debate. After all, analytic metaphysicians probably don't know much about, say, spin networks or loop quantum gravity.

So let's go into Smolin's detail to see if we're only playing games with words here.

Smolin opposes his “essence” to “relationships and interactions”. Now surely it isn't a mere word-game to oppose relationships/interactions with substances/essences. The latter are surely static and unchanging; whereas the former evidently aren't.

Yet Smolin also seems to believe that we can forget (or even eliminate) essences (or substances) when he writes that

“there's nothing real in the world apart from those properties defined by relationships and interactions”.

In The Trouble With Physics, Smolin also says that he doesn't believe that there are intrinsic properties/essences! Instead, as he puts it, “all properties are about relations between things”.

Thus if entities have intrinsic essences, then those essences will neither change over time nor can they be changed by other entities (or conditions). (Unless the entity concerned simply stops existing as the entity that it is.) According to Smolin, it's this ostensibly unchanging nature of intrinsic essences which makes them “absolute”. And that's why Smolin also uses the words “absolute properties”, by which he means:

“absolute entities” = entities with intrinsic properties/essences

Yet, at least prima facie, Smolin isn't happy with this eliminativism either. He writes:

“Sometimes this idea seems compelling to me; at other times it seems absurd.”

Smolin then explains his worries:

“It does neatly get rid of the question of what things really are. But does it make sense for two things to have a relation – to interact – if they are nothing intrinsically?”

An immediate reply to that would be to say that “things” may interact and have relations with other things without their also having intrinsic essences. In simple terms, can't a given x interact without having an intrinsic essence? That is, any given x can still interact even though it is itself a thing that is always changing.

Relationalism

Smolin puts what's called a “relationalist” position. Yet it's a relationalist position which also seems to embrace intrinsic essence. Smolin writes:

"We don't know what a rock really is, or an atom, or an electron. We can only observe how they interact with other things and thereby describe their relational properties."

He continues:

"Perhaps everything has external and internal aspects. The external properties are those that science can capture and describe through interactions, in terms of relationships. The internal aspect is the intrinsic essence; it is the reality that is not expressible in the language of interactions and relations."

So does Smolin reject the eliminativism about intrinsic essences (as well as things) that, say, ontic structural realists embrace?

As just stated, Smolin's position is fairly close to the contemporary metaphysical position (as usually applied to physics) of ontic structural realism. On the ontic-structural-realist picture (to use Smolin's own words), “it doesn't make sense to talk about” things with their own determinate (or intrinsic) properties when these things “can only be distinguished” in terms of their structures and relations to other things (within spacetime). In simple terms, the “things” of ontic structural realism can only be distinguished in terms of their mathematical structures and relations. There literally isn't anything else.

This means that ontic structural realists have it that things are mere placeholders used to plot relations and structures together.

Interestingly enough, Smolin does deny intrinsic essence to mathematics and numbers. (Perhaps that's not surprising when one bears in mind the nature of maths.) So it's worthwhile seeing what Smolin says (in a note) about maths:

"Notice that relationships are exactly what mathematics express. Numbers have no intrinsic essence, nor do points in space; they are defined entirely by their place in a system of numbers or points - all of whose properties have to do with their relationships to other numbers or points."

This position is pure mathematical structuralism. And the implication is that structures don't give us the whole picture when it comes to things which aren't numbers/maths - such as rocks, atoms, consciousness and qualia. In fact Smolin's words can be rewritten thus:

Atoms, rocks, brains, etc. have intrinsic essences. These objects are not defined entirely by their place in a system of relations.
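Smolin's note on numbers can be given a toy rendering. The following sketch is my own illustration (a Peano-style encoding; nothing here comes from Smolin's text): a "number" is given no intrinsic content at all, and is exhausted by its place in the successor structure.

```python
# A Peano-style encoding in which a number has no intrinsic essence:
# ZERO is a bare marker, and every other number is defined solely by
# its relation ("successor of") to another number.
ZERO = ()

def succ(n):
    # "The successor of n" - a purely relational definition
    return (n,)

def to_int(n):
    # Recover the familiar numeral by counting successor relations
    count = 0
    while n != ZERO:
        n = n[0]
        count += 1
    return count

two = succ(succ(ZERO))
three = succ(two)
print(to_int(three))  # 3 - read off purely from the relational structure
```

Nothing about `three` is intrinsic here: strip away its position in the chain of successors and nothing remains. That is just Smolin's point about numbers and points in space.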

It's also worth noting that although it can be said that atoms and rocks "have intrinsic essences", that isn't true of consciousness. Smolin has previously said (more or less) that consciousness and qualia equal (or are) intrinsic essences.




Monday 2 September 2019

Computerised Robots and Sensory Experiences






The philosopher John Searle often makes the point that many believers in Artificial Intelligence (AI), computational cognitive science, etc. almost entirely ignore biology. Thus (to Searle) they become contemporary Cartesians in that they make a massive distinction between functions/computations/algorithms and what exactly it is that instantiates these things in human beings and other animals – i.e., brains and central nervous systems!

The other thing they play down (or even ignore) is physical embodiment and the myriad interactions with the environment biological creatures experience. So even if a computer becomes embodied in a machine/robot (with arms, legs, “sensory receptors”, etc.), it's still the computations/programme that are doing all the work.

There's more to being embodied than connecting artificial arms, legs, ears, eyes, etc. to a computer or central processing unit (which could be a human being controlling knobs and levers).

To use the Welsh science journalist (editor-in-chief of the New Scientist) Alun Anderson's words, attaching arms or fingers, ears, etc. to a computer/robot doesn't actually give rise to real “extended tactile experiences”. Basically, to have genuine minds, we need genuine tactile experiences.

In full, this is how Alun Anderson expressed one problem:

“If bodies and their interaction with brains and planning for action in the world are so central to human kinds of mind, where does that leave the chances of creating an intelligent disembodied mind inside a computer? Perhaps the Turing test will be harder than we think.”

The position above can be put simply. We don't simply have brains. What we have is organs with sensory receptors “sending messages” to brains. In terms of “tactile experiences”, those messages are more or less immediate. Not only that: they're very responsive to the multitude of extreme particularities of specific environmental or bodily encounters. And then organisms can, over various timescales, adapt or change in response to such specific environmental or bodily encounters.

Of course computers-in-robots can respond to the environment. Though how does that compare to what's just been described? We can accept that it compares to a degree – but to what degree?

As it is, it can be said that robots do already interact with their environments. So what's missing? Do these robots have genuine “tactile experiences”?

What brings about genuine tactile experiences? In the case of human beings and other animals, it's basically biology. Biology is obviously missing from robots. So is biology that important? What is it about human and animal biology that adds something to that link between the environment and the brain/mind that isn't achieved by a computer/person in - or running - a robot?

Anderson ties “extended tactile experiences” to the “understanding of language”. He must believe that computers don't truly understand language (a la John Searle). True, computers physically respond to language as input and then produce output. Though that may not be genuine understanding. (Yes: “What is genuine understanding?”) In Anderson's words, computers “cannot say anything meaningful” even when there are such things as computer languages. (Say, when computers are fed natural-language information and then produce some kind of natural-language output.) As stated, the missing link here, according to Anderson, is “extended tactile experiences”. Computers - and computers embedded in robots - don't have tactile experiences and therefore they don't truly understand languages.

This is a simple (or simply-put) picture. Is Alun Anderson correct?


Tuesday 27 August 2019

An Ontology of an Electron



It is said that the electron “has the properties” of mass, charge and spin. That quirk of grammar (i.e., “has the properties”) makes it seem like the following:

i) Firstly, we have an electron,
ii) and then we also have its properties.

More clearly, this could mean that we have an electron, and only then does it acquire properties. (Much in the same way in which a person would acquire the property of being sunburned or being happy.)

It’s more accurate to say that an electron is equal to its properties. Thus:

an electron = charge (−1) + mass (9.109389 × 10⁻³¹ kg) + spin

This is the position known as the “bundle theory” and it’s usually applied to “classical” objects such as persons, cats, rocks, etc. In fact it seems that the bundle theory is more applicable to electrons than it is to, say, trees or persons.
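The bundle-theoretic identification can be sketched in code. This is a toy illustration of my own (the class and field names are assumptions, not anything from the text): an "electron" is modelled as nothing over and above its bundle of properties, so its identity conditions are settled entirely by that bundle.

```python
from dataclasses import dataclass

# Bundle theory, toy version: an electron just is its property bundle.
# There is no further "bare particular" hiding underneath the fields.
@dataclass(frozen=True)
class Electron:
    charge: int = -1              # in units of elementary charge
    mass_kg: float = 9.109389e-31
    spin: float = 0.5             # in units of ħ

e1 = Electron()
e2 = Electron()
print(e1 == e2)  # True - sameness of bundle is all the sameness there is
```

On this picture, changing any field of the bundle would not give us a modified electron but something that is not an electron at all, which anticipates the Leibnizian point below.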


It’s partly because of all this that, in 1940, the physicist John Archibald Wheeler (1911–2008) advanced the strange theory that all electrons may be one. That is, all electrons are literally the same electron. So this isn’t the claim that all electrons have the same properties. It’s the claim that all electrons are the same electron.

Wheeler’s reasons for advancing this theory involve details which aren’t touched upon in this piece. In simple terms, though, different electrons are actually the same electron moving backward and forward in time. Consequently, if this theory were true, then electrons (in the plural) moving backwards in time are actually positrons (which are the antimatter component of electrons — hence anti-electrons).

Yet if there is only one electron, then that electron must be what philosophers call an individual (or particular). And that possibility moves us on to the 17th-century philosopher Leibniz.

Gottfried Leibniz

Gottfried Wilhelm (von) Leibniz argued that all the properties of an object are essential to that object. (There are, of course, arguments against this position.) So surely this claim is truer of an electron than it is of, say, a tree or a person. After all:

i) If an electron were to “lose” its properties of mass, charge and spin,
ii) then it wouldn’t be an electron at all.

Or to say the same thing with a little more detail:

i) If an electron didn’t have a charge of −1, a mass of 9.109389 × 10⁻³¹ kg and spin,
ii) then it wouldn’t be an electron.

After all, these three properties are equally essential to an electron's being an electron. Indeed even if it lost just one property (say spin), then it would no longer be an electron.

The problem with Leibniz’s position is that it’s only applicable to individuals (or particulars). This is how Leibniz expressed his position:

“The nature of an individual substance or of a complete being is to have a notion so complete that it is sufficient to contain and to allow us to deduce from it all the predicates of the subject to which this notion is attributed.”

That is to say that each “individual substance” has a complete “individual concept”. That complete individual concept contains all the properties of that subject. Or, in a more up-to-date jargon, all the “predicates” which are true of x are also essential to x.

The problem here is that electrons may not be individuals. That is, every electron is identical (though not numerically identical) to every other electron — save for its spatial and locational properties. (This seems to go against Wheeler’s position mentioned a moment ago.) So, on a Leibnizian reading, every electron has the same essence. Therefore every electron can’t be an individual. Boris Johnson, on the other hand, is an individual. That’s because he doesn’t share all his properties with every other person. That is, Boris Johnson’s “individual essence” (or haecceity) is not identical to, say, Swiss Tony’s individual essence.

There is another Leibnizian issue here.

According to the Principle of the Identity of Indiscernibles (PII), no two substances can be qualitatively identical yet still be (numerically) different objects. As you can see, this doesn’t work for electrons. And that’s another reason why they can’t be classed as individuals. That’s unless, as already mentioned, relational (or extrinsic) properties are included. Now, clearly, no two electrons can have the same relational (or extrinsic) properties. So, on a Leibnizian reading, how are we to treat the properties of location, etc. when it comes to electrons? Are they “pseudo-properties” which can simply be ignored? Leibniz himself believed that spatial and temporal properties are indeed genuine properties of the individual itself.
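The PII worry can be made vivid with a small sketch (again my own toy construction, not Leibniz's or anyone else's formalism): two electrons agree on every intrinsic property, so only extrinsic properties such as location are left to distinguish them.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Intrinsic:
    charge: int
    mass_kg: float
    spin: float

@dataclass(frozen=True)
class SituatedElectron:
    props: Intrinsic
    position: tuple  # extrinsic/relational - not part of the essence

bundle = Intrinsic(charge=-1, mass_kg=9.109389e-31, spin=0.5)
a = SituatedElectron(bundle, position=(0.0, 0.0, 0.0))
b = SituatedElectron(bundle, position=(1.0, 0.0, 0.0))

print(a.props == b.props)  # True  - intrinsically indiscernible
print(a == b)              # False - distinguished only by location
```

If PII is restricted to intrinsic properties, `a` and `b` should be one object; only by letting the extrinsic `position` field count as a genuine property do we get two.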

Finally, it’s partly because many laypersons (if they think about these things at all) still see electrons as not being equal (or identical) to their charge, mass, spin, etc. that they see them as also being Newtonian “hard” particles.

Electrons as Package-Deals

In quantum mechanics — and physics generally — we have the notion of a field.

In the specific case of electrons, fields and electrons are intimately connected. Indeed they’re so strongly connected that a distinction between the two hardly seems warranted.

Firstly, there’s the problem of distinguishing electrons — and other particles — from the states they “belong” to. Thus, in an example given by the philosophers James Ladyman and Don Ross (in their book Every Thing Must Go: Metaphysics Naturalised), we can interpret a given field/electron state in two ways:

i) As a two-particle state.
ii) As a single state in which “two particles [are] interchanged”.

Since it’s difficult to decipher whether it’s a two-particle state or a single state in which two particles are interchanged, Ladyman and Ross adopt the “alternative metaphysical picture” which “abandons the idea that quantum particles are individuals”. Thus all we have are states. That means that positing “individuals plus states that are forever inaccessible to them” is deemed “ontologically profligate” by Ladyman and Ross.
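Ladyman and Ross's point about interchanged particles can be illustrated with a toy amplitude table (my own sketch, not their formalism): once a two-particle state is symmetrised, swapping the particle labels returns exactly the same state, so the labels individuate nothing.

```python
import math

# Amplitudes for "particle 1 in mode a, particle 2 in mode b"
psi = {("up", "down"): 1.0, ("down", "up"): 0.0}

# Bosonic symmetrisation: distribute each amplitude equally over the
# label-swapped pairs, so the state no longer tracks which particle is which.
norm = 1 / math.sqrt(2)
sym = {}
for (a, b), amp in psi.items():
    for key in {(a, b), (b, a)}:
        sym[key] = sym.get(key, 0.0) + norm * amp

# Swapping the labels is now undetectable: it is literally the same state.
swapped = {(b, a): amp for (a, b), amp in sym.items()}
print(swapped == sym)  # True - "two particles interchanged" is one state
```

The two readings in i) and ii) collapse into one mathematical object, which is why positing label-bearing individuals underneath the state looks profligate.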

Ladyman and Ross then back up the idea that states are more important than individuals (or, what’s more, that there are no individuals) by referring to David Bohm’s theory. In that theory we have the following:

“The dynamics of the theory are such that the properties, like mass, charge, and so on, normally associated with particles are in fact inherent in the quantum field and not in the particles.”

In other words, mass, charge, etc. are properties of states, not of individual electrons. However, doesn’t this position (or reality) have the consequence that a field takes over the role of an electron (or of a collection of particles) in any metaphysics of the quantum world? Thus does this also mean that everything that’s said about electrons can now be said about fields?

On Bohm’s picture (if not Ladyman and Ross’s), “[i]t seems that the particles only have position”. Yes; surely it must be an electron (not a field) which has a position. Indeed electrons also have trajectories, which account for their different positions.

To Bohm (at least according to Ladyman and Ross), “trajectories are enough to individuate particles”. Prima facie, it may seem strange that trajectories can individuate. Unless that means that each type of particle has a specific type of trajectory. Thus the type of trajectory tells you the type of particle involved in it.

Nonetheless, Ladyman and Ross spot a problem with Bohm’s theory. That problem is summed up in this way:

If all we have is trajectory, then why not dispense with electrons (as individuals at least) altogether?

This is how Ladyman and Ross explain their stance on Bohm’s theory:

“We may be happy that trajectories are enough to individuate particles in Bohm theory, but what will distinguish an ‘empty’ trajectory from an ‘occupied’ one?”

Here again Ladyman and Ross are basically saying that if all we’ve got are trajectories, then let’s stick with them and eliminate electrons (as individuals) altogether.

Ladyman and Ross go into more detail on this by saying that

“[s]ince none of the physical properties ascribed to the particle will actually inhere in points of the trajectory, giving content to the claim that there is actually a ‘particle’ there would seem to require some notion of the raw stuff of the particle; in other words haecceities seem to be needed for the individuality of particles of Bohm theory too”.

If Ladyman and Ross’s physics is correct, then what they say makes sense. Positing electrons seems to run afoul of Occam’s razor. That is, Bohm is filling the universe’s already-existing (to mix two metaphors) ontological slums with yet more (superfluous?) entities.

One way of interpreting all this is by citing two different positions. Thus:

i) The positing of electrons as individuals which exist in and of themselves.
ii) The positing of electrons as part of a package-deal which includes fields, states, trajectories, etc. (In other words, there’s no reason to get rid of electrons completely.)

Then there’s Ladyman and Ross’s position.

iii) If there are never any electrons in “splendid isolation” (apart from fields, etc.), then why see electrons as being individuals in the first place?

Ladyman and Ross are a little more precise as to why they endorse iii) above. They make the metaphysical point that “haecceities seem to be needed for the individuality of particles of Bohm’s theory too”. In other words, in order for electrons to exist as individuals (as well as to be taken as existing as individuals), they’ll require “individual essences” in order to be individuated. However, if the nature of an electron necessarily involves fields, states, other particles, trajectories, etc., then it’s very hard (or even impossible) to make sense of the idea that an electron could have an individual — or indeed any — essence.

In simple terms, a specific electron — and any other particle — is part of a package-deal. Electrons simply can’t be individuated without reference to external, extrinsic or relational factors. Thus electrons simply aren’t individuals at all.


Monday 26 August 2019

Artificial Life and the Ultra-Functionalism of Christopher Langton


Word-count: 1,904

Contents:
i) Introduction
ii) John von Neumann
iii) The Creation of Artificial Life

The American computer scientist Christopher Gale Langton was born in 1948. He was a founder of the field of artificial life. He coined the term “artificial life” in the late 1980s.

Langton joined the Santa Fe Institute in its early days. He left the institute in the late 1990s. Langton then gave up his work on artificial life and stopped publishing his research.

*************************************

When it came to Artificial Life (AL), Christopher G. Langton didn't hold back. In the following passage he puts the exciting case for AL:

“It's going to be hard for people to accept the idea that machines can be alive as people, and that there's nothing special about our life that's not achievable by any other kind of stuff out there, if that stuff is put together in the right way. It's going to be as hard for people to accept that as it was for Galileo's contemporaries to accept the fact that Earth was not at the center of the universe.”

The important and relevant part of the passage above is:

“[T]here's nothing special about our life that's not achievable by any other kind of stuff out there...”

Although the above isn't a definition of functionalism, it nonetheless has obvious and important functionalist implications.

So when it comes to both Artificial Life and Artificial Intelligence, the computer scientist Christopher Langton seems to have explicitly stated that biology doesn't matter. Yes; it of course matters to biological life and biological intelligence; though not to life and intelligence generically interpreted.

The biologist and cognitive scientist Francisco Varela put the opposite position (as it were) to Langton's when he told us that he “disagree[s] with [Langton's] reading of artificial life as being functionalist”. Varela continues:

“By this I refer to his idea that the pattern is the thing. In contrast, there's the kind of biology in which there's an irreducible side to the situatedness of the organism and its history...”

What's more:

“Functionalism was a great tradition in artificial intelligence; it's what early AI was all about.”

So we have specific biologies. Those specific biologies are situated in specific environments. And then we must also consider the specific histories of those specific biological organisms. So, if “early AI” was “all about” functions and nothing else, then that was surely to leave out a lot. (From a philosophical angle, we must also include externalist arguments, as well as embodiment and embeddedness – i.e., not only Varela's “situatedness”.)

The physicist J. Doyne Farmer also attempted to sum up the problematic stance which Langton held. He writes:

“The demonstration of a purely logical system, existing only in an abstract mathematical world, is the goal that [Christopher Langton] and others are working towards.”

Yet we mustn't characterise Langton's position as mere chauvinism against biology and even against biological evolution. After all, despite what Varela says about situatedness, Langton was fully aware that

“[a]nything that existed in nature had to behave in the context of a zillion other things out there behaving and interacting with”.

Langton also appeared to criticise (early?) AI for “effectively ignor[ing] the architecture of the brain”. That criticism has bite because Langton went on to say that he “think[s] the difference in architecture is crucial”. Nonetheless, the sophistication of this view is that just as functions and algorithms can be instantiated/realised in many materials, so too can different architectures.

The aspect of the brain's architecture that specifically interested Langton is that it is “dynamical” and also involves “parallel distributed systems” (which are “nonlinear”). Indeed he appears to have complimented what he calls “nature” for “tak[ing] advantage of” such things. And, by “nature”, Langton surely must have meant “biology”. (Though there are dynamical and non-linear natural systems which aren't biological.)

So the early AI workers ignored the brain's architecture; whereas Langton appeared to be arguing that artificial architectures (alongside functions and algorithms) must also be created. This, then, may be a mid-way position between the purely “abstract mathematical world” of early AI and the blind simulation of biological brains and organisms.

Having said all the above, Langton shifted his middle ground again when he said that Artificial Life

“isn't the same thing as computational biology, which primarily restricts itself to computational problems arising in the attempt to analyse biological data, such as algorithms for matching protein sequences to gene sequences”.

Langton continued by saying that

“[a]rtificial life reaches far beyond computational biology. For example, AL investigates evolution by studying evolving populations of computer programs – entities that aren't even attempting to be anything like 'natural' organisms”.

So Langton believed that AL theorists shouldn't “restrict[]” themselves to “biological data”, despite his earlier comments about noting the architecture of the biological brain (specifically, its parallel distributed processes, etc.). Yet again, Langton appears either to be standing in a mid-way position or, less likely, to be contradicting himself on the precise relation between Artificial Life and biology.
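Langton's “evolving populations of computer programs” can be made concrete with a toy sketch. What follows is a minimal, hypothetical illustration of the generic idea – a population of bit-string “genomes” evolved by mutation and selection – and not Langton's (or anyone's) actual system; the target pattern and parameters are arbitrary assumptions.

```python
import random

# A toy "population of programs": each genome is a list of bits, and its
# "fitness" is simply how many bits match an arbitrary target pattern.
# A hypothetical sketch of evolution-in-a-computer, not any real AL system.

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def fitness(genome):
    """Count how many bits agree with the target pattern."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    """Flip each bit independently with the given probability."""
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=20, generations=100):
    """Selection plus mutation: keep the fitter half, refill with mutants."""
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(population, key=fitness)

best = evolve()
```

The point of the sketch is Langton's: nothing in the loop above mentions carbon, cells, or proteins – only variation, selection, and inheritance.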

John von Neumann

Langton cited the case of John von Neumann, who, some 50 years before Langton's own work, also attempted to create artificial life. Von Neumann's fundamental idea (at least according to Langton) was that “we could learn a lot even if we didn't try to model some specific existing thing”.

Now it can be said that when theorists and technologists create life (or attempt to create life), they're only creating a replication/simulation of biological life. Von Neumann wanted to go further than this... and so too did Langton.

To sum up the opposition in clear and simple words, J. Doyne Farmer says that

“Von Neumann's automaton has some of the properties of a living system, but it is still not alive”.

So if von Neumann wasn't concerned with the specific biologies of specific creatures, then what was he concerned with? According to Langton again:

Von Neumann went after the logical basis, rather than the material basis, of a biological process.”

Even though it was said (a moment ago) that von Neumann and Langton weren't interested in replication, they still, nonetheless, studied “biological processes”. And functionalists are keen to say that the “material basis” simply doesn't matter. Yet if biological processes are still studied, then perhaps the philosopher Patricia Churchland's warnings to functionalists (i.e., about brain and mind) may not always be completely apt. After all, she writes:

[T]he less known about the actual pumps and pulleys of the embodiment of mental life [by functionalists], the better, for the less there is to clutter up one's functionally oriented research.”

Indeed that position can be seen as the very essence of most (or all) functionalist positions. It's shown most technically and graphically in the “multiple realizability” argument, in which it's said that function x can have any number of material bases and still function as function x. (The multiple-realizability argument is found most often in the philosophy of mind.)
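The multiple-realizability point has a familiar computational analogue, sketched below. The two implementations are arbitrary examples of my own (any pair of algorithms computing the same function would do): the functional role – the input-output behaviour – is identical, while the underlying “realisation” differs entirely.

```python
# Two different "realisations" of one and the same function: sorting.
# The function (what is computed) is multiply realized; the "material
# basis" (how it is computed) differs completely between the two.

def bubble_sort(items):
    """Realisation 1: repeated adjacent swaps."""
    result = list(items)
    for i in range(len(result)):
        for j in range(len(result) - 1 - i):
            if result[j] > result[j + 1]:
                result[j], result[j + 1] = result[j + 1], result[j]
    return result

def merge_sort(items):
    """Realisation 2: recursive divide-and-merge."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

data = [5, 3, 8, 1]
# Same input, same output: the function is preserved across realisations.
assert bubble_sort(data) == merge_sort(data) == [1, 3, 5, 8]
```

A functionalist in Churchland's target group would say that nothing about sorting-as-a-function requires us to know which of the two “pumps and pulleys” is doing the work.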

Von Neumann provided a specific example of his search for the logical bases of biological processes. Not surprisingly, since he was concerned with artificial life, he

“attempt[ed] to abstract the logic of self-reproduction without trying to capture the mechanics of self-reproduction (which were not known in the late 1940s, when he started his investigations)”.

Prima facie, it's difficult to have any intuitive idea of how the word “logic” (or “logical”) is being used here. Isn't the logic of self-reproduction... well, self-reproduction? After all, without the “mechanics”, what else have we got?

It seemed, then, that the logic of self-reproduction (as well as self-replication, etc.) could be captured by an algorithm. In this case, “one could have a machine, in the sense of an algorithm, that would reproduce itself”. (Is the machine something that carries out the algorithm or is it actually the algorithm itself?) In more detail, the logic of the biological process of self-reproduction is captured in terms of genes and what the genes do. Thus genetic information has to do the following:

(1) it had to be interpreted as instructions for constructing itself or its offspring, and
(2) it had to be copied passively, without being interpreted.

Now this is von Neumann's logic of self-reproduction – and no (technical) biological knowledge was required to decipher those two very simple points. And, by “no biological knowledge”, I mean no knowledge of how information is stored in DNA. (That came later – in 1953.) Langton concluded something very fundamental from this. He wrote:

It was a far-reaching and very prescient thing to realise that one could learn something about 'real biology' by studying something that was not real biology – by trying to get at the underlying 'bio-logic' of life.”

As mentioned earlier, it may be the case that Langton overstated his case here. After all, even he said that “biological processes” are studied – and indeed that's obviously the case. So we may have the “logical basis” of biological processes; though, evidently, biological processes aren't completely ignored. To put all that in a question:

Did von Neumann ever discover that this logic was instantiated/realised in any non-biological processes?
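Von Neumann's two roles for the genetic description can themselves be sketched in a few lines of code. The following is a toy illustration of the dual use of the “tape” – interpreted once as construction instructions, copied once without interpretation – and emphatically not von Neumann's actual 29-state cellular automaton; the “parts” and helper names are invented for the example.

```python
# Toy sketch of von Neumann's dual use of a genetic "tape". The tape is a
# list of build instructions, and reproduction uses it in two distinct ways:
#   (1) interpreted -- read as instructions to construct the offspring's body;
#   (2) copied passively -- duplicated verbatim, without interpretation.
# A hypothetical illustration, not von Neumann's 29-state automaton.

def construct(tape):
    """Interpret the tape: each instruction adds a 'part' to the body."""
    return ["part:" + instruction for instruction in tape]

def reproduce(organism):
    body, tape = organism
    child_body = construct(tape)   # role (1): tape interpreted
    child_tape = list(tape)        # role (2): tape copied, uninterpreted
    return (child_body, child_tape)

tape = ["arm", "sensor", "motor"]
parent = (construct(tape), tape)
child = reproduce(parent)
grandchild = reproduce(child)

# Each generation is structurally identical to its parent.
assert child == parent
assert grandchild == child
```

The sketch makes the logical point visible: if the tape were only interpreted, the offspring would have a body but no tape of its own; if it were only copied, nothing would ever get built.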

Earlier Francisco Varela was quoted citing the importance of “situatedness” and “history”. These two factors are obliquely mentioned by J. Doyne Farmer in the specific case of organisms and reproduction. He says:

Real organisms do more than just reproduce themselves; they also repair themselves. Real organisms survive in the noisy environment of the real world. Real organisms were not set in place, fully formed, hand-engineered down to the smallest detail, by a conscious God; they arose spontaneously through a process of self-organisation.”

At first sight, it seems odd that when von Neumann attempted to create artificial life and artificial evolution (or at least to simulate them), he seemed to have ignored “real organisms” and their surviving “in the noisy environment of the real world”. That is, von Neumann's cellular automata were indeed “hand-engineered down to the smallest detail” and then “set in place”. In other words, von Neumann was the god of his own cellular automata. So no wonder Farmer sees such things in the exclusive terms of an “abstract mathematical world”.

The Creation of Artificial Life

On the one hand, there's the study of the “logic” of biological processes. On the other hand, there's the actual creation of artificial life.

The first step is to realise that logic in non-biological material. Will that automatically bring forth artificial life? Langton believed that it does (not will) – at least in some cases. That is, the simulation of life is actually the realisation (or instantiation) of life. Yet, according to Langton himself, “[m]any biologists wouldn't agree” with all this. They argue that “we're only simulating evolution”. However, Langton had an extremely radical position on this simulation-realisation binary opposition. He wrote:

[W]hat's the difference between the process of evolution in a computer and the process of evolution outside the computer?”

Then Langton explained why there's no fundamental or relevant difference. He continued:

“The entities that [are] being evolved [inside the computer] are made of different stuff, but the process is identical.”

So, again, it's the process (or the logic of the process) that's important, not the nature of the “stuff” that realises that (abstracted) process. Thus process (or function) is everything. Conversely, the material (or stuff) is irrelevant.

****************************