Monday, 26 August 2019

Artificial Life and the Ultra-Functionalism of Christopher Langton

Word-count: 1,904

i) Introduction
ii) John von Neumann
iii) The Creation of Artificial Life

The American computer scientist Christopher Gale Langton (born 1948) was a founder of the field of artificial life, and he coined the term “artificial life” itself in the late 1980s.

Langton joined the Santa Fe Institute in its early days and left in the late 1990s. He then gave up his work on artificial life and stopped publishing his research.


When it came to Artificial Life (AL), Christopher G. Langton didn't hold back. In the following passage he puts the exciting case for AL:

It's going to be hard for people to accept the idea that machines can be as alive as people, and that there's nothing special about our life that's not achievable by any other kind of stuff out there, if that stuff is put together in the right way. It's going to be as hard for people to accept that as it was for Galileo's contemporaries to accept the fact that Earth was not at the center of the universe.”

The important and relevant part of the passage above is:

[T]here's nothing special about our life that's not achievable by any other kind of stuff out there...”

Although the above isn't a definition of functionalism, it nonetheless has obvious and important functionalist implications.

So when it comes to both Artificial Life and Artificial Intelligence, the computer scientist Christopher Langton seems to have held that biology doesn't matter. Yes: it of course matters to biological life and biological intelligence; though not to life and intelligence generically interpreted.

The biologist and cognitive scientist Francisco Varela put the opposite position (as it were) to Langton's when he told us that he “disagree[s] with [Langton's] reading of artificial life as being functionalist”. Varela continues:

By this I refer to his idea that the pattern is the thing. In contrast, there's the kind of biology in which there's an irreducible side to the situatedness of the organism and its history...”

What's more:

Functionalism was a great tradition in artificial intelligence; it's what early AI was all about.”

So we have specific biologies. Those specific biologies are situated in specific environments. And then we must also consider the specific histories of those specific biological organisms. So, if “early AI” was “all about” functions and nothing else, then it surely left out a lot. (From a philosophical angle, we must also include externalist arguments, as well as embodiment and embeddedness – i.e., not only Varela's “situatedness”.)

The physicist J. Doyne Farmer also attempted to sum up the problematic stance which Langton held. He writes:

The demonstration of a purely logical system, existing only in an abstract mathematical world, is the goal that [Christopher Langton] and others are working towards.”

Yet we mustn't characterise Langton's position as mere chauvinism against biology or even against biological evolution. After all, despite what Varela says about situatedness, Langton was fully aware that

[a]nything that existed in nature had to behave in the context of a zillion other things out there behaving and interacting with”.

Langton also appeared to criticise (early?) AI for “effectively ignor[ing] the architecture of the brain”. That's a mistake, according to Langton, because he went on to say that he “think[s] the difference in architecture is crucial”. Nonetheless, the sophistication of this view is that, just as functions and algorithms can be instantiated/realised in many materials, so too can different architectures.

The aspect of the brain's architecture that specifically interested Langton is that it is “dynamical” and also involves “parallel distributed systems” (which are “nonlinear”). Indeed he appears to have complimented what he calls “nature” for “tak[ing] advantage of” such things. And, by “nature”, Langton surely must have meant “biology”. (Though there are dynamical and non-linear natural systems which aren't biological.)

So the early AI workers ignored the brain's architecture; whereas Langton appeared to argue that artificial architectures (alongside functions and algorithms) must also be created. This, then, may be a mid-way position between the purely “abstract mathematical world” of early AI and the blind simulation of biological brains and organisms.

Having said all the above, Langton shifts his middle-ground position again when he says that Artificial Life

isn't the same thing as computational biology, which primarily restricts itself to computational problems arising in the attempt to analyse biological data, such as algorithms for matching protein sequences to gene sequences”.

Langton continues by saying that

[a]rtificial life reaches far beyond computational biology. For example, AL investigates evolution by studying evolving populations of computer programs – entities that aren't even attempting to be anything like 'natural' organisms”.

So Langton believes that AL theorists shouldn't “restrict[]” themselves to “biological data”, despite his earlier comments about noting the architecture of the biological brain (specifically, its parallel distributed processes, etc.). Yet again, Langton appears either to be standing in a mid-way position or, less likely, to be contradicting himself on the precise relation between Artificial Life and biology.

John von Neumann

Langton cited the case of John von Neumann, who, some four decades before Langton's own work, also attempted to create artificial life. Von Neumann's fundamental idea (at least according to Langton) is that “we could learn a lot even if we didn't try to model some specific existing thing”.

Now it can be said that when theorists and technologists create life (or attempt to create life), then they're only creating a replication/simulation of biological life. Von Neumann wanted to go further than this... and so too did Langton.

To sum up the opposition in clear and simple words, J. Doyne Farmer says that

Von Neumann's automaton has some of the properties of a living system, but it is still not alive”.

So if von Neumann wasn't concerned with the specific biologies of specific creatures, then what was he concerned with? According to Langton again:

Von Neumann went after the logical basis, rather than the material basis, of a biological process.”

Even though it was said (a moment ago) that von Neumann and Langton weren't simply interested in replication, they still, nonetheless, studied “biological processes”. And functionalists are keen to say that the “material basis” simply doesn't matter. Yet if biological processes are still studied, then perhaps the philosopher Patricia Churchland's warnings to functionalists may not always be completely apt (i.e., about brain and mind). After all, she writes:

[T]he less known about the actual pumps and pulleys of the embodiment of mental life [by functionalists], the better, for the less there is to clutter up one's functionally oriented research.”

Indeed that position can be seen as the very essence of most (or all) functionalist positions. It's most technically and graphically shown in the “multiple realizability” argument in which it is said that function x can have any number of material bases and still function as function x. (The multiple realizability argument is found most often in the philosophy of mind.)
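The multiple realizability argument can be made concrete with a toy sketch (my own illustration, in Python – not anything from Langton or the philosophy-of-mind literature). The “function” here is sorting; the two “material bases” are two entirely unrelated algorithms:

```python
# Toy illustration of multiple realizability: one and the same function
# (sorting a list of integers) realised by two entirely different
# "mechanisms". For the functionalist, the shared input-output mapping
# is what matters; everything "inside" differs.

def bubble_sort(xs):
    """Realisation 1: repeated pairwise comparisons and swaps."""
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def counting_sort(xs):
    """Realisation 2: tallying occurrences - no comparisons at all."""
    counts = {}
    for x in xs:
        counts[x] = counts.get(x, 0) + 1
    return [v for v in range(min(xs), max(xs) + 1)
            for _ in range(counts.get(v, 0))]

data = [3, 1, 2, 3, 0]
assert bubble_sort(data) == counting_sort(data) == [0, 1, 2, 3, 3]
```

From the functionalist standpoint, the two realisations count as the same function because they share an input–output profile – which is the analogue of saying that function x keeps functioning as function x across different material bases.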

Von Neumann provided a specific example of his search for the logical bases of biological processes. Not surprisingly, since he was concerned with artificial life, he

attempt[ed] to abstract the logic of self-reproduction without trying to capture the mechanics of self-reproduction (which were not known in the late 1940s, when he started his investigations)”.

Prima facie, it's difficult to have any intuitive idea of how the word “logic” (or “logical”) is being used here. Isn't the logic of self-reproduction... well, self-reproduction? After all, without the “mechanics”, what else have we got?

It seemed, then, that the logic of self-reproduction (as well as self-replication, etc.) could be captured by an algorithm. In this case, “one could have a machine, in the sense of an algorithm, that would reproduce itself”. (Is the machine something that carries out the algorithm, or is it actually the algorithm itself?) In more detail, the logic of the biological process of self-reproduction is captured in terms of genes and what the genes do. Thus genetic information had to do the following:

(1) it had to be interpreted as instructions for constructing itself or its offspring, and
(2) it had to be copied passively, without being interpreted.
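Those two requirements can be loosely illustrated with a quine – a program that constructs its own source. This is only a toy sketch in Python, emphatically not von Neumann's actual cellular-automaton construction:

```python
# A toy sketch (a Python quine) of von Neumann's two requirements.
# The string s plays the role of the genome:
#   (1) it is interpreted - used as a template for constructing the
#       offspring program;
#   (2) an uninterpreted, passive copy of s is pasted into that
#       offspring via repr().
s = 's = %r\noffspring = s %% s'
offspring = s % s

# The offspring is itself a complete program which, when run,
# rebuilds an identical offspring of its own:
namespace = {}
exec(offspring, namespace)
assert namespace['offspring'] == offspring
```

The point of the sketch is only that the very same string is used in two ways – once read as instructions, once copied blindly – which is exactly the dual role von Neumann assigned to genetic information.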

Now this is von Neumann's logic of self-reproduction – and no (technical) biological knowledge was required to decipher those two very simple points. And, by “no biological knowledge”, I mean no knowledge of how information is stored in DNA. (That came later, in 1953.) Langton concluded something very fundamental from this. He wrote:

It was a far-reaching and very prescient thing to realise that one could learn something about 'real biology' by studying something that was not real biology – by trying to get at the underlying 'bio-logic' of life.”

As mentioned earlier, it may be the case that Langton over-stated his case here. After all, even he said that “biological processes” are studied – and indeed that's obviously the case. So we may have the “logical basis” of biological processes; though, evidently, biological processes aren't completely ignored. To put all that in a question:

Did von Neumann ever discover that this logic was instantiated/realised in any non-biological processes?

Earlier Francisco Varela was quoted citing the importance of “situatedness” and “history”. These two factors are obliquely mentioned by J. Doyne Farmer in the specific case of organisms and reproduction. He says:

Real organisms do more than just reproduce themselves; they also repair themselves. Real organisms survive in the noisy environment of the real world. Real organisms were not set in place, fully formed, hand-engineered down to the smallest detail, by a conscious God; they arose spontaneously through a process of self-organisation.”

At first sight, it seems odd that when von Neumann attempted to create artificial life and artificial evolution (or at least to simulate them), he seemed to have ignored “real organisms” and their surviving “in the noisy environment of the real world”. That is, von Neumann's cellular automata were indeed “hand-engineered down to the smallest detail” and then “set in place”. In other words, von Neumann was the god of his own cellular automata. So no wonder Farmer sees such things in the exclusive terms of an “abstract mathematical world”.

The Creation of Artificial Life

On the one hand, there's the study of the “logic” of biological processes. On the other hand, there's the actual creation of artificial life.

The first step is to realise that logic in non-biological material. Will that automatically bring forth artificial life? Langton believed that it does (not merely that it will) – at least in some cases. That is, the simulation of life is actually the realisation (or instantiation) of life. Yet, according to Langton himself, “[m]any biologists wouldn't agree” with all this. They argue that “we're only simulating evolution”. However, Langton had an extremely radical position on this simulation–realisation binary opposition. He wrote:

[W]hat's the difference between the process of evolution in a computer and the process of evolution outside the computer?”

Then Langton explained why there's no fundamental or relevant difference. He continued:

The entities that are being evolved [inside the computer] are made of different stuff, but the process is identical.”

So, again, it's the process (or the logic of the process) that's important, not the nature of the “stuff” that realises that (abstracted) process. Thus process (or function) is everything. Conversely, the material (or stuff) is irrelevant.
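Langton's point – that the process, not the stuff, is what matters – can itself be given a minimal sketch. The following is my own illustration (not Langton's code): bit-string “organisms” are copied with heritable variation and then selected, which is the same schema (copy, mutate, select) that biological evolution runs in carbon:

```python
import random

# A minimal sketch of "the process of evolution in a computer":
# heritable variation plus selection acting on bit-string organisms.
# The substrate is bits rather than biochemistry; the process -
# copy, mutate, select - has the same shape.

random.seed(0)  # fixed seed so the run is reproducible

def fitness(genome):
    # The "environment" here simply rewards 1-bits.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Each bit flips with a small probability: heritable variation.
    return [1 - g if random.random() < rate else g for g in genome]

# A random starting population: 30 organisms, 20 bits each.
pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(60):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                      # selection
    pop = [mutate(random.choice(survivors))   # heredity + variation
           for _ in range(30)]

best = max(pop, key=fitness)
```

After a few dozen generations the population climbs towards the all-ones genome – not because anyone engineered the winners, but because the copy–mutate–select loop did its substrate-neutral work.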

