This is a follow-up to my essay, ‘Do AI Entities Display Free Will? The Arguments Against Can Be Used Against Humans Too’. The notion of free will is tied, both strongly and weakly, to the nature vs nurture debate, which, in turn, is tied to the debate about what it is to be a person. The spectre of determinism is, as ever, hovering above all these debates too. Thus, as with my first essay, much of what is said about AI entities with respect to determinism can also be said about human persons.

In the following essay, I’m not going to kid readers that the two cases of AI entities and human persons are exactly the same. For example, when it comes to human persons, the balance between nature and nurture is very messy, unpredictable, and probabilistic. Moreover, the environments of AI entities are controlled and delimited… Yet this is often true of human persons too.
(The nature-nurture ratio is rarely 50/50 for the individual. However, oddly enough, on average and across the studied populations, it does come to that ratio.)
The biggest difference is, of course, that human persons are biological entities, and AI entities aren’t. How much does that matter? That may depend on the precise issue, question and philosophical concern.
As with human persons, with AI entities we have an interplay of nature and nurture.
Human persons adapt. So too do AI entities. And both kinds of entity are constrained by their architecture — biological or otherwise.
Again, the nature vs nurture debate usually refers to biological human beings. AI entities aren’t biological human beings. Does that matter? Not if the reader is a functionalist of some kind. After all, it’s at least possible that an AI entity could be identical to a human person in nearly all respects… save its material makeup...
And what of material makeup? This is the American philosopher William G. Lycan offering his own colourful case:
“[There are] two differences between Harry and ourselves: his *origin* (a laboratory is not a proper mother), and the *chemical composition of his anatomy*, if his creator has used silicon instead of carbon, for example. To exclude him from our community for either or both of *those* reasons seems to me to be a clear case of racial or ethnic prejudice (literally) and nothing more. I see no obvious way in which either a creature’s origin or its subneuroanatomical chemical composition should matter to its psychological processes or any aspect of its mentality.”
In more detail. AI entities don’t have genes. Does that rule them out from having the “nature” side of the nature-nurture binary? Not automatically. It would do so only if we read the word “nature” as referring exclusively to biological entities. If we tie the word “nature” to the word “natural”, then AI entities aren’t natural either. But then we could go back to a functionalist take on these matters.
We could expand this biocentrism and argue that culture and environment (as in the “nurture” part of the binary) aren’t applicable to AI entities either… Yes, you guessed it, the functionalist take applies here too. So take this seemingly extreme exposition of functionalism as offered up by the American philosopher Hilary Putnam in 1975:
“[S]uppose that the souls in the soul world are functionally isomorphic to the brains in the brain world. Is there any more sense to attaching importance to this difference than to the difference between copper wires and some other wires in the computer? Does it matter that the soul people have, so to speak, immaterial brains, and that the brain people have material souls? What matters is the common structure, the theory T, of which we are, alas, in deep ignorance, and not hardware, be it ever so ethereal.”
(Elsewhere, Putnam wrote: “We could be made of Swiss cheese and it wouldn’t matter.”)
Here, Putnam stressed functions, whereas Lycan earlier focussed on material constitution. Both philosophers basically came at the same issue from two different angles.
Of course, some philosophers and scientists have argued that biology does matter. They also stress that it’s not all about functions. (Examples of such philosophers and scientists include John Searle, Roger Penrose and Patricia Churchland.)
AI Entities and Their Nature
It can be argued, perhaps only analogically, that AI entities have a nature. In other words, they are created with an underlying architecture, an initial programming and a set of algorithms. All this can be seen as the “genetic code” of AI entities. It’s hardwired or hardcoded into them.
Take chatbots. Many are powered by large language models (LLMs) built on transformer architectures, which form the basis of their “training”. It can be said that such things determine their outputs.
Human persons, on the other hand, have brains, genes, physiologies, etc. Such things are the biological and material basis of the training of human persons. They determine, if not fully, the outputs (or actions) of human persons. Indeed, without brains, genes, etc. there would be no outputs, just as with some AI entities there’d be no outputs without transformer architectures, etc.
So, when it comes to LLMs, we first have the hardware and architecture, and then we have the training data, the continuous interactions with designers and users, and various other fine-tuning processes. Thus, despite the given architecture, AI entities still engage with designers and other human persons.
To sum up. The nurture of AI entities is dependent on the structure the designers give them, just as the nurture of human persons is dependent on biological structure. AI entities are also nurtured by dynamic inputs which don’t change the structure, and which are always dependent on — and partly determined by — it. This is true of human persons too. (Simply factor in biology.)
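The point above can be put in code. The following is a toy sketch only, not a real LLM: the function `respond` and its rule are invented here to stand in for a fixed architecture, while the growing conversation history stands in for the dynamic, “nurturing” input that never alters the underlying structure.

```python
# Toy sketch: a fixed "architecture" (the rule inside the function)
# produces different outputs as its dynamic input (the conversation
# history) grows, yet the rule itself never changes.
def respond(history: list[str], message: str) -> str:
    # The structure is fixed at design time ("hardcoded"):
    if message in history:
        return f"You already asked: {message!r}"
    return f"First time hearing: {message!r}"

history: list[str] = []
print(respond(history, "Do AI entities have free will?"))
history.append("Do AI entities have free will?")
print(respond(history, "Do AI entities have free will?"))
```

The second call returns a different reply to the very same message, even though nothing about the function itself has changed: only its accumulated input has.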
Data
AI entities are designed to learn from the data they’re fed. That data is “worked upon” by the architecture and the code. Human brains work on data too. Human persons have data fed to them by their parents, teachers, friends, leaders, etc. AI entities, on the other hand, have data fed to them by their designers. Of course, human persons can seek out their own (new) data. Yet so too can AI entities.
It may be assumed that human persons can “make anything” of the data which they have “inside” their brains or minds, which they then use or process. Can they? Perhaps even that depends on the data that already exists in their brains or minds. In a loose sense at least, AI entities too can make something, if not anything, of the data they are fed.
AI Entities and Determinism
Let’s go down the route of saying that AI entities are fully determined by their structure, programming, etc. In common terms, given the same input, AI entities produce the same output. This means that stochastic elements (e.g., random number generators) are ignored here. Some people would argue that even with adaptive algorithms AI entities still play by the deterministic book.
So what accounts for the same chatbot, say, giving a different answer, at a different time, to exactly the same question? Is that programmed too? Yet perhaps now we’re using the word “programmed” so broadly that even human persons couldn’t escape the accusation.
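This can be made concrete with a toy sketch in Python (again, not a real chatbot: the candidate replies and their weights are invented for illustration). Real LLM chatbots vary their answers largely because they sample from a probability distribution over outputs; fix the random seed and even the sampling becomes deterministic.

```python
import random

# The "architecture": a fixed distribution over candidate replies.
# (The question is ignored here; this toy only models the sampling step.)
REPLIES = ["Yes.", "Probably.", "It depends."]
WEIGHTS = [0.5, 0.3, 0.2]

def answer(question: str, seed: int) -> str:
    """Same question plus same seed gives the same answer, every time."""
    rng = random.Random(seed)
    return rng.choices(REPLIES, weights=WEIGHTS, k=1)[0]

# Fixing the seed restores strict input-output determinism:
assert answer("Is it raining?", seed=42) == answer("Is it raining?", seed=42)

# Varying the seed (as unseeded sampling effectively does) yields
# different answers to exactly the same question:
distinct = {answer("Is it raining?", seed=s) for s in range(100)}
print(distinct)
```

On this picture, the “different answer to the same question” is still fully determined once the seed is counted as part of the input, which is precisely why the word “programmed” starts to stretch.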
There’s still a role here for what some commentators call “emergent behaviour”.
This is especially apparent when it comes to AI entities and their choices-in-scare-quotes. There is a word for this: “sourcehood”. When an AI entity is deemed to display sourcehood, that’s because its choices (or decision-making) reflect (or “mimic”) human/animal adaptive behaviour.
AI Entities as Persons With Free Will
A radical position could be adopted here. It can be stressed how the training data and interactions (with human persons) of AI entities aren’t static or fixed. We may even argue that a particular AI entity has a personality which emerges from its architectural “plasticity”.
On a similar subject. Having asked the chatbot Grok many questions about Grok itself, I’ve seen it (seemingly) reflect on its own nature. For example, I’ve asked Grok questions about its political bias, why it talks in a different way to different people, etc. Traditionally, such self-reflection-in-scare-quotes has been tied to the notion of a person. In other words, an entity couldn’t be a person if it couldn’t reflect on itself. In this case, then, personhood isn’t necessarily tied to biology. Self-reflection, and personhood, are then tied to free will. Of course, this self-reflection is said to be programmed too. Yet so is the self-reflection of human persons, if not to the same degree.
The broad take on AI entities above is in tune with compatibilist views on free will. Like Daniel Dennett, we can take a midway position between those people who overstress nature and those people who overstress nurture.