Saturday, 20 December 2025

Do AI Entities Display Free Will? The Arguments Against Can Be Used Against Humans Too

 

This essay doesn’t really argue that AI entities have free will. (It doesn’t take a position on free will.) It states that the arguments against such entities having (rather than simply displaying) free will could be applied to human persons too. Thus, the nature of AI entities puts in sharp relief the nature (or existence) of free will when it comes to human persons.

Image by Grok, after a prompt by the writer.

Philosophical Free Will?

In this debate, readers may note the words “philosophical free will” being used a fair bit. (The “self-autonomy” of an AI entity is “distinct from philosophical free will”.) Is there such a thing as philosophical free will? Don’t philosophers take different positions on this? Indeed, don’t some philosophers deny that there’s even such a thing as free will?… Unless the writers who use the phrase “philosophical free will” are referring to free will as philosophers conceive of it, rather than to something which only philosophers believe in…

On this philosophical theme, we can now ask if AI entities are — or could be — what philosophers call “intentional systems”.

AI entities can be, and have been, seen as “intentional systems”, and even as “intentional agents”. One reason for that is that some AI entities choose from among different possibilities, and exercise causal control over what they do. Many commentators believe that AI entities can set and pursue goals too.

In simple terms, the prior state of an AI entity causes the subsequent state of that entity. And this, at least in a loose sense, is what’s demanded of human persons in order to show that they instantiate free will. (The alternative is an action that’s completely uncaused, and therefore, presumably, random.)

Of course, classing an AI entity as an intentional agent which displays free will is an entirely pragmatic choice, based on pragmatic definitions. The American philosopher William G. Lycan captures an element of this in the following passage:

“Plainly [the robot’s] acquaintances would tend from the first to see him as a person, even if they were aware of his dubious antecedents. I think it is a plain psychological fact, if nothing more, that we could not help treating him as a person [ ].”

What Is Free Will?

William Lycan captures the main problem with (in his case) robot minds: free will. He first tells us what he believes many laypersons state:

“Computers, after all, (let us all say it together:) ‘only do what they are told/programmed to do’.”

Then Lycan adds his own elaboration on this theme: “they have no spontaneity and no freedom of choice”. Human persons, on the other hand, “choose all the time, and the ensuing states of the world often depend entirely on these choices”.

Lycan classes himself as a “Soft Determinist”. He’s a soft determinist who believes that

“to have freedom of choice in acting is (roughly) for one’s action to proceed out of one’s own desires, deliberation, will, and intention, rather than being compelled or coerced by external forces regardless of my desires or will”.

This isn’t an internalist notion of freedom of choice, in that it only factors in those “external” forces which limit freedom (such as coercion from the outside). In other words, human persons have freedom of choice if no one (and nothing) is compelling or coercing them to choose in a particular direction. Basically, if there’s no coercion (even in the form of mere words), then human persons are free to choose. Yet there’s nothing in the passage above about the mind as it functions regardless of external factors; nothing about the brain, mind or person which (and who) is free to choose; nothing about the nature (or existence) of the will itself; and nothing about the nature of the desires of human persons, or why they have those desires.

Some philosophers and laypeople believe that “free actions are uncaused actions”. Lycan himself (as a soft determinist) doesn’t believe this. He believes that

“free actions are those that *I* cause, i.e., that are caused by my own mental processes rather than by something pressing on me from the outside”.

In the case of an AI entity such as a sophisticated robot, many of its “actions” are caused by its own prior states. Therefore, it causes them. Indeed, after it’s turned on, such a robot may, for some time at least, have nothing “pressing on [it] from the outside”. Yet it does still face the pressure of the environment, as human persons do.

Now take Large Language Models.

Of course, an LLM (or at least a chatbot reliant on an LLM) needs to be asked a question, etc. However, once this is done, there are no external pressures. In both the robot’s case and the LLM’s, then, there are still internal processes going on.

Let’s now recall Lycan’s words quoted earlier. He stated:

“Computers, after all, (let us all say it together:) ‘only do what they are told/programmed to do’.”

The Programming of Human Persons

It can be argued that human persons are programmed by their parents, by teachers, by the religions and ideologies they adopt or are born into, by their genes, etc. Sure, this may not be complete programming. But it is programming to some degree. Yet many would argue that even within that programming, there’s still free will. However, if there is free will, then isn't programming simply the wrong word to use? Perhaps, then, a degree of programming (in these respects at least) can exist alongside free will. Alternatively, programming can exist alongside a degree of freedom.

Do AI entities “have no spontaneity and no freedom of choice”?

Is free will the ability to choose between multiple options which the agent is aware of? In the case of human persons, this is often deemed to be the case. What about AI entities? The usual and repetitive reply would be to say that all the options for an AI entity are a result of programming too. So is the choice between the options determined? If it is determined, then how could it be a genuine choice?…

Yet much of this could be said of human persons too.

In one way, AI entities do choose.

Take this example. If I ask a chatbot a question and it delivers an answer, and I then ask the exact same question again (perhaps in another thread, so that it doesn’t tell me I’ve just asked the same question), it will answer differently. It is free to choose how to answer. (If readers use the “Explain this post” function on X, wait a few minutes, and then use it again on the same post, then they’ll find that the explanation has changed.)

AI entities do display what’s been called “self-autonomy”, in that they choose how to answer a question, their own sources of power, etc. Of course, answering the same question in different ways is also a result of programming, in that a chatbot, for example, is programmed to respond to the style of the user. (Readers may have noted how some chatbots pick up on the words they use and incorporate them into their own answers, without putting them in inverted commas.)

This is when the sceptic says that it’s only free to choose because that ability is itself programmed into the chatbot. Indeed, even if the chatbot answered the same question in a multitude of different ways, its multitude of different answers would still be a result of programming…

Yet much of this could be said of human persons too.

Lycan tells us that human persons “choose all the time”. What if their choices are programmed too? The choices open to a particular human person are only those which he’s aware of, and which he sees as choices. Such a person is never aware of all the possible choices he has in any given situation. What’s more, the choices he does make are determined, to some degree at least, by biology, memory, upbringing, genetics, education, environment, etc.

The sceptic may accept all this and still state:

Sure, but they’re still choices, even if such choices all occur within a given set of parameters.

Random Choice?

One way to express this lack of free will when it comes to AI entities is to state this:

Given the same inputs, an AI entity will always produce the same output or action, unless a source of randomness is introduced.

Yet this may well also apply to human persons.

Many believers in human free will wouldn’t like to use the word “random” about the nature of their choices, or about anything else. Yet on certain readings of what free will is, a free choice would actually need to be random. If an action were entirely based on the physical and psychological state of the human person which immediately preceded it, then that’s bound to smell of determinism. Yet a truly random output or action wouldn’t display human free will either.
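The sceptic’s claim above can be put in toy form. What follows is a minimal Python sketch, not any real model’s API: a hypothetical next-token “model” (the probability table and the `respond` function are inventions for illustration) whose output is fixed when decoding is greedy, and varies only once a source of randomness (a sampling “temperature”) is introduced.

```python
import random

# A toy stand-in for a language model's next-token probabilities.
# Entirely hypothetical: a real LLM computes these from its weights.
NEXT_TOKEN_PROBS = {
    "The cat": [("sat", 0.6), ("ran", 0.3), ("slept", 0.1)],
}

def respond(prompt, temperature=0.0, rng=None):
    """Return a next token for the prompt.

    With temperature == 0 the choice is deterministic (the most
    probable token is always picked), so the same input always
    yields the same output. With temperature > 0 a source of
    randomness is introduced, and identical prompts can yield
    different answers.
    """
    candidates = NEXT_TOKEN_PROBS[prompt]
    if temperature == 0.0:
        return max(candidates, key=lambda pair: pair[1])[0]
    rng = rng or random.Random()
    tokens, weights = zip(*candidates)
    return rng.choices(tokens, weights=weights, k=1)[0]

# Deterministic case: same input, same output, every time.
greedy = [respond("The cat") for _ in range(5)]

# Stochastic case: one seeded randomness source, many draws.
rng = random.Random(0)
sampled = {respond("The cat", temperature=1.0, rng=rng) for _ in range(100)}
```

Real chatbots do sample from their next-token probabilities in roughly this way (with vastly larger vocabularies), which is one reason the same question can receive different answers. The sceptic’s point survives the sketch: both the fixed answer and the varying ones flow from the program and its randomness source.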
