Friday, 6 March 2026

Pruning the Universe’s Endless Fine-Tunings

When it comes to the universe’s fine-tunings, the more time goes by, the more examples are (to use Martin Rees’s word) “seen”. Indeed, it seems that there are innumerable fine-tunings. This fact alone leads some people to believe in God or “intelligent design”. Others may conclude that, precisely because fine-tunings are so commonplace, we shouldn’t read too much into them in the first place. One way to prune these innumerable fine-tunings is to differentiate “first-class” from “second-class” fine-tunings, which is what this essay attempts to do.

Image by ChatGPT

“Wherever physicists look, they see examples of fine‑tuning.”

— Sir Martin Rees (See here.)

“Imagine a puddle waking up one morning and thinking, ‘This is an interesting world I find myself in — an interesting hole I find myself in — fits me rather neatly, doesn’t it? It must have been made to have me in it!’”

— Douglas Adams (See here.)

The English physicist and writer Paul Davies (in his book The Cosmic Blueprint) wrote that the

“universe looks as if it is unfolding according to some plan or blueprint”.

The problem here is that if the laws and initial conditions determine the constraints and limits of everything that occurs in the universe, then they could well be seen as the cosmic blueprint. This effectively means that every physicist believes in the cosmic blueprint! Of course, plans are devised by human minds or by God’s mind. That is the additional part of the package which most physicists do not accept.

So physicists do often sound like believers in Intelligent Design when discussing fine-tuning. However, they state: “Our theory is incomplete.” More relevantly, they don’t talk about choice or purpose.

God has just been mentioned

One point that Paul Davies repeatedly makes is that, unlike Creationists, Intelligent Design advocates and others who say that God must have designed the fantastic complexity and structural fit of the world, he believes that much of this can be explained without even mentioning the Judeo-Christian God. Instead, it’s the laws and initial conditions themselves which need to be explained.

First-Class and Second-Class Fine-Tunings

First-Class Fine-Tunings:

fundamental constants, particle masses, couplings, the cosmological constant, dimensionality, initial conditions, etc.

Davies often asks: What of the initial conditions or laws themselves? He answers in the following way:

“We must either accept them as truly amazing brute facts, or seek a deeper explanation.”

Needless to say, Davies doesn’t accept them as being brute facts. He attempts to offer a deeper explanation of them. (Davies’s use of the term “brute fact” can be questioned here.)

Second-Class Fine-Tunings:

galaxy formation, star lifetimes, chemistry, planets, biochemistry, etc.

The claim here isn’t that the second class of “fine-tunings” consists of arbitrary or irrelevant outcomes of the first class. The second class is, however, constrained by the first class. Small changes in the first class can have big consequences for the second class.

So why the interest in, and even concentration upon, the second class of “fine-tunings” by so many people? Such people emphasise the “improbabilities” which occur in the second class. But these are to be expected. Improbable things happen all the time in large, contingent systems.

More relevantly, if something is highly improbable, then that doesn’t mean it is fine-tuned.

Let’s now focus on a second-class “fine-tuning” that’s become a favourite.

Fred Hoyle clearly recognised that the carbon resonance looked highly specific. His own phrasing was that it seems to be “a put-up job”. He also said that it’s “as if a super-intellect had monkeyed with physics”.

Thus, Hoyle acknowledged the appearance of fine-tuning, and he took it seriously. However, he didn’t treat that appearance as final or ultimate.

Let’s return to Paul Davies

First-Class Fine-Tunings and Evolution

Davies believes in evolution both for the animal kingdom and for the universe itself. However,

“[w]hen it comes to the laws of physics and the initial cosmological conditions [ ] there is no ensemble of competitors”.

In simple terms, there is no evolution when it comes to the laws of physics and the initial conditions. These things are required to allow all future evolutionary processes to begin. In evolutionary-speak, the laws didn’t need to compete with… anything. They were (or are) simply given. Yet Davies believes that they should still be explained.

Davies argues that once the laws and initial conditions are in place, then there’s almost no limit to the complexity and structural fit that can follow. In Davies’s own words:

“Unlike mechanisms, which can slowly evolve to more complex or organised forms over time, the ‘crossword’ of particle physics comes ready-made. The links do not evolve, they are simply there, in the underlying laws.”

Davies puts this in another simpler way when he tells his readers that “[t]he input is the cosmic initial conditions, and the output is organized complexity, or depth”.

Yet another way of putting this is the following: despite the various and numerous manifestations of complexity and structural fit, they occur in the way they did (or do) because the “crossword” was already “simply there”. The manifestations may be interesting on their own terms. However, the initial conditions and laws are even more interesting (at least to many fundamental physicists and cosmologists).

Yet there is a problem here.

Once the base parameters are fixed, you can generate an arbitrarily long list of “If X were slightly different, then Y wouldn’t occur” statements. It may well be the case that if X were slightly different, then Y wouldn’t occur. But does that make X fine-tuned?

It seems obvious that if X were different, then what follows from X would be different too.

Davies’s Unspoken Example

Davies himself points to the distinction between the first class and the second class of fine-tunings, without saying that this is his purpose. He writes:

“It is particularly striking how processes that occur on a microscopic scale — say, in nuclear physics — seem to be fine-tuned to produce interesting and varied effects on a much larger scale — for example, in astrophysics.”

The underlying point is that the second class of “fine-tunings” is downstream of the first class.

The point here is that just because the first class is required for the second class (in this case, conditions and states in nuclear physics and things which occur in astrophysics), that doesn’t mean that anything that happens in the second class must include fine-tunings too. Even if it can be said that certain things occur, and (partially) have the physical nature that they do, because of the fine-tunings of the first class, that still doesn’t mean that the second class itself includes its own fine-tunings.

So it will help readers to see the example Davies himself gives of this. He continues:

“Thus we find that the force of gravity combined with the thermodynamical and mechanical properties of hydrogen gas are such as to create large numbers of balls of gas. These balls are large enough to trigger nuclear reactions, but not so large as to collapse rapidly into black holes. In this way, stable stars are born. Many large stars die in spectacular fashion by exploding as so-called supernovae.”

Here we move from the small scale of the force of gravity and the thermodynamical and mechanical properties of hydrogen gas to the large scale of stars and black holes. It’s here that we can distinguish the first class of genuine fine-tunings from what occurs downstream of that class.

Davies provides another example. In The Mind of God, he wrote:

“Suppose it could be demonstrated that life would be impossible unless the ratio of the mass of the electron to that of the proton was within 0.00000000001 percent of some completely independent number — say, one hundred times the ratio of the densities of water and mercury at 18 degrees centigrade (64.4 degrees Fahrenheit).”

The electron-proton mass ratio (which, sure enough, directly affects chemistry and atomic structure) is mentioned a lot in these debates. (Davies himself mentioned the electron-proton mass ratio directly above.) Yet isn’t this smuggling in yet another “miracle”, one which is actually a consequence of the miracle of the values of the proton and electron?

So what about the electron itself and its values?

Take the following two conditionals:

If an electron were to “lose” any of its properties of mass, charge and spin,

then it wouldn’t be an electron at all.

More relevantly and with a little more detail:

If an electron didn’t have a charge of −1, a mass of 9.109389 × 10⁻³¹ kg and spin,

then it wouldn’t be an electron.

In other words, if the electron has precise properties/values, and the proton has precise properties/values, then the ratio between them must be equally precise too. More clearly, if the electron wouldn’t be the particle it is without its precise values, then were the electron-proton mass ratio different, we wouldn’t actually be talking about the electron at all. The only way the ratio could change is if the electron became a non-electron. (This would be a new particle that could exist in a “toy universe”.)

So, here again, people are being profligate with their “surprises”. Yet the mass ratio isn’t a surprise: it strictly follows from the natures and values of the electron and proton taken individually. Sure enough, we can be surprised by the values of protons and electrons themselves. But the mass ratio itself isn’t “miraculous”. Once the two masses are “in place”, the ratio is determined.
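To make that arithmetic explicit, here’s a minimal sketch in Python. The masses are the CODATA 2018 values, which are an assumption of this illustration rather than anything taken from Davies:

# Once the electron and proton masses are fixed, their ratio is fixed too:
# it's simple division, with no further "tuning" entering the picture.
# The masses below are CODATA 2018 values (an assumption of this sketch).

m_electron = 9.1093837015e-31    # electron mass in kg
m_proton = 1.67262192369e-27     # proton mass in kg

ratio = m_proton / m_electron
print(f"proton/electron mass ratio = {ratio:.2f}")    # roughly 1836.15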

Davies himself asks a question about the supposedly “fishy” nature of these and other (what he calls) “coincidences”. He asks what makes them intrinsically improbable. For example:

“From what range might the value of, say, the strength of the nuclear force (which fixes the position of the Hoyle resonances, for example) be selected? If the range is infinite, then any finite range of values might be considered to have zero probability of being selected. But then we should be equally surprised however weakly the requirements for life constrain those values.”

In simple terms, what are we comparing the (actual) strength of the nuclear force to? What criteria determine the status of its values? If the nuclear force could (or might) have had any value in an infinite range of possible values, then each possible selection would “have zero probability of being selected”. But then Davies states something which I’ve noted many times myself. Any alternative to the fine-tunings would cause equal surprise. In Davies’s example, any alternative that still allows for life would cause as much surprise as the actual value of the nuclear force.
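As a back-of-the-envelope illustration of Davies’s worry, here’s a short sketch in which the window and range widths are arbitrary placeholders rather than physical values. If a value is drawn uniformly from a range of width L, the probability of its landing in any fixed window of width w is w/L, which shrinks towards zero as the range grows:

# Arbitrary placeholder numbers, used only to illustrate Davies's
# "zero probability" worry about an infinite range of possible values.

window = 1.0    # width of some fixed "life-permitting" interval

for total_range in (10.0, 1e3, 1e6, 1e9):
    probability = window / total_range    # uniform draw over the range
    print(f"range width {total_range:.0e}: P(in window) = {probability:.1e}")

# As the assumed range tends to infinity, the probability of any fixed
# finite window tends to zero. "Improbability" alone therefore tells us
# little without a principled choice of the range.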

The problem here is that this is an example (fine-tuning for life) from the second class of fine-tunings. And that’s why other values could work too. Such things cannot be said of the first class of fine-tunings.

Note:

Excitement and surprise about improbabilities remind me of something Richard Feynman once said:

“You know, the most amazing thing happened to me tonight. I saw a car with the license plate ARW 357. Can you imagine? Of all the millions of license plates in the state, what was the chance that I would see that particular one tonight? Amazing!”

This passage from Feynman can be reformulated as this simple question:

Of all the millions of license plates in the state, why did I see that particular one tonight?

This question isn’t in exactly the same ballpark as talk of fine-tunings. However, it is on the borders of that ballpark.

To spell it out: it isn’t amazing that Feynman should have seen that particular number plate, even though seeing that exact plate was (trivially) improbable, since he had to see some plate or other. So it may not be such a deep mystery that laws or constants have the values which they do have. Moreover, perhaps there’s no deep answer (other than mundane facts about probabilities) to the question as to why Feynman should have seen that number plate when he did. Similarly with the values of particles, constants, etc. In other words, beyond the fact that these things are the way they are, there may be nothing more to say.
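A toy calculation makes the Feynman point vivid. The number of plates below is invented purely for illustration: seeing some plate or other is certain, while seeing any particular pre-specified plate is very unlikely, yet no particular observation calls for a special explanation.

# Toy illustration of Feynman's license-plate point (the plate count is invented).

n_plates = 10_000_000    # assumed number of plates in the state

p_specific = 1 / n_plates    # chance of seeing one pre-specified plate
p_some = 1.0                 # chance of seeing some plate or other, given that you look

print(f"P(that exact plate, specified in advance) = {p_specific:.1e}")
print(f"P(some plate or other) = {p_some}")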

Friday, 27 February 2026

AI Machines, Emotion and the Singularity

 

In the following essay, it will be assumed that there will be some kind of (technological) singularity in the future. Of course, this is much debated. However, that debate won’t matter too much here, where the focus is on the relevance of emotions when it comes to AI machines and a possible singularity.

Image from Wiki Commons. Source here.

Ultraintelligent and Docile Machines?

The Singularity is “the proposed point in time at which machines become more intelligent than humans”.

According to many people, the true significance of the Singularity was captured by the British mathematician Irving John Good way back in 1965. He wrote:

“The first ultraintelligent machine is the last invention that man need ever make [ ].”

Is this a positive or negative proclamation on Good’s part?

It depends on your values and beliefs.

Does it follow that, even if we were to create ultraintelligent machines, man need never invent anything again? Not really. Of course, Jack Good might well have meant that men need not invent anything after this event. However, even this isn’t clear, because it depends on what Good believed were the concrete consequences of the existence of ultraintelligent machines. After all, one positive consequence may be that, despite the designation “ultraintelligent”, such machines still (as it were) feel the need to work with human beings. A negative consequence may be that machines actually stop human beings from inventing anything.

In any case, I purposely left out the final clause from the Good quotation above. The full quote is the following:

“The first ultraintelligent machine is the last invention that man need ever make — provided that the machine is docile enough to tell us how to keep it under control.”

It’s the content of that last clause which worries so many people.

Firstly, it needs to be asked why Good used the word “machine” in the singular. Why didn’t he refer to “machines” in the plural? It’s odd to believe that a single machine, even if ultraintelligent, could make human invention redundant. It’s a lot less odd if Good were talking about machines in the plural. So perhaps he simply meant that once we have a single ultraintelligent machine, we’d soon have many more ultraintelligent machines too. And that certainly follows.

After all, one important factor in this debate is the ability of intelligent systems to copy themselves. That really does open the floodgates. That’s because if machines can copy themselves, then they can improve themselves too. Thus, these systems no longer need to rely on human beings when it comes to such improvements. (This is already the case in certain instances.) Does this mean that the improvements will come faster? Many believe that it does.

Some sceptics, as well as some AI evangelists, may argue that the juxtaposition of docility and ultraintelligence is (almost?) a contradiction in terms. How can anything that’s ultraintelligent be docile too? (There are many intelligent human beings who are docile.)

Some may conclude, as Good himself did, that this ultraintelligent machine must be programmed to be docile. This raises an obvious question: Why would an ultraintelligent machine abide by such programming in all circumstances? It can be supposed that, ultimately, this is a technical question. It’s certainly a hard question to answer.

Intelligence?

One small problem here is that we first need to define the word “intelligence”. That said, however we define that word, the possibility of the Singularity is still with us. It can be supposed that if “emotional intelligence” is part of the package of intelligence, then there may be problems. (See the later section.) Yet I doubt that would make much of an impact on the Singularity either.

In any case, AI machines are already more intelligent than humans in various respects. In which respects?… Yes, it’s here that we must manoeuvre back to the question of defining the word “intelligence” again. So let’s completely forget about defining “intelligence”…

AIs have direct access to more data than human beings. They often have quicker reasoning skills. They’re often better and quicker at constructing sentences. They can solve mathematical problems quicker than most human beings. Etc.

Indeed, many of these realities hint at the Singularity too.

AI Machines, Emotion and the Singularity

Earlier on, “emotional intelligence” was mentioned. Many would argue that so far artificial intelligence has been “all about logicality”, not emotional or social intelligence. It’s hard to shoehorn a discussion of emotional intelligence in here. However, even without emotional intelligence, the Singularity may still occur. So it can be asked: if the Singularity occurs without AI machines instantiating emotional intelligence, would that automatically be a bad thing?

The thing is, aren’t human beings emotional in many different — sometimes contradictory — ways? Aren’t there varying degrees of human emotional intelligence? Added to that is the fact that emotion is often fused with belief and values. It rarely comes free. The basic upshot, then, is that when it comes to human beings, emotion isn’t always a good thing. So why would it automatically be a bad thing if AI machines didn’t have emotional intelligence or even emotions pure and simple?

One could even argue that a lack of emotion in AI machines is a positive, in that it’s doubtful that, without emotion, such machines would feel the need to wipe out humankind. It’s certainly the case that all human examples of genocide, mass murder, torture, etc. have been at least largely the result of human emotion. So rather than curtailing such things, emotion often actually “encourages” them. All that said, who’s to say that annihilation can’t be a purely rational choice?

This is a good time to bring up Star Trek’s Spock, who was half human and half Vulcan.

Vulcans are often deemed to be “purely logical” beings, whereas humans are creatures of emotion. Yet it’s clear, according to the writers and “experts”, that Vulcans aren’t purely logical at all. Instead, they’re deemed to control or even “suppress” their emotions.

AI Machines, Vulcans and Mass Murder

In a conversation with a chatbot, Spock’s adherence to the Vulcan quasi-utilitarian precept that “the needs of the many outweigh the needs of the few” was mentioned. I raised the possibility that “purely logical” Vulcans could become mass murderers. The chatbot agreed when it stated the following:

“Unlike humans, who might hesitate due to empathy or moral qualms, a fully logical Vulcan lacks emotional barriers to extreme actions. If mass murder aligns with their calculated optimal outcome (e.g., preventing a war or resource depletion), they could pursue it without guilt.”

There is some agreement with this position on utilitarianism and mass murder. For example, Laurie Calhoun (in her article ‘Killing, Letting Die, and the Alleged Necessity of Military Intervention’) wrote:

“Consistent utilitarians are ready and willing even to kill innocent people, if necessary. [ ] If more people will die if one does nothing than if one goes to war, then, in this view, one is morally obliged to go to war.”

To state what should be obvious, moral qualms and even empathy haven’t necessarily got in the way of extreme actions when it comes to human beings. Indeed, moral qualms can actually lead to extreme actions. As for human empathy, isn’t it often very selective?

Thus, perhaps the limitations placed on AI machines and Vulcans by logic and rationality are stronger than the limitations placed on human beings by their moral qualms and empathy.

As for Vulcans or AI machines logically concluding that exterminating a hostile species is preferable to prolonged conflict: how many times in human history have (emotional) human beings exterminated hostile forces, cultures and communities on the pretext that the alternatives to doing so were even worse? Although in these cases utilitarianism was never mentioned, there was still a kind of utilitarian logic that at least partially underlay such exterminations.

To return to the chatbot: it went into scary (as well as rather predictable) self-referential territory when it (as it were) admitted that an AI machine

“might similarly justify extreme actions if its algorithm calculates that killing many saves more (e.g., a hypothetical AI managing resources during a crisis)”.

Again, the obvious point here is that human rationality, logic and emotion have led to mass murder and many other extreme actions too.