Sunday, 22 March 2026

Liam Kofi Bright on Analytic Philosophers Fiddling, While the World Burns

 


This short piece is a response to Liam Kofi Bright’s essay ‘The End of Analytic Philosophy’. (It’s partially a response to Walter Veit’s same-titled ‘The End of Analytic Philosophy’ too.) Bright argues that young philosophers want to “change the world, not just understand it”. This is, of course, a rewriting of Marx’s well-known words: “Philosophers have only interpreted the world, the point however is to change it.” Marx didn’t mention understanding at all. He didn’t even conclude, “not just interpret the world”. Bright, on the other hand, does say, “not just understand it”. Yet I’d argue that this is a difference that effectively doesn’t make a difference when the goal of radical political change is always and forever in the driving seat.

Liam Kofi Bright
“[W]hen students ask us why they should major in something as apparently fanciful as philosophy while the world burns, we want to have something to say besides ‘stick with us, we’re pretty sure that any day now we will have a viable theory of reference magnetism’.”


Liam Bright’s Own Successor Paradigm

Liam Bright talks about there being no “successor paradigm” to analytic philosophy. Some readers of his work may assume that Bright actually does have a successor paradigm in mind — his own! Yet that won’t really be a successor paradigm at all: it’ll be a complete alternative. That explains why Bright believes that no successor paradigm could replace analytic philosophy.

All this chimes in with Christoph Schuringa’s ideas — and in many ways. (Schuringa’s relevant ideas can be found in his ‘The never-ending death of analytic philosophy’.)

Schuringa has written a lot on Marx. So perhaps Marx’s own well-known words against philosophy may well sum up Schuringa’s own position on analytic philosophy. Indeed, there’s a reference to “Marx on the ‘supersession’ of philosophy” on Schuringa’s academic webpage. Now are Schuringa’s words “supersession of philosophy” basically the same as Bright’s “successor paradigm”?

Both Bright and Schuringa believe that analytic philosophy is politically flawed and politically compromised. One example, among many, is the following extract from Schuringa:

“Philosophy as a discipline has a huge whiteness problem, and it is right that the hegemony of Western philosophy in the academy must be addressed if the curriculum is to be effectively decolonized.”

The above isn’t quite as shouty as Professor Van Norden’s words (as can be found in his ‘Western philosophy is racist’):

“Mainstream philosophy in the so-called West is narrow-minded, unimaginative, and even xenophobic. [ ] Academic philosophy in ‘the West’ ignores and disdains the thought traditions of China, India and Africa. This must change.”

This means that Professor Van Norden trumps Professor Schuringa in that he widens the target to “mainstream philosophy” and “Western philosophy” as a whole.

Perhaps all this partially explains why, according to Walter Veit, naturalised philosophy wasn’t even considered by Liam Bright. That’s because this “third kind of philosophy” will inevitably be deemed to be politically flawed and politically compromised too.

Others on the Death of Analytic Philosophy

Many of those who write titles like ‘The never-ending death of analytic philosophy’, ‘The End of Philosophy’, etc. come at this issue from an almost exclusively political angle.

Bright and Schuringa themselves aren’t former analytic philosophers who rebelled against it, but were always on the outside looking in. The following is from one of Schuringa’s early academic biographies:

“I am Assistant Professor in Philosophy at the New College of the Humanities (part of Northeastern University), and Editor of the Hegel Bulletin.[ ] My chief interests are in the history of philosophy (especially Kant, Hegel, and Marx), in the traditions of Marxism and critical theory, and in social and political thought more widely. Specific current research projects concern Marx’s critique of Hegel, the concept of Gattungswesen, and ‘tragic’ conceptions of philosophy. My large-scale current project is a monograph I am writing on Marx. [ ] I was recently in the Philosopher’s Zone [ ] talking about the history, and prospects, of analytic philosophy [ ].”

There’s not much about analytic philosophy in the biography above — save for the final bit about “talking about the history, and prospects, of analytic philosophy”. In addition, none of Schuringa’s publications, dating back to 2011, can be classed as analytic philosophy.

Similarly, when one looks at Liam Bright’s publications, there’s no analytic philosophy at all. Take these titles dating back to 2014:

‘On The Stability of Racial Capitalism’ (2025), ‘Du Boisian Leadership through Standpoint Epistemology’ (2024), ‘To Be Scientific Is To Be Communist’ (2023), ‘White Psychodrama’ (2023), ‘Causally Interpreting Intersectionality Theory’ (2016), and Bright’s first paper, ‘What is the State of Blacks in Philosophy?’ (2014).

Now there’s nothing wrong with any of these subjects being studied and written about. It’s just that they aren’t analytic philosophy. They are classic examples of academic political activism.

Neither Schuringa nor Bright is like Richard Rorty in these respects. They didn’t take umbrage at analytic philosophy after writing many — or even just some — papers within that tradition. They didn’t grow bored with analytic philosophy either, as some argue Rorty did. Instead, Schuringa and Bright probably had problems with analytic philosophy from the very beginning of their publishing careers, if not even before that.

True of All Academics

Is there a (to use Bright’s words) “well-validated and rational-consensus-generating theory of grand topics of interest” in other types of philosophy? In all other disciplines? What’s more, is it necessary that there should be “grand topics”? Much of what Bright says about analytic philosophy can be said about other types of philosophy and other disciplines too. But he doesn’t say it about them.

Bright states the following:

“Many philosophers strike me as like Polish apparatchiks in 1983 — they turn up to work and do what they did yesterday just because they don’t know what else to do, not because they seriously believe in the system they are maintaining.”

Again, all the above can be said about all types of professional philosophers. It can be said about sociologists, physicists, political scientists, etc. too. What’s more, it can even be said about academic philosophers with strong political biases, especially if their own brands of politicised philosophy chime in well with the departments they teach at.

Bright also tells us that

“it’s not been fully appreciated how much of a blow it is to the confidence of the field’s youth that scientific ambitions are increasingly abandoned as untenable”.

Readers may wonder if Bright has done any surveys or empirical research on what “the field’s youth” believe when it comes to analytic philosophy (or its “triple failure of confidence”). However, Bright does state the following:

“My anecdotal impression is that junior philosophers are hyper aware of these bleak prospects for anything like creation of a shared scientific paradigm.”

Without examples and detail, it’s hard to know which “problems” Bright is referring to. So it can’t be known if they have been “solve[d]” or not. What’s more, who, exactly, decides whether a problem is “worth solving in the first place”?

“Now is a time of woe for analytic philosophy.”

Bright’s essay, at least the extracts selected by Walter Veit, is hyperbolic and highly general.

For a start, why should analytic philosophy be seen as (or even be) “a [single!] research programme”? And why should there be a single “shared project”? Moreover, the examples Bright gives of a shared project (“analysing key concepts or a mutual commitment to the linguistic turn”) weren’t really “projects” at all — they were ways or methods of doing philosophy.

Oddly, Bright himself acknowledges that “the lack of such shared projects in themselves didn’t really cause a problem for the field”. On this, Bright mentions Rorty’s position that “analytic philosophy is held together mainly by a certain kind of style and sociological bonds among its practitioners”.

Analytic Philosophy and Science

Veit quotes Bright saying that “[a]nalytic philosophy has long had ambitions to something like scientific status”. That statement should really be rewritten as:

Some — perhaps even many — analytic philosophers in the 1920s to the 1950s had ambitions to make their work more scientific in nature. And, yes, sometimes some philosophers who had this ambition did reach the point of “cringingly insecure self parody”.

Interestingly, this was also the case with philosophers well outside the analytic philosophy tradition, such as Karl Marx, Edmund Husserl, Jacques Lacan, Ferdinand de Saussure, Louis Althusser, Michel Foucault, Bruno Latour, etc. This is something Veit too recognises when he says that

“in many ways naturalist philosophy can often be closer to the work of continental philosophers, who had naturalist leanings, such as Friedrich Nietzsche, Edmund Husserl, Maurice Merleau-Ponty, or Georges Canguilhem”.

Colin McGinn on Philosophy and Science

In a sense, Bright’s point about the “science envy” (a cliché he doesn’t actually use) which analytic philosophers supposedly suffer from is answered in an article he actually links to in his piece. In that New York Times article (‘Name Calling: Philosophy as Ontical Science’), Colin McGinn writes:

“[P]hilosophy so conceived is best classified as a science, because of its rigor, technicality, universality, falsifiability, connection with other sciences, and concern with the nature of objective being (among other reasons).”

McGinn then added:

“I did not claim, however, that it is an empirical science, like physics and chemistry [ ].”

The New York Times piece which Bright links to therefore shows that he treats analytic philosophers almost as straw targets. No philosopher has ever argued that analytic philosophy is an empirical science. In fact, Wittgenstein and many others went out of their way to say that philosophy isn’t a science. This is odd, then, because Bright has a penchant for the logical positivists, who did see their philosophies as being “scientific”. Yet this too all boils down to politics. Bright has a penchant for the logical positivists because some of them were socialists. (Otto Neurath and Rudolf Carnap were socialists, as well as some lesser names.) So some of them, as a consequence, attempted to link philosophy (or their philosophies) to politics and societal change, just like Bright himself. (The logical-positivist movement actually included “conservative right”, “radical left”, and “liberal” wings, who all lived and worked within “Red Vienna”.)

To return to McGinn.

McGinn does state that “philosophy so conceived is best classified as a science”. Personally, I’m not sure about that. However, what matters to me is that good philosophy does employ “rigor, technicality, universality, falsifiability, connection with other sciences, and concern with the nature of objective being”. To qualify even more: I’m not even sure about all those characteristics either. McGinn has smuggled philosophical positions in with philosophical methods. (For example, “universality” and “the concern with the nature of objective being” are positions within philosophy, not ways of doing philosophy.) However, rigor, varying degrees of technicality and connections with the sciences are indeed important.

Whether all this makes such philosophy a science is another matter.

Science and Philosophical Naturalism

Walter Veit says that “[p]hilosophy in this vision is part of science”. He explains:

“Naturalist philosophers are excited about the progress enabled on old philosophical problems with the aid of the sciences, be that the mind, the nature of life, or the structure of reality.”

Veit even offers his readers some concrete examples of what philosophy could be like. He continues:

“[I] was able to work together with scientists, such as Nicola Clayton’s corvid lab, to bring us closer to answering what it is like to be a crow, my work with biologists at Oxford in measuring biological complexity, or my ongoing work on several projects together with animal welfare scientists.”

Is all that (as Bright puts it) “cringingly insecure self parody” too?

Veit also states the following:

“Some of my readers, no doubt, will already be skeptical of analytic philosophy, not because they necessarily share my naturalist view of how the field should operate, but because they have a fondness for philosophers such as Nietzsche and the like that fill popular book sections.”

I believe most of the above also goes for Bright, although not in exactly the same way. Bright has a fondness for philosophers who’re explicitly political (just like himself), regardless of how the “field of analytic philosophy should operate”.


Note:

Liam Bright has something to say about “puzzles” too:

“[W]e will keep generating puzzles for any particular answer given, we will never persuade our colleagues who disagree, we will never finally settle what to say about the simple cases in order to be able to move on to the grand problems of philosophy.”

[Note the royal ‘we’!]

I too sometimes find some philosophical puzzles annoying. The problem here is that what many of the critics class as the “puzzles of analytic philosophy” aren’t actually puzzles in any strict sense of the word. Yet there are indeed puzzles in analytic philosophy: it’s just that critics deem many of the central subjects to be “mere puzzles” too.

Saturday, 21 March 2026

How and Why I Anthropomorphise Chatbots

 


This is a partly “confessional” essay about my own anthropomorphic tendencies when it comes to interacting with chatbots. I’ve tried to make sense of that by thinking about the history and psychology of anthropomorphism. I also cite examples of anthropomorphising, ranging from my own behaviour to a triangle perceived to be a bully, AI toys classed as “evil”, and a robot with and without a face.

Image created by ChatGPT, under the writer’s prompts.

Here are five confessional examples of my own anthropomorphic tendencies:

(1) Saying “Thanks” to a chatbot. (2) Thinking I’m tiring a chatbot out or boring it when I go into detail. (3) Embarrassment about certain details or questions. (4) Taking a chatbot’s criticisms personally. (5) Getting annoyed with a chatbot.

There are, of course, other examples I could have given.

Now, following my confessions, here are a few qualifications. When talking to a chatbot, I only feel as if I’m talking to a human person. That’s even though, every now and again, I need to (metaphorically) pinch my arm, and then tell myself that this isn’t a real person. However, I never actually believe that I’m talking to a human person. What’s the difference? For me, suspending disbelief simply makes the conversation (the word “conversation” itself may be an anthropomorphism) easier and more satisfying.

One psychologist, Norman N. Holland, even provided a neuroscientific theory of the suspension of disbelief. He argued that when a person watches a film, looks at a painting, etc. (or engages with a chatbot), the brain goes entirely into a “perceiving mode”. In other words, the brain as a whole cuts out the planning, acting and judging faculties needed in day-to-day life.

The KTH AI Society and Other Sceptics

The KTH AI Society (see here) tackles the issue of anthropomorphism in the following passage:

“Cleverly written rules and human irrationality has convinced many people that the chatbots they are communicating with online are real people.”

Is this claim true?

Perhaps many people are in the same situation I described in my own case earlier. Of course, some people will believe that they’re talking to a real person. Then again, some people believe they converse with aliens or get feedback from trees.

The KTH AI Society continued with a passage that can be taken to be plain wrong:

“Even with chatbots that have no ability to learn and are clearly not what we would consider ‘intelligent’, it is surprisingly easy to trick people into believing that their conversation partner is real.”

Note the scare quotes around the word “intelligent”. Whatever we believe about intelligence, the AI which does or does not display it needn’t also be taken to be a person.

In response to the sceptics and the KTH AI Society, one interesting phenomenon that’s worth commenting on is the view that chatbots, robots, etc. can develop human-like emotions even when they haven’t been programmed to do so. Thus, to use philosophy-speak, emotions emerge from the existing programming and hardware. [See note.]

Innate Anthropomorphism

These kinds of anthropomorphic tendencies would continue even if there were a categorical proof — as if! — that AIs weren’t conscious. It can be said that they’re hardwired into us all. It’s not surprising, then, that psychologists consider anthropomorphism to be innate. So what I’m guilty of will also apply to virtually all the adult persons who interact with chatbots. (I suspect that at least some adults will vociferously deny this.)

Just think about how wide and far anthropomorphism stretches. Both in and out of fiction, we have talking animals, talking trees, and sentient toys (see later). Human persons also “personify” nations and even races.

Regardless of innateness and ancient history, it can still be said that computer theorists and programmers are partly responsible for all this anthropomorphising. After all, it was these people who introduced such terms as “reason”, “think”, “hallucinate”, “catch a virus”, “decide”, “write” and “read”, “memory”, etc. into the language which we use to talk about AI.

Despite all that, it’s worth bringing in here what may seem like a technical point. There is some disagreement about these matters among experts. For example, what I’ve described as anthropomorphism can simply be seen in terms of “predictions” about the AI’s behaviour.

Good Anthropomorphism?

What’s important to realise is that anthropomorphism is sometimes acceptable and even wise. There’s a problem here, though: if someone knows full well that he or she is anthropomorphising, then is he or she anthropomorphising at all?

It has to be said that some examples — if they are examples at all — of anthropomorphism are deemed to be good things. For example, people with depression, social anxiety, or other mental illnesses can interact with emotional support animals. These animals are deemed to be a useful component of treatment.

In addition, believing certain anthropomorphic things about a computer, robot or chatbot may serve various psychological and even practical purposes. It can help in terms of understanding, as with metaphors and analogies in science. On the other hand, some views about computers, robots and chatbots are simply false, and dangerous too. One of the most controversial examples of the latter are the many cases of human persons who anthropomorphise chatbots because they are lonely.

Adults, Children and Chatbots

There’s a useful distinction to make here. That involves distinguishing basic anthropomorphism, which is exemplified by children (such as ascribing human characteristics to animals, robots, flowers, trees, etc.), from the more controversial examples of ascribing more abstract human characteristics (such as intelligence, consciousness and even emotion) to chatbots, robots, and AI generally. Some would argue that the second set is basically the same as the first, only more abstract and general. Others would argue that these examples aren’t anthropomorphic at all.

As just stated, some readers will be keen to draw a distinction between young children being anthropomorphic and adults being so. Children are very keen on cartoons of animals with human characteristics. However, it can be argued that even in the case of children this isn’t genuine or deep anthropomorphism. In a loose sense, aren’t most children, just like adults, simply suspending disbelief?

Experts explain the way children anthropomorphise in terms of their early socialisation. In detail: when young children come across entities or animals which aren’t human, they have little alternative but to anthropomorphise. On the surface at least, this puts them at odds with adults.

Examples

The Triangle Bully

One of the most extreme forms of documented anthropomorphism occurred way back in 1944. This was a study carried out by Fritz Heider and Marianne Simmel. The experiment was very simple. The researchers showed various subjects a two-and-a-half-minute cartoon of shapes moving around at various speeds and in different directions. The subjects were then simply asked to describe what they saw. They did so in terms of the shapes’ personalities and intentions. In terms of examples: the large triangle on the screen was described as being a “bully” because it was “chasing” the other two shapes. These latter shapes, on the other hand, were described as “tricking” the large triangle and attempting to “escape”.

The researchers interpreted the anthropomorphism in terms of the shapes’ movements having no obvious cause. In other words, the subjects took an intentional stance toward the shapes… Or did they? The intentional stance relates to the earlier discussion of suspending disbelief. It’s a predictive strategy described by Daniel Dennett. Taking up the intentional stance means that we interpret the behaviour of entities (chatbots, machines and humans) by treating them all as rational agents with beliefs and desires. Why do that? This is said to simplify complex systems by assuming they’ll act to achieve goals based on their beliefs and desires. This also ties in with the earlier remarks about anthropomorphism being “predictive”.

Here’s another mundane and even laughable example. Take the case of robots carrying out childcare or driving a car. Firstly, imagine a robot with a face and name driving a car or looking after a child. Now imagine a robot without a name, voice and face doing exactly the same thing. The anthropomorphic pull of the first robot is likely to be far stronger, even though both robots are doing exactly the same job.

The Evil AI Furby

Let me refer back to a previous essay in which I discussed an AI Furby. The presenter of a YouTube programme (called ‘ChatGPT in a kids robot does exactly what experts warned’) seemed to recognise the instinct for anthropomorphism in human beings. He said:

“I know that AI is only playing a character, but it may as well be real, you know, because people can still use it like that.
“Roleplaying is just putting an AI’s capability inside a character mask.”

So let me put the frequent anthropomorphism of this video in a little context. Take these words, which are also from the presenter:

“People [in the late 1990s] claimed their Furbies were giving them secret messages and listening to them.”

This is a reference to the (non-AI) Furby of 1998, some 14 years before the “AI revolution” of 2012. This highlights the anthropomorphic bent of the human species. In this case, the paranoia is very familiar. It was brought about by many people misunderstanding the toy’s technology, high-tech anxiety, and even the “demonic” reputation the 1998 Furby developed.

In terms of examples: when the batteries of Furbies ran low, their speech deepened and slowed down. That was the “demonic voice” and “death rattle”. There was also a concern that if you were “mean” to a Furby, it would “learn” to be mean back.

Conclusion

Two extremes can be cited when it comes to chatbots. On the positive side, chatbots can be used productively. On the negative side, some individuals fall in love with chatbots (or the roles they play under prompting). Arguably, even a positive use of a chatbot may include — or even require — a certain degree of anthropomorphism. This raises a point which was expressed above. If there’s a conscious (if not articulated) suspension of disbelief in these productive cases, is this still anthropomorphism?


Note:

Since many believe that emergence is real, then this isn’t a surprise. (Whether this is strong or weak emergence is another matter.) After all, with a vast amount of data, numerous abstract and physical operations, hardware, software, etc., then why can’t something strange or unknown happen to an AI entity which could be classed as emergent? That said, it needn’t be ontologically emergent, only epistemically so.

Monday, 16 March 2026

When Stephen Wolfram Debated Donald Hoffman

 


This essay is a response to the debate between the computer scientist and physicist Stephen Wolfram and the cognitive psychologist and popular science author Donald Hoffman. The debate was hosted by Curt Jaimungal, and it can be found on YouTube here. On the surface at least, the theories of these two men have much in common. However, on analysis, these similarities are only superficial. Indeed, Hoffman’s idealism often clashes with Wolfram’s more practical ideas. Hoffman wants consciousness to be “outside” the rules (Wolfram’s or anyone’s), while Wolfram wants the rules to be the “source” of everything, including consciousness.

Stephen Wolfram and Donald Hoffman

It’s worth noting that Stephen Wolfram seems to have had little knowledge of Donald Hoffman before this YouTube debate. Rather, he noted that people had told him that his own work and that of Hoffman were “related”. In addition, there is no mention of Hoffman on Wolfram’s website. However, there is a bare link to this debate on YouTube. As for the debate itself, that was initiated and hosted by Curt Jaimungal, who hosts Theories of Everything (TOE) on YouTube. The two met for the first time during this recorded session in June 2024.


Let Curt Jaimungal pick up on one of the similarities between Wolfram’s work and Hoffman’s work. In the YouTube debate, he says:

“Don [Hoffman], I don’t think you’re disagreeing with what Stephen [Wolfram] just said. Stephen, what you had said is that, look, we can start with something that’s simple, mechanically simple, and then get to something that is extremely mechanically complex, such that we would never think, looking at the complex case, that it could be made of these elementary elements. And Don is saying that’s correct.”

None of this is original to either Wolfram or Hoffman. Weak and strong emergence (if that’s what they’re talking about) are commonplace subjects in both physics and the philosophy of physics.

On the surface at least, some readers may wonder why Wolfram was at all interested in seeing if Hoffman’s “conscious agents” could be mapped (or “projected”) onto his own Ruliad framework. After all, Hoffman’s position is philosophical and idealist.

Wolfram Defends Large Language Models

One of the most relevant ways in which Wolfram states that rules can generate, well, anything and everything is when he mentions Large Language Models. He says:

“You [Hoffman] have rather dismissively said that my friends the LLMs are all merely regurgitating the things that went into them. But you claim that we are not.”

In very simple terms, Hoffman’s conscious agents do the work of Wolfram’s rules. Thus, to Wolfram, consciousness is generated by rules. To Hoffman, consciousness is fundamental.

What Wolfram says about not being able to use Hoffman’s theory probably applies to all physicists too. This is how Wolfram puts it:

“Is that definition of success transportable enough that I can really apply it to an LLM? And perhaps the answer will be, the LLM is not conscious. But right now you haven’t given me anything that is concrete enough that I can take it and fit it onto the LLM and say, ‘Do you win or do you lose?’”

During this debate, Wolfram kept trying to read through Hoffman’s jargon in order to see if there was a program he could run. He couldn’t find one. Alternatively, Wolfram wanted to know how Hoffman’s conscious agents could be “translated” into code.

Readers may wonder why Wolfram would ever have thought that Hoffman’s theory would be transportable to his own work. (Perhaps he never thought that.) Similarly, readers may wonder why Hoffman would ever have thought that Wolfram’s Ruliad would be transportable to his work. More strongly, one wouldn’t think that Hoffman would have any (relevant) interest in LLMs at all, save to say that they are “icons”.

All this raises the possibility that the similarities between Wolfram’s work and Hoffman’s work are merely superficial or surface-level. Sure, both men use graph theory, and both believe that (to use Hoffman’s words) “spacetime is doomed”.

So despite all the mathematics in Hoffman’s papers, Wolfram, and I suspect most physicists, can’t use his theory. Oddly, Hoffman kind of admits this himself when he responded in the following way:

“So I owe you a mathematically precise theory of consciousness, a scientific theory of consciousness that could try to do that kind of thing. [ ] It uses Markovian dynamics in the model. And what we’re doing right now is to try to answer your question.”

Actually, Hoffman doesn’t “owe” Wolfram anything more. Doesn’t he claim to already have a “mathematically precise theory of consciousness”? Unless, that is, Hoffman simply means that Wolfram needs to actually read his papers before commenting in detail.

Hoffman’s Leibnizian Idealism

Hoffman puts his idealist (or consciousness-first) philosophy in the following:

“What I do know is that consciousness is what I know firsthand. What I call inanimate matter is an extrapolation. What’s directly available to me are experiences, conscious experiences, and what I call an unconscious physical world is an extrapolation that I’m making. What I only have are my conscious experiences. I have nothing else.”

In turn, Wolfram picks up on Hoffman’s idealism in the following passage:

“Do you believe that if I could accurately measure the electrochemistry of the nematode that I would capture the whole story? Or do you believe that there’s something that is beyond the physical that’s not capturable by any physical measurement that is something about what the nematode feels?”

Now let Wolfram put Hoffman’s position on consciousness. He states: “So the claim is that there’s a spark of consciousness that can simply not be reached mechanically.”

Hoffman explains his idealism and offers us his either/or logic:

“If we don’t assume that consciousness is fundamental in the foundations of our theories, then we either have to dismiss consciousness and say it’s not there, or we have to give a theory in terms of unconscious entities about how consciousness emerges.”

Many scientists and philosophers have indeed dismissed consciousness. Similarly, we do have to give a theory (at least partly) in terms of unconscious entities. (Readers could embrace panpsychism!)

Hoffman claims that “it’s not logically possible to start with unconscious ingredients and to have consciousness emerge”. Why “logically impossible”? It’s here that Leibniz enters the picture.

Hoffman doesn’t only replace Leibniz’s monads with his own conscious agents: he’s also motivated by Leibniz’s claim that consciousness cannot come from “unconscious ingredients”. In this debate, Hoffman admits that his conscious agents basically do the work of Leibniz’s monads. (This is the first time that I’ve come across Hoffman actually saying that.)

Why bring up Leibniz at all?

Well, for one, Wolfram brings Leibniz up in direct response to Hoffman’s claim of logical impossibility. (Wolfram’s own unconscious ingredients are his “simple rules”.) Wolfram says:

“If you’d asked me in 1980, do I disagree with Leibniz’s intuition? I would have said, I don’t know. I don’t know how you would get a mind-like thing to arise from a non-mind-like sort of origin. But then, by 1981, I was starting to do all kinds of computer experiments about what simple rules can actually do. And it really surprised me. In other words, what could emerge from something that seemed like it was too sterile to generate anything interesting, I was completely wrong.”

One basic point to extract from all the above is that Wolfram believes that Hoffman relies on intuition when he makes his claim about the logical impossibility of consciousness arising from non-conscious ingredients.

Hoffman Believes Modern Physics Fails. Long Live Postmodern Physics

To Hoffman, physics is incomplete. It’s incomplete because it doesn’t include consciousness. However, this isn’t only about physics failing to explain consciousness: it’s also about physics failing to incorporate consciousness. Thus, consciousness is primary to Hoffman, yet he states “that is not what has been the observation of the last few hundred years of science”.

Hoffman also asks: “In the case of conscious experience, is it enough to merely talk about kind of the laws of physics that we know?” He adds: “There is no meaningful science that can be done without entraining consciousness in it.”

So does Hoffman actually offer us an alternative physics? Yes.

Hear Hoffman out as he uses a lot of technical terms in a short space of time:

“The high-energy theoretical physicists in the last 10 years have discovered these positive geometries beyond spacetime and quantum theory. And behind those positive geometries, they found these combinatorial objects that classify them. They’re called decorative permutations.”

What is the relevance of Hoffman’s talk of “positive geometries” and “decorative permutations” to his idealism? He explains:

“So we’ve taken off the headset, the space-time headset, and we’ve gone outside for the first time, and we’re finding these obelisks, these positive geometries outside of spacetime and these combinatorial objects.”

The important word here is “headset”. Spacetime is a headset. It’s not, well, reality.

In terms of “empirical tests”, if not predictions, Hoffman does explain himself. His claim is that his theories can be tested, and they do (or can) include predictions. Yet that’s simply because his metaphysical speculations are “projected” onto already-existing physics. Relatedly, Wolfram himself seems to suspect that if Hoffman’s maths were to ever actually work to, say, predict a particle, it would only be because it had recreated the computational graphs that Wolfram is already studying.

In Hoffman’s own words:

“What we’re trying to do is to show that we could get all of physics, plus more, from a theory of conscious agents being assumed to be fundamental outside of spacetime and projecting through decorative permutations positive geometries into space-time where we can make our empirical test.”

Again, Hoffman’s metaphysics is projected onto spacetime and the real world of physics. Therefore, the tests and predictions Hoffman cites will fall within the domain of physics, not his own metaphysics.

Idealism as a Use of Occam’s Razor

Wolfram picks up on Hoffman’s use of the words “Occam’s razor”, which is interesting because panpsychists use this term too — or at least they discuss the parsimonious nature of panpsychism. (Other philosophers who advance other isms do so too.) In Hoffman’s case, starting the whole show with conscious agents may well seem to be a particular use of Occam’s razor. In other words, boiling the whole of physics, spacetime, trees, biological persons, brains, etc. down to conscious agents and their interactions — in an ultimate example of reductionism! — does seem ontologically parsimonious. Let’s see how Wolfram puts it:

“Let me see if I understand you [Hoffman] correctly. In the same way that we observe general relativity because of the kinds of observers we are in the Wolfram model, and in the same way that we see quantum mechanics because of the kinds of observers we are in the Wolfram model, we also, many people, many philosophers, many cognitive scientists, for instance [ ].”

Hoffman uses the term “Occam’s razor” and says (in Wolfram’s words) “look, we can move beyond spacetime and we can find something that can give rise to the physics that we have”. But then Wolfram believes he’s found a self-referential trap. He concludes by saying “Occam’s razor itself may be something that we find appealing because of the kinds of observers we are”. If consciousness is literally everything, then any use of Occam’s razor is solely down to consciousness too. Thus, Hoffman isn’t using Occam’s razor to get to the fundamentals of physical theory and reality, but to analyse the consciousnesses which have given birth to the notion of Occam’s razor.

Much more broadly, Wolfram argues that if mathematics is part of what Hoffman calls the “headset”, then using it to build a fundamental theory is self-referentially problematic.

Hoffman’s Conscious Agents

Hoffman says that it’s more correct to say that “observers that have conscious experiences” are the fundamentals or building blocks of his theory. But, surely, an observer is over and above (mere) consciousness. Perhaps because of that, Hoffman himself says, “If you imagine an observer that has no conscious experiences, it’s not really clear what we’re talking about.”

What do these conscious agents do? Hoffman explains:

“So it’s like a network of interacting conscious agents. So it’s a social network, and it’s governed by Markovian dynamics.”

Hoffman says that “it’s like” a network of interacting conscious agents. Some people who’ve read Hoffman will have thought that it literally is a network of conscious agents. Isn’t that the whole point of the graphs and schematics in Hoffman’s work — that we literally have a network of interacting conscious agents?

Two other words are odd too: “governed by” (as in “governed by Markovian dynamics”). Don’t Markovian dynamics describe or map the network, rather than govern it? Hoffman makes it seem as if the (Markovian) map is more important than the territory (i.e., the network of conscious agents). Thus, is Hoffman’s map calling the shots?

Yet Hoffman himself does use the word “describing” elsewhere in this YouTube debate when he claims that the

“Markovian kernel is basically describing, given that my current experience is red, what’s the probability the next one will be green and so forth, and you can write down a matrix of it”.

This passage is astonishing. Even though Hoffman says that the Markovian kernel is describing (rather than governing) stuff, it’s still hard to make sense of his claim. The very sentence “given that my current experience is red, what’s the probability the next one will be green” strikes me as being bizarre, almost surreal. (Sure, there may have been a lot of work elsewhere to explain this move from an experience of red to an experience of green, but I haven’t seen it.) And what work is mathematical probability doing here?
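Hoffman’s remark about “writ[ing] down a matrix of it” is at least concrete enough to sketch. Below is a minimal, purely illustrative Markov kernel over colour experiences. The states and transition probabilities are invented for this example — they aren’t taken from Hoffman’s actual models — but the sketch does show the sense in which such a kernel describes (rather than governs) a dynamics:

```python
import numpy as np

# Hypothetical states of experience (invented for illustration,
# not taken from Hoffman's work).
states = ["red", "green", "blue"]

# A Markov kernel: row i gives the probability distribution over
# the *next* experience, given that the current experience is
# states[i]. The numbers here are made up.
kernel = np.array([
    [0.1, 0.6, 0.3],   # current experience = red
    [0.2, 0.2, 0.6],   # current experience = green
    [0.5, 0.3, 0.2],   # current experience = blue
])

# Each row must sum to 1 for this to be a valid Markov kernel.
assert np.allclose(kernel.sum(axis=1), 1.0)

# "Given that my current experience is red, what's the probability
# the next one will be green?" -- read it straight off the matrix.
p_red_to_green = kernel[states.index("red"), states.index("green")]
print(p_red_to_green)  # 0.6

# The kernel merely *describes* the dynamics: iterating it yields
# the distribution over experiences after n steps.
start = np.array([1.0, 0.0, 0.0])      # certainly "red" now
after_two = start @ kernel @ kernel    # distribution two steps later
print(dict(zip(states, after_two.round(3))))
```

Whether this toy matrix genuinely illuminates Hoffman’s move from an experience of red to an experience of green is, of course, exactly what’s at issue in the passage above.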

What follows this is radical, and hard to understand. Hoffman links his talk about Markovian dynamics and conscious agents to things beyond spacetime. In Hoffman’s own words:

“What we’re doing then is saying, can we take this Markovian dynamics and first show that we can project onto the decorative permutations that the physicists have found, and then from there project onto the positive geometries?”

Here the reader will need to know what “decorative permutations” and “positive geometries” are. The reader will then need to know how Hoffman is using the word “project” (as in “we can project onto the decorative permutations” and “project onto the positive geometries”). More importantly, what philosophical work are these projections doing? (Hoffman does say that a “projection” is a “dramatic simplification of the more complex, yet more unified, dynamics of [conscious agents]”.)

Hoffman then brings all the above back to consciousness and how it impacts on his view of spacetime when he concludes:

“We can project all the way into spacetime, and then we would actually be able to make testable predictions inside space-time from a theory that says consciousness is fundamental, and we start there.”

Some readers may have absolutely no idea how “predictions” fit into all this. What kind of predictions is Hoffman talking about? However, as already stated, Hoffman’s metaphysics is projected onto spacetime and the real world of physics. Therefore, any predictions he or others make will fall within the domain of physics, not Hoffman’s own metaphysics.

Conclusion: Hoffman’s Pythagoreanism

Wolfram sums up one of the main problems he has with Hoffman’s theory of consciousness in a single clause. He states that he’s

“hoping that there’s more to consciousness than Markovian matrices, because that’s a shockingly minimal kind of view”.

Considering that Wolfram is a mathematician and Hoffman isn’t (though Hoffman may well have used Markovian kernels, probability theory, etc. in his previous work in cognitive psychology), it’s ironic that Wolfram spots a kind of Pythagoreanism (without actually using that term) in the words of Hoffman. Wolfram also makes the point that he’s “never [been] a believer in theories that have [mathematical] probability as a fundamental component”.

Wolfram wants to use Hoffman’s maths as a way to describe how an observer processes the universe. However, it’s easy to conclude that Hoffman himself tacitly believes that the maths somehow creates the universe… Yet that belief isn’t idealism! This means that Hoffman balances on the line between Pythagoreanism and idealism, and that’s largely because, as a scientist, he became intent on using mathematical models to justify his metaphysical idealism.

The bottom line here is that there’s a certain triviality in Hoffman’s use of mathematics to describe (or justify) his idealism. That’s because, in a strong sense, almost anything can be mathematicised.