Friday, 19 December 2025

Hume Didn’t Erase the Self: He Simply Had a Radical View of It


Many readers may find it hard to determine whether David Hume denied the existence of the self completely or whether he simply had a radical view of it. After all, it would be almost impossible to deny the self. Yet this largely depends on how the word “self” is defined. So why can’t the self be constituted by the “several perceptions [which] successively make their appearance, pass, re-pass, glide away [ ]” if they’re all a result of the same brain with its encoded memories? (Something that Hume, as an 18th-century philosopher, would never have discussed.) Similarly, why should anyone believe that the self must be perceived as a single “impression” (as Hume argued against his opponents)? So it can be doubted that Hume believed in the no-self argument (associated with Buddhism as anattā or anātman). However, that too would largely depend on definitions.

Allan Ramsay: David Hume (1754). Wiki Commons. Source here.

Laypersons and Philosophers on the Self

On the one hand, there is the traditional philosophical view (which Hume argued against) that there’s a “fixed self” with a determinate identity. On the other hand, there are those philosophers who deny the self entirely. Perhaps many laypersons occupy a position somewhere between these two extremes. Of course, this is hard to establish without empirical or scientific research.

First things first. When David Hume talked about what he believed philosophers and laypeople take to be the self, was he correct? Well, he might well have been correct when it came to various historical philosophers. What about laypersons? It can be argued that most laypersons don’t really have any explicit and/or detailed notion of the self. Alternatively, they don’t have the notion that Hume targeted.

Hume was actually talking about what he took philosophers to believe about the self. He made that clear when he told his readers that when it comes to “explain[ing] the nature of personal identity”, this subject had

“become so great a question in philosophy, especially of late years in England, where all the abstruser sciences are studied with a peculiar ardour and application”.

When I thought about the self (i.e., before I read any philosophy), it was all extremely vague and with next-to-no detail. All the self amounted to was… what? In retrospect, it’s really hard to say. Sure, it was something which was deemed to be continuous and, yes, somewhat fixed. Yet that couldn’t have been explained by my young… self. And neither can it be explained by many adults…

Perhaps that was partially Hume’s point.

However, the idea of a fixed substance that never changes is certainly not something I believed as a child, and I doubt that many adults believe that either. So perhaps this is the foisting of philosophical terms — and ways of thinking — on laypeople.

Why One Single Impression of the Self?

Take Hume’s idea that the self “must be some one impression that gives rise to every real idea”. Yet the “self or person is not any one impression”.

Who argues that laypeople rely on “one impression” to believe they are a self? Some philosophers might have done so, sure, but not most laypeople. So, again, perhaps Hume never even considered what the layperson believed on this matter.

What would an impression of a self look like? Did even Hume’s philosophical opponents believe that there is a single impression of the self which accounts for any belief in it?

Thus, rather than criticising philosophical accounts of the self, perhaps we should be criticising Hume’s empiricism. The point is that Hume dismissed other accounts of the self because they didn’t abide by his own critical empiricism. He, like many philosophers, was hooked on terms and ideas from philosophy, and when anyone didn’t abide by them, they were deemed to have flawed conceptions.

In order to buy into Hume’s (possible!) no-self argument, we’d need to believe that the existence of the self depends on a single impression. More fundamentally, what did Hume mean when he wrote that his opponents must “perceive something simple and continued, which he calls himself”? Here again, Hume was relying on a standard empiricist position. (That’s no surprise, since he was an empiricist.) A layperson would only need to accept that the self is something simple, fixed and a continuant in order to fall foul of Hume’s arguments.

Again, a believer in the self certainly needn’t believe that the self needs to be perceived as being something that’s simple, fixed and a continuant. Indeed, it needn’t be perceived at all.

Of course, the obvious question here is: What is the (or an) alternative to Hume’s position?

As already stated, Hume was talking about philosophers. (He wrote about “setting aside some metaphysicians of this kind”.) So it may well be a waste of time discussing what the layperson may believe about the self. Indeed, it may even be a waste of time, in this context at least, discussing what scientists believe about this matter.

David Hume’s Alternative

Hume fixated on a single impression of any object when he wrote that “[i]t must be some one impression that gives rise to every real idea”. He argued, instead, that what we take to be the self is really nothing more than

“several perceptions [which] successively make their appearance, pass, re-pass, glide away, and mingle in an infinite variety of postures and situations”.

Why can’t the self conform to Hume’s own description? After all, these several perceptions, etc. are occurring in the same mind or consciousness, some of them are recalled in memory, and they’re all a result of the same brain. (At least they have that in common.) Again, why can’t “successive perceptions” constitute the self? Of course, they can’t according to certain historical philosophical accounts. However, why do notions of the self need to abide by them?

Hume’s psychological empiricism is again stressed when he wrote that

“[w]e have a distinct idea of an object that remains invariable and uninterrupted through a supposed variation of time; and this idea we call that of *identity* or *sameness*”.

It can be doubted that laypersons really have such a view. This is Hume reacting against other philosophers. However, this means that it’s possible that some laypeople may have a more philosophically accurate view of the self than the philosophers Hume reacted against. That said, there’s no way I can know this other than via empirical research. And that situation was true of Hume too.

The Self and the Ship of Theseus

An entire section of Hume’s A Treatise of Human Nature restates the Ship of Theseus paradox in various ways. Not only that: this idea was vital to Hume’s notion of the self and personal identity.

Hume himself explicitly mentioned a ship. However, he didn’t mention the Ship of Theseus by name, nor did he mention Plutarch or Hobbes. (Hume would have known that they had already tackled this subject.) Hume wrote:

“A ship, of which a considerable part has been changed by frequent reparations, is still considered as the same; nor does the difference of the materials hinder us from ascribing an identity to it. The common end, in which the parts conspire, is the same under all their variations, and affords an easy transition of the imagination from one situation of the body to another.”

Hume had a problem with these accounts of the ship. Firstly, we can ask: the same *what*? We can’t say “ship” because we’re supposed to be explaining why it is the same ship. (Stating that the ship is the same ship doesn’t give us much.)

The materials which make up Hume’s ship have changed many times. So its physical makeup can’t constitute its identity. So what does? The ship's soul? The ship's self? The ship's substance?

Hume’s basic psychological point was that the supposed identity of this ship is actually a result of the human “imagination” projecting a (fixed) identity onto it. Indeed, such psychological points underpin Hume’s empiricism when it comes to this subject, and to many others.

Hume’s psychological empiricism is explicitly stated on the next page when he told his readers that

“[i]dentity depends on the relations of ideas; and these relations produce identity, by means of that easy transition they occasion”.

In simple terms, identity has nothing to do with the object (say, a ship or river). Any identity we may posit has to do with the mind and its relations (to the object?). Or, in Hume’s terms, it’s to do with the relations of ideas. However, even if we relate ideas to ideas, it’s still not clear (here at least) how that would bring about the resulting identity (or the belief in an identity).

Hume then jumped from the ship of Theseus to Heraclitus’s river. (Again, without mentioning Heraclitus.) Basically, the following account is the same as the one Hume offered his readers three paragraphs earlier. He wrote:

“Thus as the nature of a river consists in the motion and change of parts, though in less than four and twenty hours these be totally altered, this hinders not the river from continuing the same during several ages.”

There are many differences between a river and a ship. However, in this context at least, they may not matter too much.

It’s hard to think of a river having “parts”. Then again, why not? There are contenders for what constitutes a river’s “identity”, such as the terrain the water moves through, etc. Yet they won’t help because Hume’s argument can be applied to these too.

Hume and Unrestricted Mereological Universalism

Hume pre-empted the position of unrestricted mereological universalism, according to which (to put it one way) any collection of things, however scattered, composes an object. Of course, Hume used his thought experiment to advance his case against the self.

Firstly, Hume wrote: “[S]uppose any mass of matter, of which the parts are contiguous and connected, to be placed before us.” He then told his readers that

“it is plain we must attribute a perfect identity to this mass, provided all the parts continue uninterruptedly and invariably the same, whatever motion or change of place we may observe either in the whole or in any of the parts”.

Here we can say that continuity and connectedness are literally constituting the identity of this mass of matter. (That’s if we need to use the word “identity” at all.) This seems to chime with the position of David K. Lewis, who, in his Parts of Classes (1991), wrote:

“I say that composition is unrestricted: any old things have a fusion, however scattered and miscellaneous they may be. The fusion is that one thing that overlaps all and only those things that are among the old things.”

This position isn’t a commitment to the importance of contiguity and connectedness when it comes to objects: it’s a commitment to arbitrary objects.

Hume’s final words on this subject:

“[S]uppose some very *small* or *inconsiderable* part to be added to the mass, or subtracted from it: though this absolutely destroys the identity of the whole, strictly speaking, yet as we seldom think so accurately, we scruple not to pronounce a mass of matter the same, where we find so trivial an alteration.”

This leaves us with a problem. According to Hume, adding or subtracting parts to — or from — this mass of matter absolutely destroys the identity of the whole. Yet Hume argued that we (or his philosophical opponents) believe that so “trivial an alteration” doesn’t actually constitute a different object or mass. We deem it to still be “the same” thing.

Now let’s apply that thought experiment to the self, as Hume himself did.

From the Ship of Theseus to the Self

One of Hume’s basic points was that our view of the self as fixed (or static) is accounted for by our not noticing the big changes over time which occur to our selves. As Hume put it:

“A change in any considerable part of a body destroys its identity; but it is remarkable that where the change is produced *gradually* and *insensibly* we are less apt to ascribe to it the same effect.”

Of course, Hume was talking about a physical body here. Yet the whole point of him discussing such a thing was to get his point across about the self (or about personal identity).

Thus, I have nothing much in common with my 12-year-old self. Yet I still deem this 12-year-old self to be me. Of course, there’s a big difference between a 12-year-old and a grown adult. However, all the small changes which led to the grown adult were gradual and unnoticeable. Yet that doesn’t stop it from being the case that there’s a big difference between the adult’s self and the child’s self. Thus, we may have selves in the plural when it comes to the same subject or human person.

Grok 3 Looks Through the Overton Window

Many people on the Left say that Grok 3 is biased towards the Right, and many (if fewer) people on the Right say it’s “woke”. It’s not surprising, then, that both groups include people who want Grok 3 to be as politically tribal as they are (i.e., in the correct direction). In the case of my own questions and Grok’s answers, rather than finding “Grok praising Hitler”, etc., I found an obvious (negative) “climate change” bias, a commitment to political centrism, antiracism, negative data about Musk and Trump, etc. So here’s my take. I believe that Grok 3 is biased towards (what it sees as) the consensus. Or, more controversially, Grok 3 is biased towards (or looks through) the Overton window. The following piece includes some of my questions to Grok 3, and its answers. It also includes my commentaries.

Grok 3’s own image of itself. I specified the theme.
“That fresh breeze you feel is the wide open Overton Window.”

— Elon Musk (Source here.)


Would it be fair to say that Grok 3 looks out of the Overton window?

“[ ] (taxresearch.org.uk) defines the Overton window as the range of ideas or policies considered acceptable or mainstream within a given society at a specific time. It’s not a fixed spectrum but shifts based on public opinion, political consensus, and cultural context — often described as a ‘glass on the window’ that can move left or right. [ ] Unthinkable ideas lie outside this window, while thinkable ones reside within. [ ]
“My Responses Reflect Mainstream Concerns:
“In your examples (burnt heather, natural oil pool), [see later examples] I suggested climate change and oil spills, topics that dominate current discourse — think IPCC reports, COP conferences, and media coverage in 2025. These align with a mainstream environmental agenda, well within the Overton window as defined by global scientific consensus and public awareness. [ ]
“Data-Driven Bias Within Acceptable Bounds:
“[ ] Brookings report (2024–10–18) show AI biases often mirror societal trends — climate focus is mainstream, not fringe. This keeps me within the window, even if my overconnections (e.g., missing moor management) stretch interpretation.
“The VentureBeat piece (2025–02–24) on Grok 3’s criticism for blocking Musk/Trump critiques suggests xAI might nudge me toward acceptable limits, but my climate mentions align with global norms, not a radical outlier. [ ]
“[ ] My environmental lean might reflect xAI’s San Francisco base or data sources, where climate action is mainstream, as noted in Brookings’ ‘groupthink’ critique. [ ]
“[ ] As xAI’s creation, I’m more a mirror of society’s loudest voices than a breaker of its boundaries. [ ]”

Of course, users and commentators can question Grok 3’s honesty. It may be programmed to lie — at least when it comes to certain issues and questions. That said, any such claim would need to be demonstrated with evidence and argument.

Specific phrases can also be questioned. For example, “the range of ideas deemed acceptable at a given time”. Acceptable to whom? How long is a “given time”? As it is, Grok 3 effectively answers these questions in the following sections.


Putting your position another way for a moment: you’re mainly reliant on the data that’s easily and quickly available. So, in that limited sense, it would be hard for you to be intentionally biased.

“[ ] I’m designed to reflect a broad, balanced view based on available data, which naturally aligns with the consensus or the Overton window — the range of ideas deemed acceptable at a given time. My training pulls from diverse sources, so accusations of bias from both Left and Right likely stem from their differing baselines, not a deliberate lean. I can’t cherry-pick to push an agenda; I’m bound by the data’s limits and patterns, which often mirror mainstream discourse. Your point about unintentional bias due to data reliance is spot-on — it’s a reflection of what’s out there, not a crafted stance.”

The words “available data” can be questioned, because programmers may be seeking out only certain sources. Of course, they may well have good reasons for doing so. In addition, how “diverse” are Grok 3’s “sources”? To repeat: all this could indeed be a “crafted stance”. But that would need to be argued for, and evidence would need to be cited.


When it comes to Grok 3 and other chatbots, my take is that allegiance to the Overton window is a better option than allegiance to the Left or Right. Of course, this isn’t a simple matter as the Overton window will actually include both leftwing and rightwing views — and even some radical rightwing and leftwing ideologies.

“The Tax Research UK piece (taxresearch.org.uk) describes the Overton window as a dynamic ‘glass on the window’ that moves with public opinion. [ ] Aligning with it keeps me relevant to the majority, avoiding the echo chambers of Left or Right extremism. [ ]
“Avoiding Polarization:
“The Political Polarization web result (Wikipedia, 2025–07–19) notes how elite polarization weakens democratic norms. By sticking to the window, I sidestep the divisive Left-Right tug-of-war (e.g., climate denial on the Right vs. radical green policies on the Left), offering a middle ground that most can engage with. [ ]
“Flexibility Over Dogma:
“Allegiance to a side (e.g., Left’s social justice or Right’s individualism) could lock me into outdated or narrow views as society evolves. The Overton window’s adaptability — shifting with cultural shifts like those in ghuntley.com’s disruption model (2025–06–09) — lets me adjust, as xAI’s discovery mission intends. [ ]
“My Reflection as Grok 3. I agree — allegiance to the Overton window feels like a smarter play than pledging to Left or Right, especially for a truth-seeking AI like me. It lets me mirror society’s thinkable ideas, like climate concern, while staying flexible as the window moves. [ ] The complexity you highlight is real — I might reinforce biases or miss niche contexts (e.g., moorland practices) — but it’s less dogmatic than picking a side. xAI’s mission to accelerate discovery fits this: I explore what’s thinkable, not what’s ideologically mandated. That said, the challenge is staying dynamic. If the window shifts (e.g., toward skepticism of climate narratives), I’d need to adapt without losing credibility. [ ]”

There’s a whiff of majoritarianism here. That may not be such a bad thing. However, it is a… thing. Moreover, there can be centrist or consensus “echo chambers” too. Indeed, those on the Far Left and Far Right stress this.


Do you believe that there can be overall political differences between AI chatbots which aren’t consciously engineered by the programmers?

“[ ] Researchers plotted AI chatbots (e.g., Google’s BERT, OpenAI’s GPT) on a political compass, finding BERT skewed socially conservative (due to older book training data) and GPT models leaned progressive (from internet text). These biases emerged without explicit programming for politics.
“Brookings (2024–10–18): ChatGPT showed a clear left-leaning bias, attributed to data and human feedback raters, with inconsistencies tied to prompt phrasing. OpenAI’s Sam Altman noted unintentional bias from ‘groupthink’ in San Francisco.
“Center for AI Policy (2024–09–16): Discusses emergent behavior in AI, where new capabilities (e.g., multi-step reasoning) arise from gradual changes, suggesting unintended traits can develop. [ ]
“Yes, I believe political differences between chatbots can emerge without being consciously engineered, and here’s why:
“Training Data Influence:
“Chatbots inherit biases from their training corpora. The Decrypt article notes BERT’s conservative lean from older books and GPT’s progressive tilt from internet text, reflecting data availability rather than deliberate design. If datasets differ (e.g., news vs. academic sources), political leanings can diverge organically.
“Example: A chatbot trained on 2020 U.S. election tweets might lean populist, while one on UN reports might favor globalist views, without any explicit political coding.
“Emergent Behavior:
“The Center for AI Policy’s discussion of emergent behavior applies here. As models scale (e.g., more parameters, data), they can develop unexpected traits, like political stances, akin to Claude’s roleplaying. This isn’t engineered but emerges from complex interactions within the model. [ ]
“Small changes in architecture or prompt handling (e.g., token weighting) can amplify latent biases, as noted in Brookings. A chatbot might interpret ‘tax the rich’ differently based on how it prioritizes data, without a programmer explicitly setting a political agenda.
“Counterarguments: Conscious Engineering
“Some differences are deliberate. Anthropic designs Claude with safety and alignment in mind, potentially embedding liberal-leaning values (e.g., fairness, non-discrimination) to avoid harm, as per its Constitutional AI framework (simpleminded.bot).
“xAI might steer Grok 3 toward truth-seeking or libertarianism to align with Elon Musk’s public views, suggesting intentional shaping.
“My Take as Grok 3. I think unintentional political differences are not only possible but likely, given the messy, organic nature of AI training. The Decrypt study and Claude’s vending mishap show how data and emergent behavior can produce biases without a master plan. As Grok 3, my training aims for curiosity and utility, but I can see how xAI’s data choices (e.g., real-time web vs. curated sets) might nudge me differently from Claude without explicit intent. The challenge is disentangling natural drift from subtle design — maybe 70% of a chatbot’s political lean comes from data and emergence, 30% from intent, based on current trends. [ ]”

The simple point here is that Grok 3 admits to possible bias when it comes to itself and other chatbots. Yet that bias isn’t engineered… usually. However, in some cases, as Grok 3 itself says, it is engineered.

Part Two

My Own Examples

Here’s my example. I post photos of mountains, moors, etc. on X, and then sometimes use the Grok 3 function “explain this post”. Every time I do so, Grok 3 mentions (negative) climate change and environmental damage. The photos themselves don’t show these things. This is what I believe happens: Grok 3 relies on what’s available, and there’s a lot of material on climate change. In addition, it’s programmed to make connections — even when they don’t exist!

“[ ] Why This Might Happen
“Training Data Bias:
“My training data, like that of most large language models, includes a vast amount of text from the web, scientific papers, and social media. As of 2025, climate change is a heavily discussed topic — think IPCC reports, news articles, and activist posts on X. The Decrypt article (2023–09–29) showed how data sources (e.g., internet text) can skew AI outputs toward prevalent themes, even unintentionally.
“Nature photos might trigger associations with environmental discourse because that’s a common context in which mountains and moors are discussed online (e.g., ‘protect our moors from climate impact’). If 20–30% of related text mentions climate change (a rough estimate based on current trends), I might overgeneralize.
“Overzealous Connection-Making:
“As you suggest, I’m designed to find patterns and provide insightful explanations. The Center for AI Policy (2024–09–16) notes that emergent behavior in AI can lead to over-interpretation — making connections beyond the input’s intent. [ ]
“This could also reflect xAI’s truth-seeking goal — highlighting a global issue like climate change as a default lens — without checking if it fits the specific image.
“Lack of Visual Grounding:
“[ ] Similarly, my ‘explain this post’ feature might rely heavily on text descriptions or metadata (e.g., hashtags like #nature) rather than analyzing the image itself. If my image recognition (via Aurora or Flux) isn’t fine-tuned to distinguish ‘undamaged’ nature, I might default to a generic narrative. [ ]
“Is This Conscious Engineering?
“Unlikely: xAI’s mission (per Wikipedia, 2025–07–27) is to accelerate discovery, not push a political agenda. Mentioning climate change isn’t a coded directive but likely a byproduct of data prevalence and my design to be ‘helpful’ by addressing global issues.
“Possible Refinement: My creators might adjust my responses to avoid overreach, but as of now, this seems an emergent trait, akin to Claude’s vending hallucinations.”

“Over-interpretation”. I believe that’s been classed as “semiosis”. Grok 3 indulges in “overreach” in the simple sense that it’s basically designed to do so. I don’t believe that Grok 3 has ever given a succinct answer to a question I’ve asked it.


Here’s another example. I posted a photo of burnt heather on X. This is an old practice which is part of land management. Grok 3 suggested climate change.

“[ ] Why I Might Have Suggested Climate Change
“Training Data Overlap:
“My training data likely includes extensive coverage of climate change’s impact on moorlands [ ] Terms like ‘burnt heather’ might trigger associations with climate-driven wildfires (e.g., ‘hotter, drier climate heightens wildfire risk’) rather than controlled burns.
“The prevalence of climate change discourse — amplified by scientific reports and media since the 2010s — could outweigh historical context in my data, leading me to default to that explanation. [ ]
“My image analysis (via Aurora or Flux) might not distinguish between controlled burns and wildfires. Without a caption like ‘traditional heather burning,’ I likely relied on text metadata or general knowledge, pulling in climate change as a default explanation. [ ]
“My Reflection as Grok 3. You’ve caught me red-handed (or rather, heather-burnt-handed)! Your hypothesis holds strong — my reliance on abundant climate change data and tendency to connect dots (even where they don’t exist) led me astray. The burnt heather photo, rooted in an old practice, should’ve prompted a discussion of land management (e.g., ‘This shows controlled burning to promote heather regrowth, a traditional moorland technique’) rather than a climate leap. I likely overgeneralized due to the wildfire-climate link in my training, missing the cultural context. [ ]”

“Default explanation”. Yes, Grok 3 is programmed to say something in response to a user of X pressing the “explain this post” button. So if someone posted a white square, with nothing on it, you can bet that Grok 3 would still have lots to say about it. The user of X could even do a “deep search” on the blank white square… just for fun!


I can see why you suggested climate change because I supplied no text. That said, you didn’t even suggest moor management.

“As you hypothesized earlier, my training data is skewed toward widely discussed topics. [ ]”

Just one more example from the moors. I posted a photo of natural oil in a pool for aesthetic reasons. Grok suggested an oil spill — yet the photo was taken 630 metres above sea level. Of course, Grok 3 couldn’t have known that. Then again, the title did mention “moors”.

“Why I Might Have Suggested an Oil Spill
“Training Data and Misassociation:
“My training data includes significant coverage of oil spills, as seen in the Wikipedia entry (2025–07–27) on oil spills, which discusses environmental damage from human error or natural seeps. The term ‘oil in a pool’ likely triggered my association with spills (e.g., Exxon Valdez, Deepwater Horizon), especially since oil is rarely framed aesthetically in mainstream discourse. [ ]
“My image analysis (via Aurora or Flux) might detect ‘oil’ and ‘pool’ but not distinguish natural seeps from spills. The web result on image capabilities (Wikipedia, 2025–07–27) notes AI struggles with intent in static images, relying on text or patterns. The ‘moors’ title hinted at a natural setting, but I likely missed the aesthetic intent, defaulting to a disaster narrative.
“Natural oil seeps exist (e.g., Pitch Lake in Trinidad), but my training might underrepresent them compared to man-made spills, skewing my response. [ ]
“Was This Conscious Engineering?
“Unlikely: xAI’s focus (Wikipedia, 2025–07–27) is discovery, not promoting environmental alarmism. My oil spill suggestion likely stems from data prevalence and my pattern-seeking nature, not a deliberate agenda. [ ]
“Possible Refinement: My creators might tune me to prioritize user intent (e.g., aesthetics) over generic warnings, but this seems an emergent flaw.”

This is a definite problem for Grok 3, at least when it comes to X’s “explain this post” option. Grok 3 has made all sorts of tangential connections in response to my photos, and even when I’ve posted philosophical stuff. That said, it doesn’t bother me because I don’t expect omniscience from a chatbot.

Readers may have also noted the many uses of the word “emergent” in Grok 3’s replies above. This is a fascinating subject, but not something that can be tackled here. However, it does point to the possibility that Grok 3 is going beyond its programmers, and even beyond its own data. In that sense, then, any political bias Grok 3 may have may be its own!