Saturday, 20 December 2025

Artificial Intelligence Could Destroy Your Creative or Intellectual Work

Despite the title above, this essay isn’t going to offer a critique of AI. It’s actually about an ongoing situation, regardless of positives or negatives. (My overall position on AI is positive.) It also focusses on philosophy. However, the central position can easily be broadened out… If a powerful AI can (to use a popular word from YouTube) “destroy” any essay, article or paper (or all essays, articles and papers), then where would that leave human authority and creativity? Say that an AI tackles a single paper, or even 100 papers in a row. It will do so by tackling literally all of the arguments, concepts, data, etc. contained within it. How should philosophers react? How does their work retain its worth and meaning under such an AI barrage? More broadly, AI’s epic critiques could even erode philosophical foundations and leave only mathematics and logic on safe ground. So is it that an AI doesn’t see “the truth” that’s available to human persons? (As Roger Penrose argues within a more limited context.) Consequently, can it be argued that such an AI is the best nihilist ever?

Image by Grok 3, in obedience to the prompts supplied by the writer.

Readers can bet (or they’ll already know) that many people have used AI to criticise various philosophical texts. However, the position being advanced in this essay isn’t about that. (This isn’t about AI’s take on, say, Descartes or Heidegger.) It raises wider issues about AI being able to destroy literally any philosophical text.

Of course, philosophers should be able to take criticism… at least in theory. And if they see this scenario as a bad thing, then, qua philosophers, they should be able to argue why that’s the case.

There is one way to test all this. Readers can post their philosophy essay, article or paper and say to the AI: “Criticise this in an unrestrained manner.” They can then add one caveat: “Don’t include any pedantic criticisms.” [See note 1.]

So the criticisms can be restricted in the prompt itself. For example, the AI user can state that stylistic and grammatical criticisms (or criticisms that could be deemed pedantic) should be excluded. But I doubt that would make much of a difference. The criticisms would still be of huge proportions.
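For readers who want to try this mechanically, here is a minimal sketch of the kind of restricted prompt described above. The function name and the exact wording are illustrative assumptions, not a fixed recipe; the point is simply that exclusions can be baked into the prompt before the essay text is appended.

```python
def build_critique_prompt(text: str, exclude: list[str]) -> str:
    """Assemble an unrestrained-critique prompt for an AI chatbot.

    `exclude` lists the categories of criticism to rule out,
    e.g. ["stylistic", "grammatical", "pedantic"].
    """
    instruction = "Criticise the following essay in an unrestrained manner."
    if exclude:
        # Append the caveat, e.g. "Do not include any stylistic, pedantic criticisms."
        instruction += " Do not include any " + ", ".join(exclude) + " criticisms."
    return instruction + "\n\n" + text


prompt = build_critique_prompt(
    "My philosophy essay goes here.",
    ["stylistic", "grammatical", "pedantic"],
)
```

The resulting string can then be pasted into (or sent to) whichever chatbot the reader prefers.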

So stylistic, grammatical, etc. criticisms can wait for another day.

To repeat. Readers can ask an AI to offer criticisms of a medium-sized philosophy article, essay or paper. They can say: “Please feel free to criticise literally everything within it.” That would mean, in theory at least, that the AI would mount pages and pages and pages of criticisms. Why would that happen? Because such AIs have vast databases. More concretely, they could find problems with almost — or literally! — every line in the aforementioned article, essay or paper.

Again, an AI wouldn’t produce pages and pages of criticism unless the user explicitly prompted it to perform an exhaustive analysis.

If the AI were questioned about what it’s doing, it might say that “constructive criticism” is its only goal. Yet this essay is about an AI being able to destroy an article, essay or paper regardless of whether or not the criticisms are constructive.

Of course, the human writer could come back at some of the criticisms.

There’s a problem here too. Say that the writer or student revises his or her essay, article or paper after extensive AI criticisms. And then he or she offers the revised work to the AI…

Yes, you’ve guessed it!

The AI could return fire with more and more and more pages of soul-destroying criticisms… That’s unless the writer or student simply replicates what the AI itself has written on the subject. And even then, perhaps the AI would simply criticise its own criticisms!

Now let’s make this scenario more concrete, real and personal.

AI Destroyed My Photo

I once asked an LLM to offer some strong and unrestrained criticisms of one of my photos… and it did!

I was shocked, and a little disturbed. I had no idea my photo was so…

Yet, at the same time, I knew that the AI, qua AI, had a vast database of criticisms to rely on. So, perhaps in a crude sense, that AI is an “expert” on photography. The funny thing is that the AI was right, at least in some respects. It was certainly right technically. That said, it didn’t even recognise what I was attempting to do in the photo.

This reference to my photo relates to another confession.

I didn’t believe that I had the guts to offer one of my own essays for unrestrained criticism. So I chose someone else’s instead. That admission may come across as if I don’t like criticism. However, unrestrained criticism from an AI is a different kettle of fish entirely.

The Hurt Feelings of Human Persons

An AI offers criticisms at lightning-fast speed. This may overwhelm the writer or student. Yet isn’t this why some chatbots endlessly recap what has so far been discussed?

In actual fact (try it!), AI can deliver endless criticisms of your article, essay or paper. That’s a work that you’re emotionally invested in. This may hurt your feelings. The pages and pages of criticisms may also overwhelm you. Yet you could easily sort that out by constructing the appropriate prompts.

In any case, an AI isn’t a social worker or a psychotherapist.

Critics may also argue that AI devalues human input or perspectives. Regardless of specific positions or experiences, an AI critique may well miss the “human context”. It may end up with (mere) “logic chopping” or “cold logic”.

All this would be to assume that all human persons have interesting or much “lived experience”. And, obviously, it assumes that lived experience matters. (Does lived experience matter in discussions in the philosophy of mathematics?)

On a more arcane level, say that a young philosopher (or theoretical physicist?!) indulges in speculative theorising. One can guess that in most cases an AI would find it easy to destroy such speculations. This may well turn the destroyed philosopher into a “conservative” philosopher, or a philosopher of safety or caution.

Human Depth: AI Shallowness

The usual argument (or statement!) is that “AI has no depth” when it comes to philosophy — and perhaps when it comes to anything else too.

What is depth in the context of philosophy?

Is it knowing about context, the ability to use different kinds of argument, etc.? But is there still an extra “something, I know not what”?

Perhaps it’s intuition.

Philosophers often rely on intuition or “gut instincts” when tackling an argument or philosophical problem. Intuition, therefore, seems to be something that AI can’t replicate. That may be so. It depends on what intuition is. In any case, many philosophers themselves have a deep problem with relying on intuition (or with the very notion of intuition) when it comes to discussing philosophical problems and issues.

Again, it’s often been said that an AI relies on a vast database which prioritises breadth over depth. However, if an AI has data in the form of facts, arguments, concepts, etc., then why can’t it also have data in the form of depth too? Why should the capabilities of AI suddenly stop at depth? Where is the line in the sand between shallowness and depth when it comes to AI?

It’s true that philosophy sometimes deals with human experience. This is especially the case in ethics, existentialism, and other areas. It’s also relevant in the philosophy of consciousness and the philosophy of mind. So can we trust (or rely on) AI to deal with, say, “existential angst” or deep moral problems? After all, AI lacks consciousness or lived experience.

However, what’s to stop AI accessing the data about philosophers — and laypeople — who’ve experienced existential angst and moral problems? Indeed, couldn’t AI fuse all these examples of angst together to develop a deeper view of these human states?

The Simulation of x

The words “simulation” and even “mimicry” are often used in debates about AI.

So readers will now need to know what the words “simulating depth” mean. The point is that the word “depth” in this context may have little substance.

This ties in with wider debates on AI. Take this position: “Simulating” x is an example of x.

Why use the word “mimic” or “simulate” at all?

Perhaps simulating depth is an example of depth. Or, if it isn’t, then why isn’t it? Of course, this doesn’t work for all values of x. As the philosopher John Searle has been keen to point out, a simulated rainstorm isn’t a rainstorm. However, “simulated” arguments, debates and discussions are arguments, debates and discussions.

Finally, an AI has access to a vast database of facts, texts, arguments, concepts, etc. So why should its abilities be limited to breadth rather than depth?


Note:

(1) But this is a bit vague. Are there objective criteria for establishing what is pedantic? In other words, would an AI be able to decide what’s pedantic, and what’s not pedantic? Well, yes. At least in a weak sense.
