Concerns about ChatGPT’s moral status are greatly exaggerated
LLMs write beautifully about suffering, but they're not suffering – they're choosing statistically probable words. Meanwhile, an octopus avoids painful chambers, remembers, anticipates. One manipulates symbols; the other experiences genuine awareness.

I grew up consuming enough science fiction to believe that machines could have sentience and consciousness, and could deserve moral consideration beyond what we'd give a toaster – which, unless we're crossing from sci-fi to fantasy, is and remains an object.
Here's what I'm convinced of: you don't need a biological brain with electrical impulses and neurotransmitters to have consciousness. But I'm also convinced that Large Language Models like ChatGPT, Claude, Gemini, or DeepSeek don't have it. Look, they're intelligent. They manipulate text in incredible ways, doing things only humans could do before (and, obviously, doing things humans cannot do, like reading a book in seconds). But they don't understand what they're writing – never mind being conscious.
I know it sounds cliché – "they write but don't think" – like Heidegger's observation that technology does not think. But that's how LLMs work. They don't have a world model. They don't know that words point to things in the world – only how words relate to each other. Which is enough to do plenty of things in our text-based society. As the philosopher Maurizio Ferraris wrote, "nothing social exists outside the text". Our social worlds require registration (written traces, memory), but those registrations can construct our social world because they have a relationship with reality. The words – or, more precisely, the tokens (word fragments) – that an LLM manipulates have connections only with other words or tokens. It's like a student who learns the textbook by heart before an exam: they can rephrase what they've read, but when the professor asks about connections between theories not explicitly covered in the book – as I experienced during my studies – the lack of deeper understanding becomes evident.
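To make that concrete, here's a toy sketch in Python. The vocabulary and IDs are invented for illustration – no real tokenizer works with a five-word dictionary – but it shows what the model actually receives: integers and their neighbours, not cats and roofs.

```python
# Toy illustration, not a real tokenizer: the vocabulary and IDs below are
# invented. A language model never receives "cat" or "roof" as things in the
# world, only integers and the statistics of how they follow one another.
vocab = {"the": 0, "cat": 1, "climbs": 2, "on": 3, "roof": 4}

def encode(text: str) -> list[int]:
    """Map known words to IDs; anything outside the toy vocabulary is dropped."""
    return [vocab[word] for word in text.lower().split() if word in vocab]

print(encode("The cat climbs on the roof"))  # [0, 1, 2, 3, 0, 4]
# From the model's point of view there is no animal and no building here,
# just a sequence of integers it has learned to continue.
```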
Aaron Rabinowitz, in a piece for The Skeptics published when I had already written most of this newsletter, puts it in terms of external and internal understanding. His definition is a little circular (external understanding is the mimicry of internal understanding, which is the "true" understanding), but I agree with his conclusion:
The most advanced AI currently in existence possess increasing amounts of external understanding while likely not developing anything like internal understanding. This is why it is correct to say they display adult human levels of external understanding while lacking the internal understanding possessed by preschoolers.
An LLM predicts the next token based on statistical patterns from billions of texts. Type "the cat climbs on the" and it'll say "roof" or "couch" – not because it knows what a cat or a roof is, but because it has seen that sequence thousands of times. We can add information that changes the output: maybe the roof becomes gold or chocolate if we specify that the cat is a fairy-tale character. That can be powerful and useful, but it remains a statistical approach, with no knowledge of cats or roofs.
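To see the principle stripped to its bones, here's a deliberately crude sketch in Python – a word-level counter over a made-up three-line corpus, not a neural network, so every name in it is invented for illustration – but the underlying move is the one real LLMs scale up: continue the text with whatever was statistically most frequent.

```python
from collections import Counter, defaultdict

# A crude stand-in for next-token prediction: count what follows each two-word
# context in a tiny invented "corpus", then return the most frequent
# continuation. Real LLMs use neural networks over far longer token contexts,
# but the principle is the same: no cats, no roofs, just counts.
corpus = (
    "the cat climbs on the roof . "
    "the cat climbs on the roof . "
    "the cat climbs on the couch . "
).split()

following = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    following[(a, b)][c] += 1  # after the pair (a, b), the word c appeared once more

def predict_next(context: str) -> str:
    """Return the word most often seen after the last two words of `context`."""
    last_two = tuple(context.lower().split()[-2:])
    return following[last_two].most_common(1)[0][0]

print(predict_next("the cat climbs on the"))  # -> 'roof'
```

Nothing in those few lines knows what a cat or a roof is; it only knows which strings tend to follow which.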
We see the limits when text alone isn't enough to handle reality. There are plenty of examples of LLMs failing in astonishing ways – astonishing, that is, if we assume LLMs have what Rabinowitz calls "internal understanding". Sure, most of the time they talk as well as – or better than – people. They seem to have inner lives. But they don't. They can say they're suffering, they can show signs of suffering, but they're not suffering: they're choosing the most probable words given the prompt and the patterns they've learned. It makes no sense to treat them as moral subjects. Maybe other machines someday, but not LLMs.