All Quiet on the Knowledge Front. Why Epistemia, the Illusion of Knowledge, is Nothing New

A new study warns of "epistemia"—the illusion of knowledge in AI. But is this really new? From Plato to Frankfurt's "Bullshit," an exploration of the ancient gap between fluency and truth, asking if a mindless machine can still serve as a tool for understanding the world.


I find myself using generative AI quite a bit these days. However, I am hesitant to call myself an enthusiast. To be a fan implies a kind of uncritical enthusiasm that I just cannot muster, mostly because I am acutely aware of the limits of these instruments.

An LLM does not possess an understanding of what it is saying. (There are other technologies that qualify as artificial intelligence, but right now we are all fixated on large language models like ChatGPT, Gemini, and Claude.)

I have written about this before, but it bears repeating: these models manipulate text in incredible ways, doing things only humans could do before (and, obviously, doing things humans cannot do, like reading a book in seconds). But they do not understand what they are writing, let alone possess consciousness. They are statistical engines, predicting the next likely token in a sequence, not minds contemplating meaning.
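To make that point concrete, here is a deliberately toy sketch of next-token prediction. The corpus, the frequency table, and the greedy choice of the most common successor are all my own illustrative simplifications; a real LLM learns probabilities over sub-word tokens with a neural network trained on vast amounts of text, but the underlying logic of "continue with whatever is statistically likely" is the same.

```python
from collections import Counter, defaultdict

# A toy "language model": count which word follows which in a tiny corpus,
# then always continue with the most frequent successor. Real LLMs do this
# at vastly greater scale, with learned probabilities over sub-word tokens
# instead of raw counts over words.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    # Pick the statistically most likely continuation; no meaning involved.
    return successors[word].most_common(1)[0][0]

text = ["the"]
for _ in range(5):
    text.append(predict_next(text[-1]))

print(" ".join(text))  # fluent-looking output produced by pure statistics
```

The output looks like language, yet nothing in that loop knows what a cat or a mat is.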

I have previously explored these limits from a moral perspective, asking what we owe to, or fear from, a machine without agency. But a recent study conducted by the team of Professor Walter Quattrociocchi has convinced me to return to the topic, this time on an epistemic level.

The paper in question is titled The simulation of judgment in LLMs. The researchers set up a fascinating comparison between human and artificial evaluations of news sources. They benchmarked six different LLMs against expert human ratings (from NewsGuard and Media Bias/Fact Check) and against a group of non-expert human participants.

The goal wasn't just to see if the AI got the right answer regarding a site's reliability, but to understand how it got there. They used a structured framework where both the models and the humans had to select criteria, retrieve content, and produce justifications.

The results were noteworthy. The models’ outputs often aligned with expert ratings—in fact, they were quite good at flagging unreliable sources. However, the way they reached those conclusions was fundamentally different. The study found that LLMs rely heavily on lexical associations and statistical priors rather than the contextual reasoning humans use. In other words, they look for the shape of a reliable text, not the truth of it.

I fully agree with the authors’ conclusion that we need more public awareness of how LLMs think (or, if that verb annoys you as much as it sometimes annoys me, how they generate their outputs). We need this awareness not to avoid using them, but to properly integrate them into our epistemic system.

Nevertheless, I have a problem with a single, perhaps marginal, aspect of this paper. It may be marginal to the data, but I suspect it is crucial to Quattrociocchi’s intentions.

I am speaking about the concept of epistemia.

In the paper, epistemia is defined as the tendency to confuse linguistic form with epistemic reliability. It describes a condition where the appearance of coherent and authoritative judgment arises from statistical patterning alone, producing the illusion of knowledge when surface plausibility substitutes for evidence-based reasoning.

Essentially, it is the trap of believing the machine because it sounds smart.

Why does this concept leave me cold? For two reasons: there is little new in it, and I fear there is little useful in it.

To understand why, we need to take a step back.

Back to Plato

When I read the definition of epistemia for the first time, my mind didn't go to computer science; it went to Harry Frankfurt. In his essay On Bullshit, Frankfurt draws a sharp distinction between the liar and the bullshitter. The liar, he argues, is actually deeply concerned with the truth—he needs to know it in order to hide it or lead you away from it.

The bullshitter, however, is different. He is indifferent to how things really are. He does not care whether what he says is true or false; he only cares about the impression he makes. His focus is entirely on the appearance of the discourse, not its connection to reality.

Does this sound familiar?

We can go even further back, to the very dawn of Western philosophy. In Plato’s Gorgias, we find Socrates dismantling the Sophists. The Sophists were the rhetoricians of their day, teachers who claimed to teach the art of persuasion. Socrates argues that rhetoric is not a true art (techne) but a knack or routine for producing gratification. He compares it to cookery: medicine knows what is good for the body, while cookery only knows what tastes good. The Sophist, like the bullshitter, creates belief without knowledge.

So, epistemia feels like the new incarnation of this ancient tension between persuasive falsehood and real knowledge.

Of course, Sophists and bullshitters are not exactly the same as LLMs. You might call an LLM a bullshitter on steroids. While the human bullshitter ignores the truth, the LLM lacks intentionality altogether. It manipulates tokens. It has no relationship with the external world.

It is a closed loop of language, the exact opposite of human beings, who speak to describe the world. Or at least, that is what we tell ourselves.

Evolutionary psychologists like Robin Dunbar and cognitive scientists like Hugo Mercier and Dan Sperber suggest otherwise. Mercier and Sperber argue that the main function of human reasoning is actually argumentative—it evolved to persuade others and to evaluate their arguments, not necessarily to find the solitary truth. More radically, Dunbar suggests that language evolved as a form of social grooming, a way to bond and maintain complex social groups.

In this view, the fact that we occasionally communicate objective reality is almost a side effect. I must admit: these are bold theories, far from being the consensus in the scientific community. But they serve as a necessary check on our arrogance. When we accuse the AI of being performative, we must remember that human language is often just as performative.

There is one last point about this separation between true knowledge and pseudo-knowledge that mimics the appearance of truth. It is a central distinction, yes, but historically it has often been accompanied by a classist, exclusionary vision.

Plato and Socrates accused the Sophists of being unworthy partly because they accepted payment for their teaching. True knowledge, they argued, must be pursued for the sake of wisdom alone. While they highlighted a real problem regarding conflicts of interest, there is also an undercurrent of elitism—the idea that those who must work to live are somehow unworthy of the highest truths.

I fear the concept of epistemia could be used in a similar way: to exclude or devalue those who need LLMs to deal with their own limits. As a non-native speaker, I cannot write in English at this level without the assistance of an LLM. This is just one example. Think of those with dyslexia, ADHD, or simply a lack of specific formal training. Is the knowledge they produce or access via AI fake because they used a machine to bridge the gap?

Navigating the Sea of Artificial Text

So, is epistemia a useful concept for navigating our current epistemological landscape? As I said, there is nothing radically new here. But perhaps, one could argue, it is useful as a tool to develop awareness of LLM limits.

I remain skeptical.

First, we must admit that surface plausibility matters. In human interaction, fluency and clarity are important clues. We generally assume that someone who can explain a concept clearly understands that concept. It is a heuristic that has served us well.

The problem of epistemia, of course, is that with LLMs, this heuristic breaks. Fluency is now cheap. It is no longer a signal of competence. So, we are told we must rely on evidence-based reasoning only.

But I would argue that we can simply update our epistemic shortcuts. The philosopher Alvin Goldman proposed criteria for deciding which experts to trust, including the quality of the arguments they present, the agreement of other experts, evidence of their biases, and their track record.

Perhaps, in the age of AI, we simply need to re-weight these criteria. We need to place less weight on the first criterion—plausibility and eloquence—because the machine has mastered that. We might even need to value simplicity over complexity, given the LLM's tendency toward verbosity.
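As a purely illustrative sketch of what such a re-weighting could look like (the criteria names, scores, and weights below are my own toy choices, not Goldman's, and certainly not a validated model):

```python
# Toy illustration of re-weighting trust criteria for the LLM era.
# Scores are hypothetical values between 0 and 1 for a given source;
# the weights encode how much each criterion counts.

source = {
    "eloquence": 0.9,       # how fluent and plausible the text sounds
    "track_record": 0.4,    # past accuracy on verifiable claims
    "peer_agreement": 0.5,  # agreement of other independent experts
    "low_bias": 0.6,        # absence of evident conflicts of interest
}

# Pre-LLM habit: fluency carries a lot of weight.
old_weights = {"eloquence": 0.5, "track_record": 0.2,
               "peer_agreement": 0.2, "low_bias": 0.1}

# Post-LLM update: fluency is cheap, so it counts for little.
new_weights = {"eloquence": 0.05, "track_record": 0.4,
               "peer_agreement": 0.35, "low_bias": 0.2}

def credibility(scores, weights):
    # Weighted average of the criteria.
    return sum(scores[c] * weights[c] for c in weights)

print(f"old heuristic: {credibility(source, old_weights):.2f}")
print(f"new heuristic: {credibility(source, new_weights):.2f}")
```

The same eloquent source scores noticeably lower once fluency stops doing most of the work: that is the whole re-weighting in miniature.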

What type of skills do we need in order to deal with a media landscape where LLMs will be very common? Do we adopt the concept of epistemia, implying we must become experts ourselves or blindly rely on others? Or do we update our strategies for assessing credibility in a context of reduced evidence? I favor the latter.

But I think there is a deeper, more subtle problem with the concept of epistemia. Accepting this view risks implying that text produced by an LLM has no value simply because it was generated by a thing with no knowledge. It suggests that the value of a text lies solely in the intent of the writer, not in the ideas it evokes in the reader.

Context is everything here. If I receive a text message from my wife, its value lies entirely in her intent: does she want to tell me she loves me, or does she need me to set the table?

But what if my goal is to improve my knowledge? If I am wrestling with a concept and an LLM provides a manipulation of text tokens that unlocks a new perspective for me, does the lack of a mind behind it matter? If the text helps me understand the world, where exactly is the problem?