The driver and the dolly — On AI, plagiarism, and the imaginary pact between writer and reader
If a writer uses AI to smooth a clunky sentence, have they betrayed you? What if they used it to draft a headline? Or a whole paragraph? The New York Times freelancer case is the occasion, but the real question is older — and answering it starts with no longer seeing LLMs as alien intelligences.
Imagine a sober person gets into their car and badly injures a pedestrian on a crosswalk. Wouldn't it be weird if a newspaper ran the headline "Man arrested for driving a car"? I think we can all agree that the problem wasn't getting behind the wheel: it was hitting the person. Sure, if the driver hadn't driven, they wouldn't have hit anyone, but that's not why the cops showed up.
The Guardian ran basically that strange headline, and nobody blinked, because it involved an embarrassing case of plagiarism and artificial intelligence: "The New York Times drops freelance journalist who used AI to write book review."
The freelancer — named and shamed as if this were some major public safety issue — did use an LLM (large language model, the kind of AI behind ChatGPT, Gemini, Claude, etc.) to write the review. But the actual problem is different: parts of his text are identical to a review of the same book previously published by the Guardian. Plagiarism, plain and simple.
If he hadn't used AI, he probably wouldn't have included those passages — or he would've noticed and cited them properly. The opposite is also possible: sometimes phrases lodge in your brain and you genuinely can't remember whether you read them somewhere or came up with them yourself. There's no way to know. So I think we can — and should — acknowledge that AI use is relevant here. But it's like the driving in the car accident example: the problem is the plagiarism, not the use of AI. And in fact it's not the reason the New York Times ended the collaboration. The paper doesn't ban AI: its ethics code only requires that substantial use be disclosed for transparency. And indeed the paper issued a correction, adding a link to the original Guardian review and faulting the freelancer not for using AI, but for his "reliance" on it.

The New York Times ethics code is the closest thing I can imagine to a "pact" between publisher/writer and reader. But it'd be naive to think readers' expectations stop at that code. So it's legitimate that — plagiarism aside — a review written with AI assistance might feel like a betrayal. Because for many people, the pact includes the expectation of reading the product of honest, maybe even painstaking, intellectual work.
Before going further, a caveat: I have no idea how that review was actually written. The verbatim quotes suggest sloppy, careless use of AI, but that's not certain. We don't even know how much AI was involved — assuming there's a metric for "how much AI is in this." Maybe the freelancer didn't even read the book and just fed a bunch of reviews into a chatbot; maybe he used AI to analyze those reviews and sharpen his own take, then used the same chat to rework some passages of his review; maybe he wrote a full draft himself and asked the AI to smooth out a couple of paragraphs; maybe he did everything on his own and only used AI to find synonyms or adjust the tone of a sentence.
So I don't want to focus on this specific case. I want to make a broader point — though still limited to argumentative writing (reviews, news articles, school papers…). And the first part of that broader point is this: LLMs often don't save you time.
Let me rephrase, since AI doesn't just fall from the sky but is a tool we choose to use, at least for now, at least for writing articles: saving time (or writing more) isn't the only way to use it.
Sure, I can ask an LLM to write a thousand-word article on the history of space exploration. I'll get a decently written text in a fraction of the time it would take a human with the right knowledge and experience — let alone someone who doesn't write articles for a living or doesn't know much about space. And you could publish that article without even reading it. In fact, the AI could publish it automatically, choose the next topics, and essentially run an online magazine on its own.
You can do that. People do it. And I suspect the result, with the right customization, wouldn't even be that bad — roughly on par with an online outlet that pays writers a few bucks per article, or "in exposure" (and there are plenty of those). You can do it, people do it, but it doesn't make much sense: even with careful fine-tuning, it's hard to imagine a result significantly better than what anyone could get by just asking an LLM directly. If I want an AI-written article on the history of space exploration, I might as well ask the AI myself and tailor the result to my interests and knowledge.
There’s another reason not to do it: we've been dealing with information overload for a long time, well before generative AI came along. The problem isn't having information; it's filtering it. We need more quality, however you want to measure it, not more quantity. And I believe any use of AI that works solely on quantity is immoral: if you're producing content just because it's become nearly free to do so, you're not adding anything. You're contributing to the noise.

In the previous section, by quality I meant the quality of the text: how well it's written, how relevant the information is, and so on. But there's also the quality of the writing process itself. I'm not talking about pay, working hours, or benefits here — those matter, and I'll touch on them later. What I mean by "quality of the writing process" is how enjoyable the various stages of producing a text are for the writer.
I say "producing a text" because writing — in the sense of "putting one word after another where there were none before" — is just one phase. First you need an idea, then you develop it, do research, figure out how to turn it into an article, draft an outline. Then, once the first draft is done, you enter the virtually endless cycle of rereading and rewriting, then headlines, subheadings, pull quotes, images.
Say that using AI I end up with an article comparable to what I'd write without it, in roughly the same amount of time. But in a more relaxed way, because I've delegated the less enjoyable parts: drafting the first version, for instance, or identifying unclear passages, or the "post-production" work on headlines and subheadings. And "delegating" here doesn't (necessarily) mean trusting AI blindly — what the New York Times called "reliance" in its correction. It means (it can mean) having a dialogue: I can ask it to suggest five headlines highlighting different aspects of the article and then pick the best one and tweak it; or ask how to sharpen a clunky sentence.
Then there's the specific, tactical advice: searching for synonyms when your idea is too vague for a thesaurus — which I still prefer when I need a precise definition of a term — rephrasing sentences, suggesting examples, and so on.
Personally, I haven't noticed any time savings or increased productivity (in terms of sheer output). But less stress? Definitely.

So does any of this violate the pact between writer (or publisher) and reader? Keeping in mind that this pact is a fiction. A worse fiction, even, than the "original contract" natural law theorists once imagined — a mythical agreement in which free individuals supposedly handed over some of their freedom to form the State. Nobody ever signed such a contract, of course, and that was sort of the point: you could stuff into it whatever you wanted the State to be. Same thing here: you can stuff into the writer-reader pact whatever you want, including that texts must be written only on moonless nights with purple ink after an hour of meditation.
Obviously, with a pact like that in your head, you'll either feel constantly betrayed or read almost nothing. So let's look for some "reasonable" terms.
The first one that comes to mind is transparency. It's fair to expect that anyone writing an argumentative text is transparent about how and why they worked, what sources they used, why they chose this topic over another. But this expectation applies to relevant information: conflicts of interest, main sources, any commissioning party. It makes no sense to demand knowing what music they listened to while writing, or which words they looked up in the dictionary. Sure, AI isn't comparable to a dictionary — but in some circumstances it's comparable to the colleague you asked "hey, how does this sentence sound?" You can thank that colleague in the acknowledgments where appropriate, but it'd be a generic thanks. The thing is, if I write "thanks to X, Y, and Z for their help," nobody questions that I wrote the text. But if I write "I used Claude and Gemini," that doubt creeps in, even if Claude and Gemini's role was less important than that of the colleagues.
The pact can — and maybe should — also cover the qualities of the text itself. I expect writing to be as clear and simple as possible: not dumbed down, but adapted to the context. Saying "atrial fibrillation" is perfectly fine in a medical journal, but on a general-interest site you risk not being understood if you don't explain what you're talking about. On the other hand, by using unnecessary, unexplained jargon you might sound like you know your stuff. Based on what I read out there, I'm afraid many people's version of the pact actually calls for obscure language precisely because of that aura of expertise.
I also expect to find relevant and true information — or rather, information the author believes to be true and has verified to the best of their ability. But I'm the first to enjoy loosely relevant digressions and to accept (or sometimes even hope for) a bit of fiction, if clearly flagged. I don't have strong expectations about originality, though: sure, a text needs distinctive qualities to be worth reading, but the obsessive pursuit of originality means avoiding all banalities. That might work in literature, but in nonfiction many true and useful things are also banal: cutting a truth just because it's banal can be a problem (a principle that applies to this very sentence, which is quite obvious but, I think, still useful).
These are my expectations — though I suspect, and regarding writing clarity I'm certain, they're not widely shared.
Should the pact also address how the text was produced, or is transparency about methods and conflicts of interest enough? I think transparency is enough: I don't see why I should dictate how someone works, as long as it has no direct consequences on the qualities of the text I outlined above. If I were to wish for anything in that pact, I'd look at working conditions: I expect writers to be paid fairly (or to freely choose to work for less, or for free). But oddly, in all the discussion about the AI plagiarism case, I haven't seen anyone ask how much the New York Times pays for a book review. I'd like to think that's because we're talking about a major media group and fair pay is assumed — but I wouldn't bet on it.
Here's the thing, though: those who want to read only the fruit of painstaking intellectual labor, and see AI as an unacceptable shortcut, are in fact asking for the opposite. Not in terms of fair pay, obviously — but in terms of work ethic. I can imagine certain uses of AI leading to worse or more homogenized writing: in that case, expecting a more artisanal approach makes sense. But if it simply makes the work less tedious for the writer, banning LLMs strikes me as the equivalent of demanding that the moving company carry your furniture on their backs instead of using a dolly.

That said, the New York Times freelancer did screw up. And when someone makes a mistake, we can do two things: blame them, or try to understand why they made the mistake — maybe learning something useful along the way.
What was the mistake? The New York Times spelled it out in its correction: failing to cite the original article — I think using other people's words is fine, if they're significant and properly attributed — and relying blindly on AI.
Why did it happen? One reason is certainly the freelancer's lack of preparedness in using generative AI. You need to know the tools you use: his responsibility is undeniable. But it's partial. These tools are developed and marketed in a specific way, and if people misuse them it's partly because they're led to. With LLMs, people are led to think they're dealing with something intelligent — after all, we call them artificial intelligence. Maybe not the best name, but there's no going back now.
The usual workaround is talking about "alien intelligences" that reason in radically different ways from humans. That can work: the idea of an uncommon intelligence invites me to study how this alien thinks and how we can integrate each other's reasoning abilities. But it's hard to understand how this alien intelligence works — because it isn't an alien intelligence, and I struggle to imagine a planet where an intelligence like an LLM would survive more than a few minutes.
Maybe we should think of LLMs as a lossy text compression system — similar to what happens with JPEG images or MP3 audio files. In computing, lossy compression means reducing file size by discarding the least relevant information, so the result is close enough to the original to still be usable. When you save a photo as JPEG, the image comes out slightly less sharp, and at high compression levels you get visible artifacts. MP3 does the same with audio, dropping sounds our ears barely register. The key point: the lost information doesn't come back. At best, you can "invent" it in more or less plausible ways.
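To make the analogy concrete, here is a toy sketch in Python. It's purely illustrative, not how JPEG or MP3 actually work (real codecs use transforms and perceptual models, not simple rounding), but it shows the one property that matters here: compression throws detail away, and decompression cannot bring it back.

```python
# Toy "lossy compression": round each number to one decimal place.
# The discarded digits are gone for good; decompression can only
# return the coarse values, or invent plausible detail to fill the gap.

original = [3.14159, 2.71828, 1.41421, 1.73205]

compressed = [round(x, 1) for x in original]   # "compression": keep less detail
reconstructed = compressed                     # "decompression": nothing to undo

print(original)       # [3.14159, 2.71828, 1.41421, 1.73205]
print(reconstructed)  # [3.1, 2.7, 1.4, 1.7]
```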
I got this idea from an article by Alex Reisner in The Atlantic: he explores its implications for copyright; for me it's more about how we use generative AI. An LLM has compressed all the texts it was trained on, plus whatever we add in our prompt, and returns them with varying degrees of fidelity. Sometimes it preserves the form, and you get a quote very close to the original — that's most likely what happened to the New York Times freelancer. Sometimes it changes the form while keeping the content intact, highlighting certain aspects over others, combining information: that's what happens when you ask an LLM to summarize a text or explain a concept differently. Other times it changes the content too, and you get the infamous hallucinations (or a form of creativity, depending on context) — like when an LLM invents a plausible quote nobody ever wrote, or attributes to a person events that never happened.
We shouldn't think of LLMs as alien intelligences that reason in strange ways. We should think of them as algorithms that have compressed texts and spit them back out, manipulating the originals in various ways. I believe starting from there can help us use generative AI better.