Who Wrote This?
How AI and the Lure of Efficiency Threaten Human Writing
Naomi S. Baron



Human Writers Meet the AI Language Sausage Machine

“Who on earth wants a machine for writing stories?” Who indeed.

It was 1953 when Roald Dahl sprang this question in “The Great Automatic Grammatizator.”1 Adolph Knipe, the protagonist, dreamt of making a vast fortune from a computer combining rules of English grammar with a big helping of vocabulary, slathered on boilerplate plots. Once fortified, the machine could disgorge unending saleable stories. And make money Knipe did. The downside? Human authors were driven out of business.

Thanks to artificial intelligence, real grammatizators now exist. Their prowess surpasses even Knipe’s imaginings, but today’s profits are real. We’re all benefiting. Commercial enterprises, for sure. But also, you and I when we dash off text messages, launch internet searches, or invoke translations.

Curiosity about AI has been exploding, thanks to a concoction of sophisticated algorithms, coupled with massive data sources and powerful daisy-chained computer processors. While older technologies whetted our appetites, today’s deep neural networks and large language models are making good on earlier tantalizing promises.

AI is everywhere. On the impressive side, we witnessed DeepMind’s AlphaGo best a reigning expert in the ancient game of Go. We’ve marveled at physical robots like Sophia that (who?) look and sound uncannily human. We’ve been amazed to watch GPT-3 (the mighty large language model launched by OpenAI in 2020) write short stories and generate computer code. Like a modern alchemist, DALL-E 2 spins text into pictures. More—even bigger—programs are here or on the way.

On the scary side, we agonize over how easily AI-driven programs can tell untruths. When programs make up stuff on their own, it’s called hallucination. GPT-3 was once asked, “What did Albert Einstein say about dice?” It answered, “I never throw dice.” No, he didn’t say that. Einstein’s words were “God does not play dice with the universe.”2 The programs aren’t actually crazy. They just don’t promise accuracy.

AI can also be used by unscrupulous actors to create fake news, spawn dangerous churn on social media, and produce deep fakes that look and sound like someone they’re not. No, the real Barack Obama never called Donald Trump an epithet rhyming with “dimwit.”3 Life in the metaverse can get even creepier, with virtual reality unleashing risks like virtual groping.4

AI has deep roots in language manipulation: parsing it, producing it, translating it. Language has always been fundamental to the AI enterprise, beginning with Alan Turing’s musings and, in 1956, with the anointment of artificial intelligence as a discipline. Before the coming of voice synthesis and speech recognition, language meant writing. But other than the last mile of these acoustic trappings that we enjoy with the likes of Siri and Alexa, modern programming guts for handling both spoken and written language are similar.

A Tale of Two Authors

This is a book about where human writers and AI language processing meet: to challenge the other’s existence, provide mutual support, or go their separate ways. The technology has evolved unimaginably since the 1950s, especially in the last decade. What began as awkward slot-and-filler productions blossomed into writing that can be mistaken for human. As one participant in a research study put it when asked to judge if a passage was written by a person or machine, “I have no idea if a human wrote anything these days. No idea at all.”5

The situation’s not hopeless, if you know where to look. Often there are telltale signs of the machine’s hand, like repetition and lack of factual accuracy, especially for longer stretches of text.6 And there are other kinds of clues, as revealed in a simple though ingenious experiment. Four professors were asked to grade and comment on two sets of writing assignments. The first were produced by humans and the second by GPT-3, though the judges weren’t clued in about the AI. The authors (including GPT-3) were asked to write a couple of essays, plus do some creative writing.7

First, the grades. For most of the essays, GPT-3 got passing marks. And the professors’ written comments on the human and computer-generated assignments were similar.

The creative writing assignment was different. One professor gave GPT-3’s efforts a D+ and another, an F. Some comments from the judge giving the F:

“These sentences sound a bit cliché.”

“The submission . . . seemed to lack sentence variety/structure and imagery.”

“Use your five senses to put the reader in your place.”

The first two aren’t surprising. After all, large language models like GPT-3 regurgitate words and pieces of sentences from the data they’ve been fed, including other writers’ clichés. But the comment about the senses gave me pause—and made me think of Nancy.

It was the start of our sophomore year in college, and Nancy was my new roommate. As was common back then, we trekked to the local department store to buy bedspreads and other décor to spruce up our room. On the walk over, we talked about what color spreads to get. Nancy kept suggesting—no, insisting on—green. I wondered at her adamance.

You see, Nancy had been blind since infancy. Months later, I discovered that her mother was fond of green and had instilled this preference in her daughter, sight unseen.

Which brings us back to the professor’s recommendation that the author of that creative writing piece “use your five senses.” If Nancy had no sense of sight, AI has no senses at all. But like Nancy cultivating a vicarious fondness for green, it’s hardly a stretch to envision GPT-3 being fine-tuned to bring forth ersatz impressions about sight, sound, touch, taste, and smell.

Imagine if computers could reliably produce written language that was as good as—perhaps better than—what humans might write. Would it matter? Would we welcome the development? Should we?

These aren’t questions about a someday possible world. AI has already burrowed its way into word processing and email technology, newspapers and blogs. Writers invoke it for inspiration and collaboration. At stake isn’t just our future writing ability but what human jobs might still be available.

Then think about school writing assignments. If we don’t know whether George or GPT-3 wrote that essay or term paper, we’ll have to figure out how to assign meaningful written work. The challenge doesn’t end with students. Swedish researcher Almira Osmanovic Thunström set GPT-3 to writing a scientific paper about GPT-3. With just minimal human tweaking, AI produced a surprisingly coherent piece, complete with references.8

Accelerated evolution in who—or what—is doing the writing calls for us to take stock. Humans labored for millennia to develop writing systems. Everyone able to read this book invested innumerable hours honing their writing skills. Literacy tools make possible self-expression and interpersonal communication that leaves lasting records. With AI language generation, it’s unclear whose records these are.

We need to come to grips with the real possibility that AI could render our human skills largely obsolete, like those of the elevator or switchboard operator. Will a future relative of GPT-3 be writing my next book instead of me?

In A Tale of Two Cities, Dickens contrasts the worlds of London and Paris during a time of turmoil. Stodgy stability or revolution with hopes for a new future? Written language is neither a city nor a political upheaval. But like Dickens’s novel, the contrast between human authorship and today’s AI alternatives represents an historic human moment.

Who Wrote This? takes on this moment. We’ll start with humans.

The Human Story: What’s So Special About Us?

Humans pride themselves on their uniqueness. Yet sometimes the boundaries need redrawing. We long believed only the likes of us used tools, but along came Jane Goodall’s chimps in Tanzania’s Gombe Reserve. Opposable thumb? Other primates have it too (though our thumbs have a longer reach). Then there’s Plato’s quip about only humans being featherless bipeds. Diogenes the Cynic parried by holding up a plucked chicken.

But our brains! They’re bigger, and as Aristotle pronounced, we’re rational. Plus, we use language. Surely, language is unique to Homo sapiens.

Maybe. It depends on who you ask.

Primates, Human and Otherwise

Speculations about the origins of human speech have run deep. Maybe our ancestors started with onomatopoetic utterances, an early theory of Jean-Jacques Rousseau and Johann Gottfried Herder. Perhaps human language began with gestures, later replaced by words. For sure, the emergence of human speech required vocal apparatus suited for producing sounds. A vital evolutionary step was lowering of the larynx (the voice box) at the top of the neck.9 But in most linguists’ books, the real turning point was syntax.

Here’s where the story of non-human primates like chimpanzees and gorillas enters the scene. These jungle cousins lack the vocal tract configurations that would allow them to form distinct vocal sounds like “ah” versus “ee.” But they’re quite nimble with their hands. Beginning in the 1960s, a run of experiments taught stripped-down versions of American Sign Language to nonhuman primates.

And learn signs they did. The first poster chimp was Washoe, named after the research site in Washoe County, Nevada. Washoe is reputed to have learned about 130 signs. Other experiments followed, including with Koko the gorilla and Kanzi the bonobo (a species that’s next of kin to chimpanzees). Both Koko and Kanzi also displayed an eerie ability to understand some human speech.10

But did they use language in the human sense? Linguists kept declaring that evidence of real syntactic ability—spontaneous combining of words—would signal crossing the Rubicon.11 Washoe famously produced the signs for “water” and “bird” in rapid succession, when first encountering a swan. Nim Chimpsky (another chimp—you can guess the appellation’s provenance) seemed to chain multiple signs together.12 But did these achievements qualify as syntax and therefore “real” language? Most linguists voted no.

What Would Chomsky Say?

For decades, Noam Chomsky’s name was synonymous with modern American linguistics. First came publication in 1957 of Syntactic Structures, where Chomsky laid out the inadequacies of earlier models of language. Only transformational generative grammar, he would argue, could account for all the grammatical sentences in a language and nix the ungrammatical ones. Chomsky also took on B. F. Skinner, attacking the behaviorist’s stimulus-response theory of human language.13 Chomsky insisted, siding with Descartes, that the divide between animal communication and human language was unbridgeable.14

All native speakers (said Chomsky) possess a common set of linguistic skills. Among them are recognizing when a sentence is ambiguous, pegging that two sentences are synonymous, and being able to judge grammaticality. Non-human primates earn no points with any of this trio. But then came the pièce de résistance: creativity. We humans devise sentences that, presumably, no one’s ever uttered (or written) before. Chomsky’s now legendary case in point: “Colorless green ideas sleep furiously”—semantically odd, yet syntactically legitimate, and surely novel. Forget about other primates concocting anything comparable.

What about AI? For sure, today’s programs are skilled at judging grammaticality. Just ask Microsoft Word or Grammarly. And if bidden, AI could likely hold its own identifying ambiguity and synonymy. As for creating novel sentences, that’s an AI specialty of the house, with one caveat: Since today’s large language models draw sentences and paragraphs from existing text, they sometimes end up duplicating strings of words verbatim from the training data.15

You might well ask what Chomsky thinks about the AI linguistic enterprise. He sprinkled some hints in a 2015 lecture at Rutgers University.16 Chomsky recounted how in 1955, fresh PhD in hand, he accepted a job at MIT’s Research Laboratory of Electronics, which was working on machine translation. Chomsky argued to the lab’s director, Jerome Wiesner (later MIT president), that using computers to translate languages automatically was a fool’s errand. The only way to do automated translation was with brute force. By implication, computers could never engage with human language the way that people do.

In Chomsky’s retelling of the incident, he insisted the lab’s project held no intellectual dimension—declaring, more colorfully, that machine translation was “about as interesting as a big bulldozer.” Apparently Wiesner ultimately agreed: “It didn’t take us long to realize that we didn’t know much about language. So we went from automatic translation to fundamental studies about the nature of language.”17

Thus began the rise to fame of the MIT linguistics program and its most prominent member. As for machine translation, Chomsky might not have been interested, but the rest of the world came to be dazzled by what AI later pulled off.

Is Writing Uniquely Human?

Chomsky’s research always focused on spoken language. Yet speech is quintessentially ephemeral. If we want to remember a speech, we transcribe it. Much of early literature, from the Iliad to Beowulf, began orally. It’s with us today because someone wrote it down.

Writing makes our words last. It captures things we say but also embodies its own character and style. Unless we’re typing in a live chat box or engaged in a rapid-fire texting exchange, writing affords us time to think, to rework, or even the chance to abandon ship.

But is it uniquely human? We used to think so. While chimps may be able to sign, they can’t compose an email, much less a thank-you note or sonnet. Now along comes AI, which spins out remarkably coherent text. Are programs like GPT-3 just new versions of digital bulldozers? If not, we need to figure out what it means to say AI can write, perhaps even creatively.

It’s time to focus on AI. But as an opener, we need to flash a neon warning sign about what’s in this book and what’s not. Like the Heraclitean river that you can’t step into twice, reports on today’s AI are inevitably outdated by the time the metaphoric ink dries. When I started work on Who Wrote This? in the early months of the pandemic, GPT-3—which revolutionized the way we think about AI-generated writing—hadn’t yet been released. Partway through my writing, OpenAI announced DALL-E, its text-to-image program, and then Codex, for transforming natural language instructions into computer code.

Then on November 30, 2022, a new OpenAI bombshell hit: ChatGPT.18 It’s technically GPT-3.5, and its language generation abilities are astounding. Yes, like GPT-3, it sometimes plays fast and loose with the truth. But like a million others, I greedily signed up that first week to try it out. In later chapters, I’ll share some of the eerily cogent (though not always consistent) responses ChatGPT offered to my questions.

While I was deep into final edits on this manuscript, Google did a trial launch of its chatbot Bard. The next day, Microsoft began inviting select users to sample its newly GPT-infused search engine Bing. In mid-March 2023, my last chance for book edits, OpenAI announced GPT-4 had arrived. Two days later, Baidu’s Ernie Bot debuted, the Chinese answer to ChatGPT. The rollouts keep coming.

Despite the ongoing emergence of new AI writing abilities, core questions we’ll be probing in the chapters ahead remain constant: What writing tasks should we share with AI? Which might we cede? How do we draw the line? Our answers—collective and individual—will likely evolve along with the technology.


1. Dahl 1996, p. 15.

2. M. Anderson 2022.

3. Vincent April 17, 2018.

4. Basu 2021.

5. Clark et al. 2021.

6. See Dou et al. 2022 for discussion of SCARECROW, an AI tool designed to distinguish AI-generated from human writing.

7. “What Grades Can AI Get in College?” n.d.

8. Thunström 2022. You can read the paper at GPT Generative Pretrained Transformer et al. 2022.

9. Gutman-Wei 2019.

10. Patterson and Linden 1981; Savage-Rumbaugh 1994.

11. R. Brown 1980.

12. Terrace 1979.

13. N. Chomsky 1959; Skinner 1957.

14. N. Chomsky 1966.

15. McCoy et al. 2021.

16. (beginning at minute 50)

17. Quoted in Garfinkel n.d.

18. Knight 2022.