The Human Skill That Eludes AI
In a certain, strange way, generative AI peaked with OpenAI’s GPT-2 six years ago. Little known outside of tech circles, GPT-2 excelled at producing unexpected answers. It was creative. “You could be like, ‘Continue this story: The man decided to take a shower,’ and GPT-2 would be like, ‘And in the shower, he was eating his lemon and thinking about his wife,’” Katy Gero, a poet and computer scientist who has been experimenting with language models since 2017, told me. “The models won’t do that anymore.”

AI leaders boast about their models’ superhuman technical abilities. The technology can predict protein structures, create realistic videos, and build apps with a single prompt. But these executives and researchers also readily admit that they have not yet released a model that writes well. OpenAI CEO Sam Altman has predicted that large language models will soon be capable of “fixing the climate, establishing a space colony, and the discovery of all of physics,” but in an October interview with the economist Tyler Cowen, he guessed that even future models—an eventual GPT-6 or GPT-7—might be able to extrude only something equivalent to “a real poet’s okay poem.”

Today’s AI-generated prose is riddled with flaws. Chatbots produce meaningless metaphors, endless “it’s not this, but that” constructions, and a cloyingly sycophantic tone—and, of course, they overuse my beloved em dash. (Only starting with GPT-5.1, released in November, could ChatGPT reliably follow instructions to avoid the beleaguered punctuation mark.) I wanted to understand why this is—why large language models, which, after all, have memorized centuries of great literature, can demonstrate incredible emergent abilities yet totally fail to produce a single essay that I’d want to read.

[Read: Would limitlessness make us better writers?]

So I talked with people who would know: people who work at LLM companies, AI-data vendors, academic computer-science departments, and AI-writing start-ups. (Some spoke with me on the condition of anonymity because their employers barred them from speaking publicly about their work.) What I learned is that modern LLMs are built in a way that is antagonistic to great writing; they are engineered to be rule-following teacher’s pets that always have the right answer in hand. In many respects, they’ve come a long way from GPT-2, but they’ve also lost something that made them looser and more compelling.

LLMs begin their lives as indiscriminate readers. During the pretraining phase, they ingest something like the entire internet—Reddit posts, YouTube transcripts, SEO sludge—and compress it into patterns. Most writing is not very good. But the quantity, not the quality, of these data is what matters. Pretraining teaches AIs grammar rules and word associations, enabling what is known as “next-token prediction”: the process through which models guess which word, or piece of one, comes next, over and over and over again.

Rough edges are then sanded down in the post-training phase. This is when LLM companies define the ideal “character” for an AI model (such as being “helpful, honest, and harmless”), give the AIs example dialogues to learn from, and apply safety filters that attempt to block illegal requests. Through processes such as “reinforcement learning from human feedback,” which enlists people to grade AI outputs against a rubric, models are guided toward responses that exemplify desired traits.
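You can see the raw mechanics for yourself: GPT-2’s weights are open, and Hugging Face’s transformers library will sample from them in a few lines of Python. This is a minimal sketch, not how any production chatbot runs; the prompt and the temperature setting are illustrative choices:

```python
# Next-token prediction with GPT-2: the model assigns a probability to every
# token in its vocabulary, and sampling picks one. A higher temperature
# flattens the distribution, producing looser, stranger continuations.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The man decided to take a shower"
ids = tokenizer.encode(prompt, return_tensors="pt")

for _ in range(20):  # extend the story one token at a time
    with torch.no_grad():
        logits = model(ids).logits[0, -1]          # scores for the next token
    probs = torch.softmax(logits / 1.2, dim=-1)    # temperature 1.2: looser than greedy
    next_id = torch.multinomial(probs, num_samples=1)
    ids = torch.cat([ids, next_id.unsqueeze(0)], dim=1)

print(tokenizer.decode(ids[0]))
```

Raise the temperature and the continuations get stranger; post-training, by contrast, pulls the model back toward the safe center of the distribution.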
[Read: AI’s memorization crisis]

AI research is an empirical science—people can verify when something works and make tweaks when something doesn’t. But art resists rules and quantification. No objective measurement exists to prove whether Pablo Neruda’s work is better than Gabriela Mistral’s. Novice writers learn conventions; great writers invent them. An LLM trained to imitate taste can go only so far.

On some level, AI engineers and researchers must know this. Even as they try (and fail) to automate this work, many of the people I spoke with clearly revere good writing. “Writing novels is one of the most intense cognitive activities a human can do,” James Yu, a co-founder of Sudowrite, an AI assistant for fiction authors, told me. My sources’ faces lit up when I asked about their favorite books—three cited the science-fiction author Ted Chiang, though they also seemed disheartened that he has become a vocal critic of generative AI.

The difficulty of evaluating writing does not prevent AI labs from trying. They are motivated in part by a question that came up again and again in my interviews: If LLMs can’t write mind-bending essays or poignant sonnets, are they generally intelligent at all? And so labs try to assess AI writing through various criteria. Post-training teams vibe-check model outputs themselves based on personal taste, and companies contract with domain experts to receive feedback on model-produced writing. A job listing for a “creative writing specialist” at xAI lists “novel sales >50,000 units” and “starred reviews in Kirkus” among its requirements (rates start at $40 an hour).

I interviewed two people who have recently worked with large AI labs as writing evaluators. The first, a contractor at Scale AI, described firsthand the absurdities of the task: To transform something as slippery as “tone” into discrete criteria, rubrics included rules such as “The response should use a maximum of two exclamation marks.” The contractor told me that “there were numerous cases where even though it felt like B was a better response overall, you ended up rating ‘I prefer A’ because it had three exclamation points.” He said that another time, he was asked to grade fan fiction on its “factuality.”
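A toy version of this kind of mechanical scoring shows how a rigid rubric can outvote a reader’s overall preference. The exclamation-mark cap comes from the contractor’s account; the other rules and the sample responses here are hypothetical:

```python
# A toy rubric in the spirit of the contractor's account: mechanical rules
# that can outvote a human's sense of which response is better overall.
def rubric_score(response: str) -> int:
    score = 0
    if response.count("!") <= 2:          # "maximum of two exclamation marks"
        score += 1
    if len(response.split()) >= 50:       # hypothetical length floor
        score += 1
    if "delve" not in response.lower():   # hypothetical banned-word rule
        score += 1
    return score

a = "A serviceable, rule-abiding reply. " * 20          # bland but compliant
b = "A vivid, surprising reply! Really! Truly! " * 10   # livelier prose, three-plus "!"

# B may read better, but the rubric prefers A.
print("prefer A" if rubric_score(a) >= rubric_score(b) else "prefer B")
```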
[Read: The future of writing is a lot like hip-hop]

The second person I spoke with is an author who worked directly with a frontier lab’s technical-research team. The company frequently asked him to break down the specific elements that make a piece of literature great. “It’s completely non-tractable to that kind of thinking,” he told me. He pointed to the example of English sonnets: They are technically one of the most templated forms, but the fact that a sonnet contains 14 lines of iambic pentameter does not make it good. “Even when Shakespeare is being very structured, he’s constantly trying not to follow the rubric, or to subvert it, or reinvent it. I don’t know what it is that makes the difference between the poet who writes by rote and Shakespeare. I just know that the two can never be confused.”

So are the LLMs doomed to produce sophomoric prose forever? One theory is that this is simply a matter of prioritization. In some ways, creativity is directly at odds with AI companies’ other objectives. Generally, chatbots are trained to avoid misinformation, political bias, child-sexual-abuse material, copyright violations, and more. They are also scored on benchmarks such as SWE-bench (for coding tasks) and GPQA (for the natural sciences), which dramatically shape public perception of which company is winning the race. And if most users are using ChatGPT to draft corporate emails, bold text and brief bullet points may be exactly what they want. “The more you control for these” traits, Nathan Lambert, a post-training lead at the Allen Institute for AI, told me, “the more you suppress creativity.” When you tell a model to be a brilliant prose stylist, but also a Ph.D.-level mathematician, and also strictly PG-13, it will become rigid and tight-lipped, like a nervous job candidate terrified to misstep.

The same whimsicality that made GPT-2’s voice fresh also made it prone to other unpredictable behavior. “If you’re a big corporation like Google or OpenAI, you want a chatbot that’s going to make money. The chatbot that’s not going to make you money is the one that’s a weirdo,” Gero said.

[Read: The great language flattening]

I began to hypothesize that AIs might be able to generate award-winning literary prose if only we unhobbled them from the strictures of the post-training process and built specialized writing models instead. But as I reflected on the authors I love most, that didn’t seem right either. When a practiced human writer reaches for a particular turn of phrase, they aren’t aiming for some single standard of great writing. Rather, the best metaphors come from the author’s specific blend of experiences or expertise. A writer’s diction, their citations, and the stories they share all reflect a singular, irreplicable perspective. Authorial voice emerges from the specificity of a life.

The models—although technically proficient and grammatically pristine—cannot live, cannot feel, cannot smell, cannot taste, cannot sense. They cannot spill raw emotions onto the page, or place abstract concepts in rich physical settings. Close readers of AI writing will notice that the metaphors are uncanny: LLMs assign weekdays tastes and give mirrors seams. They generally seem terrified of biology: They do not like to speak, even metaphorically, about blood and sex and death. Their output lacks stakes, as a creative-writing instructor might say.

Although Yu is impressed by the technical leaps that LLMs have made since GPT-2, even he won’t read fully AI-generated stories. I asked him what’s still missing for AI to produce a great novel on its own. Yu paused for a second, then answered: “Most people’s good first stories are autobiographical. Maybe you need a model that lives a life, and can almost die.”

LLMs may never be capable of great writing themselves. But this doesn’t mean that they can’t help humans. Recently, I turned AI into an editor. Not for this article—The Atlantic’s editors are all human—but for a couple of essays that I wrote on my personal Substack. My philosophy is that I should provide the prose and perspective, and AI should supply feedback—encouraging me to write more like myself.

First, I fed the chatbot Claude an archive of my past writing, along with notes about what worked and didn’t about each piece. I used this to create a custom editing rubric based on my voice. Some criteria are generic, and others are personalized: One reads, “Does this play to your insider-anthropologist position” in Silicon Valley? Another asks whether the thesis shows up in the first 500 words. I dumped this guidance into a Claude project along with a reminder of its role: “You are not a co-writer. You cannot perceive. Your role is to help Jasmine write like the best version of herself.” I don’t want to be de-skilled, I reminded the machine. Your only job is to make me smarter.
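For the curious, a rough equivalent of this setup can be sketched with Anthropic’s Python SDK instead of a Claude project. The model ID, rubric text, and draft below are illustrative stand-ins, not my actual configuration:

```python
# A sketch of a personal AI editor: the system prompt pins the role,
# the rubric rides along with it, and the draft goes in as the user message.
from anthropic import Anthropic

EDITOR_ROLE = (
    "You are not a co-writer. You cannot perceive. "
    "Your role is to help the writer sound like the best version of herself."
)
RUBRIC = (
    "1. Does the thesis show up in the first 500 words?\n"
    "2. Does this play to the writer's insider-anthropologist position?\n"
)  # personalized criteria would go here

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def critique(draft: str) -> str:
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # illustrative model choice
        max_tokens=1024,
        system=f"{EDITOR_ROLE}\n\nCritique the draft against this rubric:\n{RUBRIC}",
        messages=[{"role": "user", "content": draft}],
    )
    return response.content[0].text

print(critique("My draft essay goes here..."))
```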
[Read: Why so many people are seduced by ChatGPT]

This AI editor has become a valuable part of my process. Like any reader, it’s not always right. I am careful not to let it trap me into one narrow stylistic lane. But Claude pushes me to iterate and improve faster than I could alone, pointing out where my execution failed to meet the standards of my own taste. “Stop trying to write the ending as a thesis and write it as a scene,” it told me while editing a recent post. There’s something slightly humiliating about having your efforts rejected by a bot, but I had to admit that its critique was fair. I redrafted the conclusion four times. And then, finally, Claude approved.