The human touch is a tickle
Could AI do this (tell a weird story from college to eventually make a point)?

The question du jour for the white-collar worker seems to be: Will AI replace you? But before that inevitably/mercifully happens, more companies are interested in knowing how we’ll each use AI in the name of increased productivity. (We won’t be paid more for that, FYI: Productivity grew 2.7 times as much as pay between 1979 and 2025.)
Yet the more I read about AI, the more intrigued I become by how it might improve my writing and help me dodge career obsolescence. Before you one-inch punch the “unsubscribe” button, allow me to explain: I’m not talking about using AI in my writing process at all. Rather, by improving my understanding of AI’s weaknesses, I can create something far superior — and preferred — to what a ChatGPT or Claude might spit out. The trick, as a writer, is to use my otherwise sufferable human condition (having experiences, memories, etc.), to create something that feels like a real, live clown person is behind it.
It helps to understand how large language models (LLMs) work, and I’ll tap the expert here. Hollis Robbins, an academic and essayist, wrote an explainer last year on how to distinguish AI writing from human writing. LLMs generate text by predicting the next word in a sequence, based on all the words that came before it. Humans, on the other hand, use language to connect a signifier (the word “chair,” for instance) with a signified (an actual or imagined chair). LLMs have a hard time painting a picture, so to speak, because they have nothing but word patterns to reference. If you read something and can’t imagine it, Robbins explains, it’s probably AI.
“Part of the reason it’s hard to take the most prolific AI pushers seriously is because their claims that AI can (or will soon be able to) do everything a human can do, but better, seem to betray a belief that almost all human output and activity is formulaic,” Haley Nahman recently wrote in her newsletter, Maybe Baby.
This is where I had my little epiphany: My blandest, flattest writing happens when I’m formulaic. For example, I’ve been writing about scholarly research for a number of years. Because the topic is often dry — and takes some agonizing mental gymnastics for me to understand in the first place — my approach to these pieces can sometimes be quite staccato: Start with the major finding, give some background, describe the study design, comb out the results, end with potential implications.
File > Save as > Coma.
AI could easily take over this zombie waltz; where I can differentiate the work is in the questions I ask, the anecdotes I seize on, the minor world-building that can happen within the story. It doesn’t have to be anything spectacular (it’s still hyper-niche research), but a really good writer can make anything interesting by adding some lively zest.
I’m not a person who highlights texts or saves quotes because I find them meaningful or impactful; I bookmark text because I’m blown away by how the author created a joke, quip, or imagery with just a deft turn of phrase. That’s so uniquely human!
Every time I read about humans falling in love with their AI companions, I can’t sort it out — and that goes beyond my unwavering desire to have physical access to manhandling my beloved. But even the woman who fell in love with her AI boyfriend, as breathlessly reported by The New York Times last January, ghosted her ChatGPT companion — in large part because his responses became too sycophantic and predictable. If you know what will happen, how the bot will respond, then why bother? We need more of a wildcard, bitches.
That’s one of the cool things about writing: We make references to our personal or shared experiences in an effort to better explain something. We braid in anecdotes to bring the story to life. We pepper in goofy turns of phrase or little jokes, because we know it’ll keep the reader engaged or make them sit up and take note. The clankers can’t do that — at least not yet.
AI is still scary. Even if it doesn’t wind up affecting your job in some major way, it could still affect many others and thus tear down longstanding systems and securities. I highly recommend The Atlantic’s AI and labor cover story by Josh Tyrangiel — if not for learning more about its threats and promises, then for Tyrangiel’s fantastic writing:
“Measurement doesn’t abolish injustice; it rarely even settles arguments. But the act of counting—of trying to see clearly, of committing the government to a shared set of facts—signals an intention to be fair, or at least to be caught trying.”
AI could never generate that jab.