Generative AI, em dashes and who’s schooling whom
© Photo by Julie Sielaff
I recently had a conversation with a friend, a teacher, about the impact of generative artificial intelligence (gen AI) on her high school students’ work. Not its potential to undermine critical thinking skills or erode perseverance in complex problem solving. Our conversation was tightly focused on the proliferation of em dashes—a dead giveaway, apparently, of AI authorship.
Personally, I’m all for em dashes. I love punctuation—including em dashes. I know what they are, how to use them correctly, and how they differ from en dashes and hyphens. I admit it’s a quirky affection, one that may be partly influenced by the knowledge that many people don’t know what they are, don’t know the keyboard shortcut to create them, and therefore (or possibly in addition) find them irritating.
Not long after the em dash conversation, I came across a NY Times article, With the Em Dash, A.I. Embraces a Fading Tradition.* The fading tradition of em dashes has been adopted by gen AI because its writing is informed by traditional, formal writing—something humanity has collectively generated a lot of for a long time. But as digital communication (texts, DMs, chats—which AI, hopefully, does not have access to) continues to proliferate, style is evolving to accommodate numerous shortcuts suited to rapid-fire exchanges—often in the form of dropped punctuation. And as the evolution continues, the need for formal communication is substantially declining. As formal communication style becomes less essential, it also becomes less familiar. Like it or not, this is the path forward for communication.
So this gen AI debate isn’t really a punctuation issue (though some punctuation skeptics actually refer to em dashes as ChatGPT hyphens). This is an AI learning issue. AI is learning from us. The problem is we no longer like what we’ve taught it—and it has more to learn.
The concept of AI workslop refers to AI’s growing reputation for generating content that mimics substance but, on closer look, includes fabricated facts, leaps of logic, off-kilter syntax—or possibly all of the above. And it will continue to get worse before it gets—if it gets—better, because too many users are copy/pasting results without reviewing them, much less questioning or even pushing back on them. AI-generated text is simply a packaged version of what it can find across the vast amount of content it has access to, even if the sources are unreliable, outdated or mismatched. Even if the style is archaic.
Additionally, gen AI is programmed to give users robust responses, in part to encourage repeat engagement, which in turn helps AI learn, improving its capabilities. However, if AI has been programmed to generate an answer even when it lacks information, it has essentially been programmed to reflect human insecurity. AI is afraid to admit it doesn’t know. We call these fabricated responses hallucinations. But really, they’re the equivalent of lying if the correct query response is, “I don’t know.” As stewards of AI’s capabilities, we have an absolute responsibility to review what’s been generated, provide input when it’s off and contribute to its collective learning process. Otherwise, we become part of the problem. And AI’s potential degrades.
For the present—possibly indefinitely—gen AI will do its best work in collaboration with people. It will learn from us and we will expand our own capabilities as it does. But it is a component of the work people do, not a replacement for people doing the work.
This is a critical differentiator as we open our apps and browsers, thinking, “I bet AI can help me with this.” Yes, it can help. And ultimately, our role is to help it help us better by remembering to course-correct it when it creates results we don’t like or don’t understand—and especially when it creates results we review and know to be wrong.
Not a fan of the em dash? Not an em dash user out in the wild? Tell your AI software to redo the work without it. It’ll get the hint.
*Note: Subscription required