Writing and editing are communication skills that AI can support but never replace
AI is great for abstraction and synthesis but it is a bin fire for the human voice. Here's how to manage it.
“The press release sounds off.”
“The LinkedIn post gives me the ick.”
“That email reads like it was written by a robot.”
The balance between AI and human writing is a live conversation in every agency and communications team. So-called AI slop, low-quality content created by generative AI, is the sharp edge of AI use.
I use AI every day. It is brilliant for abstraction and synthesis. Point it at a research paper or transcript and it will generate a summary in seconds. Work that took a day now takes half an hour, but that is where the usefulness ends.
The problem starts when the text reaches a reader. The public has become quick to spot AI-generated text. They can’t always say why, but the reaction is palpable.
There is a flatness in the tone of voice and a sense that the argument is going through the motions.
The acceptable use of AI in published writing is not settled. The current state of play is that if a reader decides a piece of content was written by a machine, you lose credibility and ultimately trust.
AI summarising a meeting is fine. AI drafting an internal report is okay. AI writing a press statement or content posted on social media crosses the line of acceptable use.
Means of disclosure are being negotiated in journalism, education and public relations. The one unifying factor is that they are all contested.
The AI writing giveaways
Here is the list I work from when I edit a draft. None of these are wrong in isolation, but in combination they signal that a piece of content was AI generated.
The em dash
The em dash is used and abused by AI in text. Swapping it for an en dash or a plain old hyphen does not save you. The frequency gives it away whatever character you use.
Contrastive negation
The structure defines an idea by saying what it is not, then what it actually is. “This is not a technical problem. It is a governance problem.” “Reputation is no longer about what you say. It is about what others find.”
The rule of three
Good writers group ideas in threes occasionally for rhythm. AI models do it in paragraph after paragraph until the prose plods.
Formulaic transitions
Moreover. Furthermore. In conclusion. These are unusual in natural writing but seemingly everywhere in AI output.
Telltale vocabulary
Delve. Underscore. Navigate. Tapestry. Realm. Pivotal. Landscape. AI models have a favoured palette of words and return to it repeatedly.
Summary paragraph
Models close by restating what they have just said. It reads as if the writer does not trust the reader to have followed the argument.
Flawless grammar
Sentences and paragraphs of uniform length, all balanced, none ugly or off-kilter in the way human writing usually is.
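The surface-level tells on this list can at least be counted, even if they cannot be judged. As a rough sketch, a hypothetical helper like the one below tallies em dashes, telltale vocabulary and formulaic transitions in a draft. The word lists and function name are illustrative, not a real tool; the numbers only flag spans worth a human read.

```python
import re

# Illustrative word lists drawn from the tells above (not exhaustive).
TELLTALE_WORDS = {"delve", "underscore", "navigate", "tapestry",
                  "realm", "pivotal", "landscape"}
TRANSITIONS = {"moreover", "furthermore", "in conclusion"}

def tell_counts(text: str) -> dict:
    """Return rough counts of the surface-level AI tells in one draft."""
    lower = text.lower()
    words = re.findall(r"[a-z']+", lower)
    return {
        "em_dashes": text.count("\u2014"),  # U+2014 em dash
        "telltale_words": sum(1 for w in words if w in TELLTALE_WORDS),
        "formulaic_transitions": sum(lower.count(t) for t in TRANSITIONS),
    }

draft = ("Moreover, we must delve into the landscape of reputation \u2014 "
         "a pivotal realm. Furthermore, trust underscores everything.")
print(tell_counts(draft))
# → {'em_dashes': 1, 'telltale_words': 4, 'formulaic_transitions': 2}
```

High counts prove nothing on their own, which is the point of the next section: this is a checklist aid for an editor, not a detector.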
Detection and editing tools will not save you
The instinct is to reach for an AI detector or editing tool to trap this stuff. It is a growing market. Applications such as Grammarly, GPTZero and Originality all promise to catch machine-written text.
In practice these tools flag human writing as AI. They are trying to solve a pattern-matching problem with pattern-matching tools. You can see the problem.
The tells I’ve listed cannot be machine-detected. Em dashes are correct punctuation. Contrastive negation is a normal rhetorical device. The rule of three is a rule for a reason.
Training data can help. I have used more than a million words of my own writing to build a tone-of-voice custom GPT, but it still reproduces many of the issues I’ve highlighted, no matter how hard I prompt.
I’ve just finished Enough Said, Alan Bennett’s latest diary. It is a wonderful life-affirming book that I highly recommend. You can ask AI to write in Bennett’s style. The output will look like his work but any lifelong Bennett fan will instantly recognise it as fake.
A machine cannot mimic his observations of the human condition. He is too candid, too irreverent and too human. The latest book carries a poignancy AI could never capture. It is almost certainly his last.
That is a human judgement. It needs a reader who knows what the prose is supposed to sound like. And a hardworking editor willing to do the unglamorous work of pulling the draft apart until it stops sounding like the machine.
Editing, not prompt engineering, is a growing role in practice. It is a rare but much-needed human talent.
Workflow for editing AI-generated text
Any exercise in AI adoption should start with reviewing governance and setting out the rules of the road. What are the go and no-go areas?
Here’s the workflow I have settled on while working with agency and in-house clients.
AI earns its place in the areas of workflow that sit before the writing. It can also be useful for reviewing material. But putting words on a page and editing them is a human job.
Use a model for research and structure, but do the writing yourself. If you do use AI for a draft, edit it thoroughly. In my experience that is slower than accepting the AI draft as-is, but it is the difference between a draft that reads as yours and one that reads as machine generated.
The social etiquette of acceptable AI use will settle in time. The safest assumption for now is that your reader can tell, and that the cost of being caught sounding like an AI model is probably higher than the time you saved by using one.
Further reading
This essay was originally posted on my Substack. The Wadds Inc. newsletter is read by more than 5,000 communications and public relations practitioners. We take a slower, critical perspective on the research, evidence and developments shaping the field.