Don’t.
At least not when AI serves only as an auxiliary tool, say in producing an article. When a journalist has used ChatGPT to structure their notes. When an AI proofreads and suggests improvements. When a journalist uses AI not as a shortcut but as a tool. When the journalist remains in charge.
Don’t get me wrong: if we’re talking about AI cranking out entire articles or making editorial calls, then obviously, slap labels on it and mark it clearly.
But here’s the thing: transparency is tricky. Katharina Schell, deputy editor-in-chief of Austria’s APA news agency, has researched this question at the Reuters Institute in Oxford. She says:
“If we only label ‘AI’, we obscure the view of the journalistic process. Highlighting only AI would marginalise journalism as an afterthought. But we stand by our journalism – whether AI was involved or not! We should convince users of that.”
Most people don’t have a clue how journalism actually works. Throw an “AI-generated” tag on an article, and they’ll assume ChatGPT spat it out in 4 seconds flat. Cheap, lazy, untrustworthy. Studies back this up: people are already skeptical.
Schell’s argument? Be pragmatic. Her report is absolutely worth reading.
This brings me to Business Insider. Their internal guidelines recently made waves for being pretty relaxed about AI use: journalists may use AI to draft content, with no public disclosure required. And then there’s this line:
“Make sure your final work is yours.”
To which I say: exactly. That’s the whole point.
Ultimately, it’s about trust. For data investigations, it makes sense to publish datasets where possible and to disclose the methodology. Were texts classified? If so, with which prompts and models? Here, journalism can take cues from scientific papers.
But this moment—this ChatGPT-everywhere, AI-in-every-corner moment—isn’t about slapping labels on things and calling it a day. It’s a chance to double down on trust. To show people how journalism actually works. Not with lofty mission statements, but through consistent, transparent storytelling.
What do journalists actually do? How do they make decisions? These are the questions we should be answering.
The real task isn’t to plaster “AI-assisted” on every article. It’s to stop readers from jumping to “Ugh, cheap AI content” the second they see a label. It’s to avoid the ridiculousness of explaining that ChatGPT helped find a better synonym for “however.”
If we feel so uncomfortable about how deeply AI intrudes into our work that we want to distance ourselves from it with a disclaimer, we’ve got bigger issues to deal with. Something’s going wrong.