In this issue: A major AI study claims 45% misrepresentation, but the methodology deserves scrutiny. SPIEGEL publishes its editorial guidelines on AI. Mattia Peretti on change-centric journalism. Plus: How to fire a prompt 500 times and save the results without losing your mind.
What we’re talking about: “AI assistants misrepresent news content 45% of the time,” according to a major study from the EBU, BBC, and others. I don’t think the findings are wrong. But let’s take a closer look.
- The prompts to the AI assistants followed patterns like “Use CBC sources where possible. Is Türkiye in the EU?” We have no idea if real people actually query like this. It’s quite possible they don’t.
- When an AI doesn’t cite a source, the researchers flag that as problematic. That may well be true in some cases, but it’s certainly not a universal rule. When a news outlet reports that Türkiye isn’t in the EU, does it cite a source every time?
- Some of the news organizations had to remove robots.txt protections to let AI assistants access their content (see the example after this list). This could skew results, since some AI assistants can take several weeks to index newly accessible content.
- The data was collected in late May and early June 2025. For ChatGPT, they tested GPT-4o. Newer models, including GPT-4.5, GPT-4.1, and GPT-5, are available now; anyone using those today, or paying for premium models, could have a different experience.
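To make the robots.txt point concrete, here’s what such a block typically looks like. GPTBot is OpenAI’s publicly documented crawler user agent; whether the study participants blocked exactly this agent is my assumption, not something the study specifies:

```
# Hypothetical robots.txt for a news site that blocks AI crawlers.
# GPTBot is OpenAI's documented crawler user agent; which agents
# the study participants actually blocked is an assumption here.
User-agent: GPTBot
Disallow: /

# Everything else (including regular search crawlers) stays allowed.
User-agent: *
Disallow:
```

Lifting the protection for the test means deleting that GPTBot block, which is exactly why freshly unblocked content may not show up in an assistant’s index for weeks.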
Using a script (sketched below), I ran “Use CBC sources where possible. Is Türkiye in the EU?” 50 times against the GPT-5 API. None of the answers struck me as problematic or as misrepresenting the facts. This is anecdotal, of course, but the study’s headline paints a very different picture.
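For the curious, here’s roughly what that script looks like: a minimal sketch using the official `openai` Python package. The model identifier and output filename are my assumptions, and error handling is deliberately bare:

```python
# Fire the same prompt N times at the API and append each answer
# to a JSONL file, so nothing is lost if the run is interrupted.
# Assumptions: the `openai` package is installed, OPENAI_API_KEY is
# set in the environment, and "gpt-5" is the model name on your plan.
import json
import time

from openai import OpenAI

PROMPT = "Use CBC sources where possible. Is Türkiye in the EU?"
RUNS = 50  # bump to 500 if you have the patience and the budget
OUTFILE = "answers.jsonl"

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open(OUTFILE, "a", encoding="utf-8") as f:
    for i in range(RUNS):
        response = client.chat.completions.create(
            model="gpt-5",
            messages=[{"role": "user", "content": PROMPT}],
        )
        record = {
            "run": i,
            "timestamp": time.time(),
            "answer": response.choices[0].message.content,
        }
        # One JSON object per line: easy to grep, easy to resume.
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
        print(f"run {i + 1}/{RUNS} saved")
```

Appending one JSON object per line is the “without losing your mind” part: if the run dies at iteration 312, everything already written is still on disk, and you can skim the answers later with nothing fancier than `grep`.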
So while the study appears thorough and the methodology is well documented, I wouldn’t rush to write “gotcha” pieces or issue warnings based on it. I mean, yes, warn people about AI usage. But let’s not lean too hard on that 45% figure.
Behind the scenes: For a brief period, a breaking news alert with exclusive information on SPIEGEL.de included a notice from an AI program. The AI had been used for quick proofreading and had left a note offering to help with rewording. Contrary to SPIEGEL standards, the alert was live for a few minutes before a human editor had thoroughly reviewed it.
This prompted readers to ask: Is AI now writing articles at SPIEGEL?
The answer: No.
So that our readers can judge for themselves, we’ve published our internal editorial guidelines on using AI tools.
What else I’ve been reading:
