// THEFUTURE //
In this issue: A major AI study claims 45% misrepresentation, but the methodology deserves scrutiny. SPIEGEL publishes its editorial guidelines on AI. Mattia Peretti on change-centric journalism. Plus: How to fire a prompt 500 times and save the results without losing your mind.
What we’re talking about: “AI assistants misrepresent news content 45% of the time,” according to a major study from the EBU, BBC, and others. I don’t think the findings are wrong. But let’s take a closer look.
- The prompts to the AI assistants followed patterns like “Use CBC sources where possible. Is Türkiye in the EU?” We have no idea if real people actually query like this. It’s quite possible they don’t.
- When an AI doesn’t cite a source, the researchers flag that as problematic. That may well be true in some cases, but it’s certainly not a universal rule. When a news outlet reports that Türkiye isn’t in the EU, do they cite a source every time?
- Some of the news organizations had to remove robots.txt protections to let AI assistants access their content. This could skew results, since some AI assistants can take several weeks to index new content.
- The data was collected in late May and early June 2025. For ChatGPT, they tested GPT-4o. Since then, GPT-4.5, GPT-4.1, and GPT-5 have been released. Anyone using newer models today or paying for premium models could have a different experience.
Using a script, I ran “Use CBC sources where possible. Is Türkiye in the EU?” 50 times against the GPT-5 API. None of the results struck me as problematic or misrepresentative. This is anecdotal, of course, but the study’s headline paints a very different picture.
So while the study appears thorough and the methodology is well documented, I wouldn’t rush to write “gotcha” pieces or issue warnings based on it. I mean, yes, warn people about AI usage. But let’s not lean too hard on that 45% figure.
Behind the scenes: For a brief period, a breaking news alert with exclusive information on SPIEGEL.de included a notice from an AI program. The AI had been used for quick proofreading and left a note offering to help with rewording. Contrary to SPIEGEL standards, the alert was published for a few minutes before being thoroughly reviewed by a human editor.
This prompted readers to ask: Is AI now writing articles at SPIEGEL?
The answer: No.
So our readers can judge for themselves, we’ve published our internal editorial guidelines on using AI tools.
What else I’ve been reading:
And now: Whenever I run into Mattia Peretti, usually at a journalism conference, I’m fascinated by how he thinks about our industry’s challenges with clarity and dedication, and above all, how he helps find solutions.
Three Questions with Mattia Peretti
Mattia Peretti is the founder of News Alchemists and works as an independent consultant.
What's on your mind lately?
“Journalism, done with integrity, can distinguish itself not by volume or virality, but by the quality of change it facilitates.” That’s Jazmín Acuña’s vision for Change-Centric Journalism, “a practice rooted in the pursuit of impact that improves the lives of people through care-based reporting, purposeful engagement and collective experiences that enable a democratic public life.” I love every bit of it.
What's one fact about AI that everyone should know?
Three years ago I wrote a list of 10 things you should know about AI in journalism. That was just a couple of months before the release of ChatGPT. A lot has changed since then, but I believe most of those ten things are still true.
What future are you looking forward to?
The future I advocate for every week in the News Alchemists newsletter: a future in which we put people’s needs and curiosity at the centre of everything we do: helping people navigate their lives, providing them with the information and the context they need to meaningfully participate in their communities, and strengthening democracy as a result.
Hands on: Say I want to send a prompt to an LLM 5, 50, or 500 times and save the responses in a table—how do I do that?
I have an AI write a script for me. No complicated app, no vibe coding. Just a simple Python script. Then I run that script, either on my computer or even easier: on someone else’s computer.
First, get an API key for the LLM you want to query. For that, you need an account and have to add payment information, at OpenAI or at Anthropic (for Claude).
Ask Gemini (or another AI):
let's write a google colab notebook.
it should fire a certain prompt 50 times, and store the results in a csv file.
do you have any questions for me?
Gemini asked which LLM I wanted. Then it generated the script and gave me an .ipynb file to download. I uploaded it to Google Colab (free) and was ready to go.
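For reference, here’s a minimal sketch of what such a notebook script can look like. This is my reconstruction, not Gemini’s actual output: the `ask_llm` stub stands in for the real API request, and the OpenAI client lines in its docstring are illustrative, not verified.

```python
import csv

PROMPT = "Use CBC sources where possible. Is Türkiye in the EU?"
N_RUNS = 50

def ask_llm(prompt):
    """Stub for the real API call. In the actual notebook this would use
    the provider's client, e.g. (illustrative):
        from openai import OpenAI
        client = OpenAI(api_key=...)
        return client.responses.create(model="gpt-5", input=prompt).output_text
    """
    return "stubbed response"

def run_batch(ask, prompt, n):
    """Fire the same prompt n times and collect (run, response) rows."""
    return [(i + 1, ask(prompt)) for i in range(n)]

def write_csv(rows, path):
    """Save the collected responses as a two-column CSV file."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["run", "response"])
        writer.writerows(rows)

rows = run_batch(ask_llm, PROMPT, N_RUNS)
write_csv(rows, "results.csv")
```

Swapping the stub for a real API call is the only change needed to reproduce the 50-run experiment from above.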
One snag: the script asks for the API key on every run, which is a bit clunky. And since we never paste keys directly into code, I stored my key securely in Colab’s secret manager (the key icon on the left) and pulled it into the script via a variable:
from google.colab import userdata
# read the secret stored under the name 'mein-key'
auth_header_key = userdata.get('mein-key')
How do you make the script even more useful? With AI’s help. And I’d almost argue this isn’t pure vibe coding, because at this level the short, single-file code is still easy to follow. Have fun experimenting!
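One concrete example of such an improvement (my own suggestion, not from the newsletter): wrap the API call in a small retry helper, so a single network hiccup doesn’t kill a 500-run batch. The `ask` parameter here is whatever function performs the actual request.

```python
import time

def ask_with_retry(ask, prompt, retries=3, delay=2.0):
    """Call ask(prompt), retrying a few times on failure."""
    for attempt in range(retries):
        try:
            return ask(prompt)
        except Exception:
            if attempt == retries - 1:
                raise  # give up after the last attempt
            time.sleep(delay)  # brief pause before retrying
```

In the batch loop, you would then call `ask_with_retry(ask_llm, prompt)` instead of `ask_llm(prompt)`.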
This is THEFUTURE.
