In this issue: Disagreement at the International Journalism Festival in Perugia. Why newsrooms are betting on human voices and losing them anyway. Dave Jorgenson, Joanna Stern, and the talent drain nobody wants to talk about. Plus: Leibniz-Institute’s Antonia Eichenauer on the one question that has nothing to do with AI.
Hi, I'm Ole Reissmann, a journalist who builds things. I'm the first Director of AI at SPIEGEL. Before that: podcasts, news product development, platform strategy. I write about AI and journalism and send a newsletter you might enjoy.
The website is no longer the default destination for news: “Chatbots are now rendering components, not just text. Interactive tables. Charts that respond to the conversation. Comparison layouts. Forms that collect structured input.” (Florent Daudens, AI in the News)
For 20 years, Google blocked other sites from embedding Google.com. Today it announced it will embed every publisher’s site into its own AI pages, ignoring the exact browser protections it helped build. (Thomas Baekdal, LinkedIn)
A breakdown of the scraper economy: bulk harvesters, answer engines, search hybrids, and the infrastructure layer. The market hit $1 billion in 2025. Not one dollar flows back to publishers. (Matthew Scott Goldstein, LinkedIn)
Journalism under siege, but Perugia delivered anyway: the Reuters Institute rounds up the International Journalism Festival, from AI editorial responsibility to a Romanian poverty podcast, a WhatsApp newsroom serving 100,000 people across the US-Mexico border, and whether news creators and traditional journalists can stop being weird about each other.
The AI disruption in media is about demand: Machine audiences are becoming the primary consumers, and whoever owns the interface captures the value. The demand signal, not clicks or time spent, is the defining asset. Also read her take on liquid content economics. (Shuwei Fang, The Economist)
Magic Tokens
Adding “think step by step” to prompts can yield better outputs. Anthropic says that giving the chatbot a role like “data scientist” produces different results, because a data scientist might see different things in the data. But models don’t process language directly; they work on a numerical representation: tokens. 17509, 656, 5983 is “step by step” tokenized...
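To make the token idea concrete, here is a toy sketch. The mini-vocabulary below is invented for illustration and splits on whitespace; real tokenizers learn tens of thousands of subword entries via byte-pair encoding, which is where IDs like 17509 come from:

```python
# Toy illustration: a model never sees the words "step by step",
# only integer token IDs looked up in a vocabulary.
# TOY_VOCAB is invented for this sketch, not a real model vocabulary.
TOY_VOCAB = {"step": 0, "by": 1, "think": 2, "data": 3, "scientist": 4}

def encode(text: str) -> list[int]:
    """Map whitespace-split words to integer IDs (unknown words -> -1)."""
    return [TOY_VOCAB.get(word, -1) for word in text.split()]

def decode(ids: list[int]) -> str:
    """Map IDs back to words."""
    inverse = {i: w for w, i in TOY_VOCAB.items()}
    return " ".join(inverse.get(i, "<unk>") for i in ids)

print(encode("think step by step"))  # [2, 0, 1, 0]
print(decode([3, 4]))                # data scientist
```

The point of the sketch: swapping a word in the prompt swaps the numbers the model computes over, which is why wording and role choices change the output.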
A popular way to run local AI models, Ollama, seems to be one of the shadiest (Zetaphor, Sleeping Robots). I’ve switched to LM Studio recently and use it with Gemma 3, highly recommended.
Is the “stochastic parrot” framing both empirically wrong and harmful to AI ethics? (SE Gyges, Very Sane AI)
Use all the tools, but own the work
Did you notice the Guardian gleefully chronicles the firing of senior journalists for making errors while using AI? Made-up quotes, plagiarized sentences, they should have known better, they knew better. Now they have to suffer. Full name and picture. Meanwhile, a broader audience comes to terms with the fact that books are sometimes written with the help of...
Is it wrong to write a book with AI? Joshua Rothman compares AI-generated fiction to the Roland TR-808 drum machine: once despised, now everywhere. The analogy is fun, the question is real, and the “Shy Girl” scandal gives it actual stakes. (Joshua Rothman, The New Yorker)
Fortune’s Nick Lichtenberg uses AI to write articles, was profiled by the WSJ, and the journalism internet lost its collective mind. Now the Reuters Institute interviews him. Working with AI has “pushed me into more original reporting, because all algorithms and all AI products are necessarily backward-looking.”
“AI responses may include mistakes,” Google says at the bottom of its AI Overviews. How many? According to a startup, 1 out of 10 summaries is problematic. That’s “millions of erroneous answers every hour.” (The New York Times)
If you want the internet to be a wondrous place for quiet, odd and poetic things, you might just have to write some HTML:


