In this issue: A Hamburg hotel, closed doors, and an AI executive describing a future most companies aren’t building for. Eva Gengler on why AI is a political choice, not a technical destiny. Plus: vibecoding data leaks, and what happens when you give an agent a credit card.
Hi, I'm Ole Reissmann, a journalist who builds things. I'm the first Director of AI at SPIEGEL. Before that: podcasts, news product development, platform strategy. I write about AI and journalism and send a newsletter you might enjoy.
What happens when you give an AI agent a credit card and two weeks alone? Hannah Fry built an agent with OpenClaw, handed it $100, and filmed the results. (YouTube)
Vibecoding puts health records and customer data out in the open: A security researcher found hundreds of websites leaking data through a commonly used service called Supabase (which blamed its new type of user). (Eva Wolfangel, Die Zeit)
Vibecoding a data visualization dashboard for a Philippine health survey turned out to be “AI does the boring parts while you have to make a lot of editorial calls.” Jaemark Tordecilla covers the wins (data cleaning, pivoting fast, chart generation) and the walls he hit. (Generative AI in the Newsroom)
When you use AI to edit your writing, readers think you’re smarter, richer, whiter, and more politically extreme. AI writing assistance systematically distorts how others perceive you, a study shows. Writers still prefer the AI version. (Paul Röttger, LinkedIn)
Casey Newton is rebuilding his newsletter around scoops and original reporting, cutting the link roundups and even analysis. The bigger question: What kinds of editorial businesses cannot be replaced by AI? (Laura Hazard Owen, Nieman Lab)
MCP or Model Context Protocol
The bridge between a chatbot like ChatGPT and another piece of software. Want your chatbot to access your WordPress blog? MCP. Let your chatbot write in a Notion database? MCP. Let a chatbot delete your whole codebase in a glitch? MCP.
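Under the hood, MCP is plain JSON-RPC 2.0: the chatbot (the client) sends a request naming a "tool" the connected server exposes, and the server does the work. A minimal sketch of what such a request looks like on the wire; the tool name `create_post` and its arguments are made up for illustration, not an actual WordPress MCP schema:

```python
import json

# An MCP client asking a server to run one of its tools.
# "tools/call" is the standard MCP method; the tool name and
# arguments below are hypothetical examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_post",  # hypothetical tool on a blog server
        "arguments": {"title": "Hello", "content": "First post."},
    },
}

# The message is serialized as JSON and sent to the MCP server,
# which replies with a JSON-RPC result (or error) for the same id.
wire_message = json.dumps(request)
print(wire_message)
```

The server advertises which tools it has via a companion `tools/list` method, which is how the chatbot knows what it's allowed to do with your blog, your Notion, or your codebase.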
Was the monthly AI subscription a scam, designed to hide what these services actually cost? GitHub Copilot switched to token-based billing, and OpenAI needs to 10x its revenue by 2030 to keep Oracle from collapsing. Long, angry, and very footnoted. (Ed Zitron)
AI that only knows things from before the Great Depression: Introducing talkie, a 13B language model trained on 260B tokens of historical pre-1931 English text.
What if your AI agent doesn’t actually work for you? A field report from a Harvard summit on the agentic future of news: when personal AI agents become the primary gatekeepers, whoever shapes the agent’s “model of your intentions” becomes the most powerful editorial force in history. It’s scarier than the attention economy. (Lars Adrian Giske)
Ethics don’t scale. Paul Ford traces how AI labs started as altruistic crusades to save humanity from killer robots and ended up lobbying for bills that limit their liability in mass deaths. (New York Times)
Is raw reporting material (all those notes, photos, and audio that never made it into the CMS) the actual product? Journalists verify and judge, AI handles the packaging. Burt Herman of Hacks/Hackers shows a proof-of-concept with NYC mayoral communications.
Corporate earnings calls and press releases show the most obvious AI writing tic of 2025 – that “not just X, it’s Y” construction. It went from ~50 mentions in 2023 to over 200 in 2025. Microsoft, Cisco, McKinsey, Accenture: all guilty. (Amanda Silberling, TechCrunch)
If you want the internet to be a wondrous place for quiet, odd and poetic things, you might just have to write some HTML:


