Paul Ford (who wrote a legendary 38,000-word piece on “What Is Code”) on his new obsession with vibe coding: He’s building apps on the subway that he’d once have billed clients $350,000 for. “It stings to be made obsolete, but it’s fun to code on the train, too.” (New York Times)
I'm a journalist who builds stuff. At SPIEGEL, I launched podcasts, led teams, and shaped platform strategies. Now I'm into AI.
Is the image even real? Can we verify the facts?
Those questions framed the conversation at last Thursday's AI for Media Network gathering in Hamburg. 120+ representatives from media organizations and academia met to discuss AI in verification and research. It was the first time the event was hosted at SPIEGEL-Gruppe's Hamburg offices. Gerret von Nordheim, deputy head of SPIEGEL's fact-checking department, presented our in-house...
Links to a talk on trust and AI
Taken out of context, without the talk at the AI for Media event, these links are somewhat incomprehensible.
Google DeepMind: SynthID
olereissmann.com: Hands on with SynthID
Guardian: AI images of Maduro capture reap millions of views on social media
Guardian: White House posts digitally altered image of woman arrested after ICE protest
MDR: Saxon police union defends AI image (Sächsische Polizeigewerkschaft verteidigt KI-Bild)
Samira El Ouassil:...
Substack hosts, algorithmically promotes, and takes a revenue cut from newsletters openly pushing Nazi ideology, Holocaust denial, and white supremacy. (Geraldine McKelvie, The Guardian)
There’s no neat technical fix for that: The more useful an agent is, the more access it needs, and the more access it has, the riskier it gets. Yes, it’s about the Moltbot/OpenClaw agent craze, but also, it’s not. (Dan Hon, Things That Caught My Attention)
Moltbook, the viral “social network for bots,” looked like a glimpse of the AI agent future. It wasn’t. And some of it was humans shitposting. (Will Douglas Heaven, MIT Technology Review)
What is Claude? Anthropic doesn’t know either. Gideon Lewis-Kraus spent months inside the company, watching researchers try to understand their own AI. A good example of how to write about this technology without falling for the hype or waving it all away. (The New Yorker)
Why authenticity labels and AI watermarks are failing: Verge reporter Jess Weatherbed explains on the Decoder podcast why media authentication standards like C2PA are going nowhere. And why watermarking AI content isn’t working either.
Your AI agent is a snitch: We’re chatting on Signal, enjoying encryption, right? But your DIY productivity agent is piping the whole thing back to Anthropic. (John Scott-Railton, X)
A Finnish newsroom’s AI rules: Helsingin Sanomat’s guidelines (Esa Mäkinen, LinkedIn)
ChatGPT is getting ads, starting with US users: people on the free tier and on the $8 Go plan are being shown ads tailored to their answers, in prominent photo boxes. If only OpenAI would link to journalistic content with the same enthusiasm. (Emma Roth, The Verge)
Can AI help crack the 1986 Olof Palme assassination? Anton Berg and Martin Johnson are using AI to reanalyze the huge archive of police material. Their podcast Spår follows the work (in Swedish).
Microsoft is launching a Publisher Content Marketplace to license articles into AI products like Copilot, promising publishers control, transparency, and usage-based payment for content that grounds conversational answers.
