Time put “The People vs. AI” on its cover and profiled nine Americans fighting data centers, chatbot harms, and AI in hospitals. A companion essay argues AI policy has left the wonk phase and entered kitchen-table politics, but neither party in the U.S. knows what to say about it yet. (Andrew R. Chow / Rebecca Lissner)
AI & Journalism Links
“In 2026, it’s a scary time to work for a living.” That’s how the Guardian launches Reworked, a yearlong series on AI and the future of work. The same technology that’s making software engineers nervous is making them realize they have more in common with warehouse workers than with their CEOs. (Samantha Oltman)
Just send the prompt twice? A new paper argues that simply repeating the prompt improves answers from non-reasoning models. There’s a catch: The models tested (GPT-4o, Claude 3.7) are already retired.
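Mechanically, the trick is nothing more than duplicating the prompt string before it goes into the model’s input. A minimal sketch, assuming a simple duplication with a blank-line separator (the function name, separator, and repeat count are illustrative, not from the paper):

```python
def repeat_prompt(prompt: str, times: int = 2, sep: str = "\n\n") -> str:
    """Duplicate a user prompt before sending it to a non-reasoning model.

    The paper's claim is that seeing the question twice nudges such models
    toward better answers; this helper just builds the repeated input.
    """
    return sep.join([prompt] * times)

# The duplicated string is then passed as the user message as usual.
print(repeat_prompt("What is the capital of Finland?"))
```

Whether two repetitions with this separator is the optimal setup is exactly the kind of detail to check against the paper itself.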
Paul Ford (who wrote a legendary 38,000-word piece on “What Is Code”) on his new obsession with vibe coding: He’s building apps on the subway that he’d once have billed clients $350,000 for. “It stings to be made obsolete, but it’s fun to code on the train, too.” (New York Times)
Substack hosts, algorithmically promotes, and takes a revenue cut from newsletters openly pushing Nazi ideology, Holocaust denial, and white supremacy. (Geraldine McKelvie, The Guardian)
There’s no neat technical fix for that: The more useful an agent is, the more access it needs, and the more access it has, the riskier it gets. Yes, it’s about the Moltbot/OpenClaw agent craze, but also, it’s not. (Dan Hon, Things That Caught My Attention)
Moltbook, the viral “social network for bots,” looked like a glimpse of the AI agent future. It wasn’t. And some of it was humans shitposting. (Will Douglas Heaven, MIT Technology Review)
What is Claude? Anthropic doesn’t know either. Gideon Lewis-Kraus spent months inside the company, watching researchers try to understand their own AI. A good example of how to write about this technology without falling for the hype or waving it all away. (The New Yorker)
Why authenticity labels and AI watermarks are failing: Verge reporter Jess Weatherbed explains on the Decoder podcast why media authentication standards like C2PA are going nowhere, and why watermarking AI content isn’t working either.
Your AI agent is a snitch: We’re chatting on Signal, enjoying encryption, right? But your DIY productivity agent is piping the whole thing back to Anthropic. (John Scott-Railton, X)
A Finnish newsroom’s AI rules: Helsingin Sanomat’s guidelines (Esa Mäkinen, LinkedIn)
ChatGPT is getting ads, starting with US users: People on the free tier and on the $8 Go plan are being shown ads tailored to answers, in prominent photo boxes. If only OpenAI would link to journalistic content with the same enthusiasm. (Emma Roth, The Verge)
Can AI help crack the 1986 Olof Palme assassination? Anton Berg and Martin Johnson are using AI to reanalyze the huge archive of police material. Their podcast Spår follows the work (in Swedish).
Microsoft is launching a Publisher Content Marketplace to license articles into AI products like Copilot, promising publishers control, transparency, and usage-based payment for content that grounds conversational answers.
How to use AI without getting dumb: Strategies for critical prompt design to keep AI from becoming a cheap shortcut or decision-maker. (Paul Bradshaw, Online Journalism Blog)