The information ecosystem has four new rules: production is cheap, machines are the audience, content is liquid, and intention beats attention. Shuwei Fang buries the current publishing model, and somehow ends up optimistic: journalism needs to stop protecting the article and start selling the process. (Reuters Institute)
Linkposts
tropes.md is a one-file blacklist of AI writing tells for your system prompt.
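A one-file blacklist like this is trivial to wire in. A minimal sketch, assuming only that the file is the `tropes.md` from the linked repo (the `build_system_prompt` helper and the wrapper wording are illustrative, not from the post):

```python
from pathlib import Path

def build_system_prompt(base_prompt: str, tropes_path: str = "tropes.md") -> str:
    """Append the blacklist of AI writing tells to a base system prompt."""
    tropes = Path(tropes_path).read_text(encoding="utf-8")
    return (
        base_prompt.strip()
        + "\n\nAvoid the following AI writing tells:\n"
        + tropes.strip()
    )
```

The point of a single file is exactly this: no config format, no dependency, just concatenation.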
The vibe is Palantir cosplay: World Monitor by Elie Habib is a real-time “situational awareness” dashboard with threat feeds, geopolitical maps, and a three-stage AI classification pipeline. He calls it a “weekend hack.”
“Not ‘how do I get paid for my articles by AI’ but ‘how do I architect my knowledge so it can reach customers I’ve never had access to before’”: A three-layer AI monetization framework, with O’Reilly as proof of concept. (Florent Daudens, AI in the News)
BBC, FT, Guardian, Sky News, and The Telegraph are launching Spur, a coalition to set shared licensing standards for AI use of journalism. It’s not a collective licensing body, but it aims to shape what pricing looks like. More publishers welcome. (Charlotte Tobitt, Press Gazette)
238k speeches: Guardian and UCL trained a non-generative ML model on 100 years of House of Commons debates. Both Labour and Conservative MPs are currently at or near their most hostile on immigration, driven by competition with Reform UK.
Which AI to use: Free users don’t get the good AI, and even if you pay, you have to pick thinking mode. Once you’ve done that, the model matters less than the harness. For newsrooms: curated archive retrieval, a system prompt encoding editorial voice, CMS integration, verification. (Ethan Mollick, One Useful Thing)
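Mollick’s list of harness ingredients can be sketched as a data flow. Everything below is a stand-in to show the shape, not his implementation: the `Harness` class, the stub callables, and the wiring are all hypothetical; only the four ingredients (retrieval, editorial voice, model call, verification) come from the item.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Harness:
    retrieve: Callable[[str], list[str]]      # curated archive retrieval
    style_prompt: str                         # editorial voice as a system prompt
    model: Callable[[str, str], str]          # (system, user) -> draft; any LLM client
    verify: Callable[[str, list[str]], bool]  # draft checked against retrieved sources

    def run(self, query: str) -> str:
        sources = self.retrieve(query)
        user = query + "\n\nSources:\n" + "\n".join(sources)
        draft = self.model(self.style_prompt, user)
        if not self.verify(draft, sources):
            raise ValueError("draft failed verification against sources")
        return draft
```

Swapping the model out is one field; the harness (retrieval, voice, verification) is where the newsroom-specific work lives, which is the argument.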
An Ars Technica story about an AI agent writing a hit piece on a human contained AI-fabricated quotes attributed to the human. The author was sick, rushing, and used ChatGPT. The article was pulled. (Emanuel Maiberg, 404 Media)
Time put “The People vs. AI” on its cover and profiled nine Americans fighting data centers, chatbot harms, and AI in hospitals. A companion essay argues AI policy has left the wonk phase and entered kitchen-table politics, but neither party in the U.S. knows what to say about it yet. (Andrew R. Chow / Rebecca Lissner)
The Independent Journalism Atlas is building a database of people doing journalism outside traditional newsrooms. It maps creators by beat, format, business model, and audience.
“In 2026, it’s a scary time to work for a living.” That’s how the Guardian launches Reworked, a yearlong series on AI and the future of work. The same technology that’s making software engineers nervous is making them realize they have more in common with warehouse workers than with their CEOs. (Samantha Oltman)
Just send the prompt twice? A new paper argues that repeating the prompt helps non-reasoning models. There’s a catch: the models tested (4o, Claude 3.7) have since been retired.
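The trick itself is one line. A minimal sketch (the helper name and separator are my choices; the paper’s exact prompt format may differ):

```python
def repeat_prompt(prompt: str, times: int = 2, sep: str = "\n\n") -> str:
    """Duplicate the user prompt, per the prompt-repetition trick."""
    return sep.join([prompt] * times)
```

Whether the gain survives on current reasoning models is exactly the open question the catch points at.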
Paul Ford (who wrote a legendary 38,000-word piece, “What Is Code?”) on his new obsession with vibe coding: He’s building apps on the subway that he’d once have billed clients $350,000 for. “It stings to be made obsolete, but it’s fun to code on the train, too.” (New York Times)
Substack hosts, algorithmically promotes, and takes a revenue cut from newsletters openly pushing Nazi ideology, Holocaust denial, and white supremacy. (Geraldine McKelvie, The Guardian)
There’s no neat technical fix for that: The more useful an agent is, the more access it needs, and the more access it has, the riskier it gets. Yes, it’s about the Moltbot/OpenClaw agent craze, but also, it’s not. (Dan Hon, Things That Caught My Attention)