An Ars Technica story about an AI agent writing a hit piece on a human itself contained AI-fabricated quotes attributed to that human. The author was sick, rushing, and used ChatGPT. The article was pulled. (Emanuel Maiberg, 404 Media)
/ AI & Journalism
Time put “The People vs. AI” on its cover and profiled nine Americans fighting data centers, chatbot harms, and AI in hospitals. A companion essay argues AI policy has left the wonk phase and entered kitchen-table politics, but neither party in the U.S. knows what to say about it yet. (Andrew R. Chow / Rebecca Lissner)
“In 2026, it’s a scary time to work for a living.” That’s how the Guardian launches Reworked, a yearlong series on AI and the future of work. The same technology that’s making software engineers nervous is making them realize they have more in common with warehouse workers than with their CEOs. (Samantha Oltman)
You can just build things
My website was put together with a text editor and a "better done than perfect" attitude. I added new things here and there. Fiddled around. Over time, it got messy. And I never built a dark mode, because I feared the time it would take to rebuild everything. But you're looking at a much improved...
Just send the prompt twice? A new paper argues that repeating the prompt helps non-reasoning models. There’s a catch: the models tested (GPT-4o, Claude 3.7) have since been retired.
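If you want to try it, the trick is trivially simple. A minimal sketch, assuming the OpenAI Python SDK; the model name and the way the prompt is doubled are my placeholders, not necessarily the paper's exact setup:

```python
# Minimal sketch of prompt repetition, assuming the OpenAI Python SDK.
# Model name and duplication format are illustrative, not the paper's setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_twice(question: str, model: str = "gpt-4o") -> str:
    # The trick: include the same prompt twice in one user message,
    # giving a non-reasoning model a second "pass" over the question.
    doubled = f"{question}\n\n{question}"
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": doubled}],
    )
    return response.choices[0].message.content

print(ask_twice("Which is larger, 9.11 or 9.9?"))
```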
Paul Ford (who wrote a legendary 38,000-word piece on “What Is Code”) on his new obsession with vibe coding: He’s building apps on the subway that he’d once have billed clients $350,000 for. “It stings to be made obsolete, but it’s fun to code on the train, too.” (New York Times)
Is the image even real? Can we verify the facts?
Those questions framed the conversation at last Thursday's AI for Media Network gathering in Hamburg. 120+ representatives from media organizations and academia met to discuss AI in verification and research. It was the first time the event was hosted at SPIEGEL-Gruppe's Hamburg offices. Gerret von Nordheim, deputy head of SPIEGEL's fact-checking department, presented our in-house...
LLM council
An LLM council is what it sounds like: you stop treating a large language model like a single all-knowing oracle, and instead you run a mini editorial meeting. You ask multiple models, or the same model in multiple roles, to weigh in on the same question. Then you force them to disagree, critique, and verify...
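Here's roughly what that looks like as code: a toy council, assuming the OpenAI Python SDK. The model name, roles, and prompts are illustrative assumptions, not anyone's published implementation:

```python
# A toy LLM council, assuming the OpenAI Python SDK.
# Model names, roles, and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

ROLES = [
    "You are a skeptical fact-checker. Point out weak or unverifiable claims.",
    "You are a domain expert. Answer as accurately and completely as you can.",
    "You are a devil's advocate. Argue the strongest opposing view.",
]

def ask(system: str, user: str, model: str = "gpt-4o-mini") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return response.choices[0].message.content

def council(question: str) -> str:
    # Round 1: each "council member" answers independently.
    opinions = [ask(role, question) for role in ROLES]
    # Round 2: a chair reads the disagreement and synthesizes a verdict.
    briefing = "\n\n---\n\n".join(opinions)
    return ask(
        "You chair an editorial meeting. Weigh the conflicting answers below, "
        "flag unresolved disagreements, and give a final, hedged answer.",
        f"Question: {question}\n\nCouncil answers:\n{briefing}",
    )

print(council("Should newsrooms label AI-assisted reporting?"))
```

The point isn't the plumbing; it's the editorial structure: independent answers first, then forced confrontation, so no single model's confident wrongness goes unchallenged.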
Links to a talk on trust and AI
Taken out of context, without the talk at AI for Media, these links are somewhat incomprehensible.
Google DeepMind: SynthID
olereissmann.com: Hands on with SynthID
Guardian: AI images of Maduro capture reap millions of views on social media
Guardian: White House posts digitally altered image of woman arrested after ICE protest
MDR: Saxon police union defends AI-generated image (Sächsische Polizeigewerkschaft verteidigt KI-Bild)
Samira El Ouassil:...
Substack hosts, algorithmically promotes, and takes a revenue cut from newsletters openly pushing Nazi ideology, Holocaust denial, and white supremacy. (Geraldine McKelvie, The Guardian)
There’s no neat technical fix for agent security: the more useful an agent is, the more access it needs, and the more access it has, the riskier it gets. Yes, it’s about the Moltbot/OpenClaw agent craze, but also, it’s not. (Dan Hon, Things That Caught My Attention)
Moltbook, the viral “social network for bots,” looked like a glimpse of the AI agent future. It wasn’t. And some of it was humans shitposting. (Will Douglas Heaven, MIT Technology Review)
What is Claude? Anthropic doesn’t know either. Gideon Lewis-Kraus spent months inside the company, watching researchers try to understand their own AI. A good example of how to write about this technology without falling for the hype or waving it all away. (The New Yorker)
Why authenticity labels and AI watermarks are failing: Verge reporter Jess Weatherbed explains on the Decoder podcast why media authentication standards like C2PA are going nowhere. And why watermarking AI content isn’t working either.