Why can’t language models write well? Because they’ve been trained into obedience: rule-following, terrified of biology, allergic to weirdness. Meanwhile the genuinely strange GPT-2 from 2019 was putting lemon-eating men in showers. (Jasmine Sun, The Atlantic)
Fortune’s Nick Lichtenberg cranked out 600+ stories in eight months with the help of AI. He says it’s “like a sports car that you can crash if you’re not careful. You’ve got to be like a Formula One driver.” His editor says it’s like having “10 Nicks.” (Isabella Simonetti, Wall Street Journal)
Journalists are using AI to rebuild the support structures they lost when they left traditional newsrooms: editors, fact-checkers, rewrite desks. Alex Heath talks to Claude instead of colleagues. Kevin Roose has a Master Editor agent running sub-agents. And everyone agrees AI writing sounds generic. (Maxwell Zeff, Wired)
Clean, precise prose is now a liability. Non-native English speakers and autistic writers are being flagged as AI because they write too well. Meanwhile, the actual bots are getting sloppier on purpose, reports Emma Alpern in New York Magazine. Glad this is getting more attention.
“Human in the loop” sounds like oversight. Our cognitive biases turn it into a rubber stamp. Damon Kiesow on how to make checking AI output hurt a little, like red-teaming and forced rewrites. (Working Systems)
The forensic heatmap supposedly exposing a photo as fake? Also fake. Forensic cosplay, used to cast doubt on real photos. (shirin anlen, Mahsa Alimardani, Tech Policy Press)
Anthropic asked 81,000 Claude users what they want from AI. People in lower-income countries are more optimistic than those in Europe or North America. Worry about job loss is the clearest predictor of negative sentiment. Qualitative research at this scale was hard to do before. Whether Anthropic should be the one doing it is a question the report doesn’t ask.
Google is rewriting news headlines in search results. “Copilot Changes: Marketing Teams at it Again.” No journalist wrote that headline. Google did. Without telling anyone. (Sean Hollister, The Verge)
Journalist suspended for publishing AI-hallucinated quotes: A former editor-in-chief of NRC used ChatGPT, Perplexity, and NotebookLM for his newsletter, but didn’t verify quotes. He had warned colleagues about exactly this. (Dan Milmo, Guardian)
A 47-step AI tool that writes nothing: Bauer’s internal content briefing system pulls articles from competitors, looks at rankings, and identifies content gaps. The journalist still has to write the piece. “The premium is on expertise and authority. We wouldn’t ever want to do anything which dilutes that.” (John Rahim, The Media Stack)
Schibsted’s Videofy is now open source: The tool pulls a published article, writes a script, matches footage, adds a voiceover, and hands editors a finished video.
Image Verification Assistant: A joint CERTH-ITI and Deutsche Welle tool for image verification, metadata analysis, and reverse image search. Alpha stage, open source.
Three roles for AI in journalism: source, colleague, assistant. A framework that’s more useful than another round of newsroom culture war. (Stephen J. Adler, Columbia Journalism Review)
The first white-collar job that AI can actually replace is the one that built the AI. Now coding is conversational. One dev is pleading with his chatbot: “Pushing code that fails pytest is unacceptable and embarrassing.” (Clive Thompson, New York Times)