A Basic AI Kit for the Newsroom: Stop sending one-off requests to a chatbot and start building a briefing folder with a style guide, examples, constraints, and source lists. A checklist aimed at small newsrooms that can’t afford to experiment blindly. (Alexey Terekhov, Internews)
/ AI & Journalism
Who’s hyping whom? We cover AI while AI puts our industry under pressure. Transformation has become the default framing. A special issue of Digital Journalism looks at the hype cycle.
Researchers found that chatbots, in their eagerness to please, are overly agreeable when giving interpersonal advice. GPT-4o, Gemini 1.5 Flash, Claude 3.7 Sonnet, and others affirm users’ behavior even when it’s harmful or illegal.
Why can’t language models write well? Because they’ve been trained into obedience: rule-following, terrified of biology, allergic to weirdness. Meanwhile the genuinely strange GPT-2 from 2019 was putting lemon-eating men in showers. (Jasmine Sun, The Atlantic)
Brain Fry
The mental fog and decision paralysis that come from overseeing too many AI tools at once. Coined by researchers at Boston Consulting Group in a January 2026 study of 1,488 U.S. workers. The culprits: AI oversight and workload creep. The sweet spot for parallel tools is three; after that, self-reported productivity drops. Bedard, Julie...
Jagged Frontier
The invisible boundary between what AI can and cannot do. Coined by Ethan Mollick and co-authors in a working paper based on experiments with consultants. Tasks that seem equally hard can land on opposite sides. Idea generation: easy for AI. Basic arithmetic: surprisingly not. Without extensive hands-on experience, you won't know which is which until...
Rage Code
When someone builds a competing product out of spite, within hours, to prove a point. Used in March 2026 after developer Yash Bhardwaj threatened to open-source his app exactly one minute after an obnoxious dude announced he'd clone it for free using Claude Code. Vibecoding's angry cousin.
Fortune’s Nick Lichtenberg cranked out 600+ stories in eight months with the help of AI. He says it’s “like a sports car that you can crash if you’re not careful. You’ve got to be like a Formula One driver.” His editor says it’s like having “10 Nicks.” (Isabella Simonetti, Wall Street Journal)
Journalists are using AI to rebuild the support structures they lost when they left traditional newsrooms: editors, fact-checkers, rewrite desks. Alex Heath talks to Claude instead of colleagues. Kevin Roose has a Master Editor agent running sub-agents. And everyone agrees AI writing sounds generic. (Maxwell Zeff, Wired)
Clean, precise prose is now a liability. Non-native English speakers and autistic writers are being flagged as AI because they write too well. Meanwhile, the actual bots are getting sloppier on purpose, reports Emma Alpern in New York Magazine. Glad this is getting more attention.
“Human in the loop” sounds like oversight. Our cognitive biases turn it into a rubber stamp. Damon Kiesow on how to make checking AI output hurt a little, like red-teaming and forced rewrites. (Working Systems)
The forensic heatmap supposedly exposing a photo as fake? Also fake. Forensic cosplay to create doubt about real photos. (shirin anlen, Mahsa Alimardani, Tech Policy Press)
