The mental fog and decision paralysis that come from overseeing too many AI tools at once. Coined by researchers at Boston Consulting Group in a January 2026 study of 1,488 U.S. workers. The culprits: AI oversight and workload creep. The sweet spot for parallel tools is three; after that, self-reported productivity drops. Bedard, Julie...
AI & Journalism
Jagged Frontier
The invisible boundary between what AI can and cannot do. Coined by Ethan Mollick and co-authors in a working paper based on experiments with consultants. Tasks that seem equally hard can land on opposite sides. Idea generation: easy for AI. Basic arithmetic: surprisingly not. Without extensive hands-on experience, you won't know which is which until...
Rage Code
When someone builds a competing product out of spite, within hours, to prove a point. Used in March 2026 after developer Yash Bhardwaj threatened to open-source his app exactly one minute after an obnoxious dude announced he'd clone it for free using Claude Code. Vibecoding's angry cousin.
Fortune’s Nick Lichtenberg cranked out 600+ stories in eight months, with the help of AI. He says that it’s “like a sports car that you can crash if you’re not careful. You’ve got to be like a Formula One driver.” His editor says it’s like having “10 Nicks.” (Isabella Simonetti, Wall Street Journal)
Journalists are using AI to rebuild the support structures they lost when they left traditional newsrooms: editors, fact-checkers, rewrite desks. Alex Heath talks to Claude instead of colleagues. Kevin Roose has a Master Editor agent running sub-agents. And everyone agrees AI writing sounds generic. (Maxwell Zeff, Wired)
Clean, precise prose is now a liability. Non-native English speakers and autistic writers are being flagged as AI because they write too well. Meanwhile, the actual bots are getting sloppier on purpose, reports Emma Alpern in New York Magazine. Glad this is getting more attention.
“Human in the loop” sounds like oversight. Our cognitive biases turn it into a rubber stamp. Damon Kiesow on how to make checking AI output hurt a little, like red-teaming and forced rewrites. (Working Systems)
The forensic heatmap supposedly exposing a photo as fake? Also fake. Forensic cosplay to create doubt about real photos. (shirin anlen, Mahsa Alimardani, Tech Policy Press)
Anthropic asked 81,000 Claude users what they want from AI. People in lower-income countries are more optimistic than those in Europe or North America. Worry about job loss is the clearest predictor of negative sentiment. Qualitative research at this scale was hard to do before. Whether Anthropic should be the one doing it is a question the report doesn’t ask.
Google is rewriting news headlines in search results. “Copilot Changes: Marketing Teams at it Again.” That’s not a headline a journalist wrote. Google wrote it. Without telling anyone. (Sean Hollister, The Verge)
Journalist suspended for publishing AI-hallucinated quotes: A former editor-in-chief of NRC used ChatGPT, Perplexity, and NotebookLM for his newsletter, but didn’t verify quotes. He had warned colleagues about exactly this. (Dan Milmo, Guardian)
Tokenmaxxing
Using as many AI tokens as possible, as fast as possible. Enabled by agentic coding tools that spawn subagents running unsupervised. Writing an essay uses around 10,000 tokens. Tokenmaxxers blow through billions a week, burning the planet along the way. Fat token budgets have become a job perk, and some rack up monthly...
