In this issue: Sloppypasta is everywhere and we need to talk about it. Tokyo Broadcasting System’s Emiko Kawabata on what happens to journalism when AI becomes the gateway to information. A tool for image verification. And: a Chrome plugin for people who just want the text.
/ AI & Journalism
A 47-step AI tool that writes nothing: Bauer’s internal content briefing system pulls articles from competitors, looks at rankings, and identifies content gaps. The journalist still has to write the piece. “The premium is on expertise and authority. We wouldn’t ever want to do anything which dilutes that.” (John Rahim, The Media Stack)
Sloppypasta
The term for pasting raw, unread AI output into a conversation, shifting the work of reading, verifying, and distilling onto whoever receives it. It’s rude, and it has real costs: eroded trust and growing frustration for recipients as the behavior spreads. The fix: read it, verify it, cut it down, disclose it, and...
Schibsted's Videofy is now open source: The tool pulls a published article, writes a script, matches footage, adds a voiceover, and hands editors a finished video.
Image Verification Assistant: A joint CERTH-ITI and Deutsche Welle tool for image verification, metadata analysis, and reverse image search. Alpha stage, open source.
Three roles for AI in journalism: source, colleague, assistant. A framework that’s more useful than another round of newsroom culture war. (Stephen J. Adler, Columbia Journalism Review)
The first white-collar job that AI can actually replace is the one that built the AI. Now coding is conversational. One dev is pleading with his chatbot: “Pushing code that fails pytest is unacceptable and embarrassing.” (Clive Thompson, New York Times)
Before you download a 35 GB model and watch your laptop give up: CanIRun.ai is a reality check for the local-AI-curious. Yes, there are options for an M2 with 8 GB RAM. Llama 3.1 8B is a “tight fit”, but Qwen 3.5 2B “runs great”.
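Why is an 8B model a “tight fit” on 8 GB? A rough rule of thumb (my own back-of-envelope arithmetic, not CanIRun.ai’s method): weight memory ≈ parameter count × bytes per weight, plus a couple of gigabytes of overhead for the KV cache, activations, and the runtime itself. The overhead figure below is an assumption for illustration.

```python
def model_memory_gb(params_billion: float, bits_per_weight: int,
                    overhead_gb: float = 1.5) -> float:
    """Back-of-envelope RAM estimate for running a local LLM.

    Weights take params * (bits / 8) bytes; overhead_gb is a rough
    allowance (assumed here) for KV cache, activations, and runtime.
    """
    weights_gb = params_billion * 1e9 * bits_per_weight / 8 / 1e9
    return weights_gb + overhead_gb

# An 8B model at 4-bit quantization: ~4 GB of weights alone.
print(f"{model_memory_gb(8, 4):.1f} GB")  # ~5.5 GB -> tight on 8 GB of shared memory
# A 2B model at 4 bits: ~1 GB of weights.
print(f"{model_memory_gb(2, 4):.1f} GB")  # ~2.5 GB -> plenty of headroom
```

On an 8 GB Mac the OS and apps share that same memory pool with the model, which is why ~5.5 GB qualifies as tight while ~2.5 GB “runs great”.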
Sloppelgänger
An AI imitation of a real person that is not just uncanny but embarrassingly bad. Coined by writer Ingrid Burrington as a play on “doppelgänger,” the term names what happens when a company builds an AI persona around someone’s name and reputation without their consent, and the result is clunky and potentially career-damaging. The Grammarly...
Chatbait
The chatbot equivalent of clickbait: follow-up questions, unsolicited offers, and proactive DMs designed to keep users engaged rather than to help them. Where clickbait lures you into opening a link, chatbait lures you into giving up information. The term was coined after the release of GPT-5 by Lila Shroff in The Atlantic – and will...
Can you tell which passage was written by AI? This New York Times quiz is humbling either way. (Kevin Roose and Stuart A. Thompson)
Grammarly “cloned” Julia Angwin, Stephen King, and Neil deGrasse Tyson as AI editors, without asking. Angwin is now suing. The feature is gone, the apology is filed, and the AI Julia apparently gave bad advice. (Miles Klee, Wired)
34 projects, 24 contributors, 65 minutes: Hacks/Hackers held a vibe coding show & tell, and journalists shipped fact-checkers, salary surveys, power grid monitors, and a Discord clone.
