Peter Stuart’s AI-powered Velora Cycling runs on a custom CMS that automates everything from news discovery to pre-publish fact-checking. But the real business turns out to be licensing the tool. (Kari McMahon, A Media Operator)
/ AI & Journalism
What is it with the savior narrative? Sebastian Mallaby’s biography of Demis Hassabis (Google DeepMind) asks whether one decent man can steer AI development. It’s good on the philosophy and the personalities, less good on the hard questions. At times the awestruck tone is difficult to swallow. (Gideon Lichfield, The Economist)
Ghost Pepper: Hold-to-talk speech-to-text for macOS. Runs entirely locally, powered by WhisperKit with LLM-based transcript cleanup.
apfel is the free AI already on your Mac: macOS Tahoe ships with a 3B-parameter on-device LLM (quick tasks yes, complex reasoning no).
There’s a rule (Betteridge’s law of headlines): any headline phrased as a question can be answered with no. So when The New Yorker asks whether you can trust OpenAI’s Sam Altman to make the right calls on how A.I. is used “in immigration enforcement, domestic surveillance, and autonomous weaponry in war zones” – what do you think the answer is? (Ronan Farrow and Andrew Marantz)
“The New York Times has cut ties with a freelance journalist after discovering he used artificial intelligence to help write a book review that echoed elements of a review of the same book in the Guardian.”
Heretic strips the refusal behavior from any transformer-based language model automatically, no ML expertise needed. Decensoring Llama-3.1-8B-Instruct takes about 45 minutes with a modern graphics card.
A Basic AI Kit for the Newsroom: Stop sending one-off requests to a chatbot and start building a briefing folder with a style guide, examples, constraints, and source lists. A checklist aimed at small newsrooms that can’t afford to experiment blindly. (Alexey Terekhov, Internews)
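The briefing-folder idea is simple enough to sketch in code. The file names and folder layout below are assumptions for illustration, not part of the Internews checklist: one plain-text file per component, concatenated into a reusable system prompt instead of retyping context into a chatbot each time.

```python
from pathlib import Path

# Hypothetical briefing-folder layout: one text file per component.
BRIEFING_FILES = [
    "style_guide.txt",
    "examples.txt",
    "constraints.txt",
    "source_list.txt",
]

def build_briefing(folder: str) -> str:
    """Concatenate whichever briefing files exist into one system prompt."""
    parts = []
    for name in BRIEFING_FILES:
        path = Path(folder) / name
        if path.exists():
            # Label each section so the model can tell the components apart.
            title = name.removesuffix(".txt").replace("_", " ").title()
            parts.append(f"## {title}\n{path.read_text().strip()}")
    return "\n\n".join(parts)
```

The resulting string can be pasted in as the system prompt (or the first message) of any chatbot session, which is the whole point of the kit: the same briefing every time, whoever in the newsroom is asking.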
Who’s hyping whom? We cover AI while AI puts our industry under pressure. Transformation has become the default framing. A special issue of Digital Journalism looks at the hype cycle.
Researchers found that chatbots, in their eagerness to please, are overly agreeable when giving interpersonal advice. GPT-4o, Gemini-1.5-Flash, Claude 3.7 Sonnet, and others affirm users’ behavior even when it is harmful or illegal.
Why can’t language models write well? Because they’ve been trained into obedience: rule-following, terrified of biology, allergic to weirdness. Meanwhile the genuinely strange GPT-2 from 2019 was putting lemon-eating men in showers. (Jasmine Sun, The Atlantic)

