We're figuring this out. I track, test, and write about how AI is reshaping news, products, and workflows. You'll find linkposts, essays, a weekly(ish) newsletter, a glossary, and experiments. It's a bit of a mess. But so is everything right now.
Ollama, a popular way to run local AI models, seems to be one of the shadiest (Zetaphor, Sleeping Robots). I’ve recently switched to LM Studio and use it with Gemma4; highly recommended.
Is the “stochastic parrot” framing both empirically wrong and harmful to AI ethics? (SE Gyges, Very Sane AI)
Have you noticed how gleefully the Guardian chronicles the firing of senior journalists for errors made while using AI? Made-up quotes, plagiarized sentences; they should have known better, they knew better. Now they have to suffer, full name and picture. Meanwhile, a broader audience comes to terms with the fact that books are sometimes written with the help of...
Is it wrong to write a book with AI? Joshua Rothman compares AI-generated fiction to the Roland TR-808 drum machine: once despised, now everywhere. The analogy is fun, the question is real, and the “Shy Girl” scandal gives it actual stakes. (Joshua Rothman, The New Yorker)
Fortune’s Nick Lichtenberg uses AI to write articles, was profiled by the WSJ, and the journalism internet lost its collective mind. Now the Reuters Institute has interviewed him. Working with AI, he says, has “pushed me into more original reporting, because all algorithms and all AI products are necessarily backward-looking.”
“AI responses may include mistakes,” Google says at the bottom of its AI Overviews. How many? According to a startup, 1 out of 10 summaries is problematic. That’s “millions of erroneous answers every hour.” (The New York Times)
Peter Stuart’s AI-powered Velora Cycling runs on a custom CMS that automates everything from news discovery to pre-publish fact-checking. But the real business turns out to be licensing the tool. (Kari McMahon, A Media Operator)
What is it with the savior narrative? Sebastian Mallaby’s biography of Demis Hassabis (Google DeepMind) asks whether one decent man can steer AI development. It’s good on the philosophy and the personalities, less good on the hard questions. At times the awestruck tone is difficult to swallow. (Gideon Lichfield, The Economist)
Ghost Pepper: hold-to-talk speech-to-text for macOS. Runs entirely locally, with WhisperKit for transcription and an on-device LLM for cleanup.
apfel is the free AI already on your Mac. macOS Tahoe ships with a 3B parameter LLM (quick tasks yes, complex reasoning no).
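For the curious: that on-device model is exposed to developers through Apple’s FoundationModels framework. A minimal sketch of a “quick task” call might look like the following (this assumes the API shape Apple introduced at WWDC 2025, which may have changed; treat names and signatures as approximate):

```swift
import FoundationModels

// Sketch: ask macOS Tahoe's built-in ~3B model for a quick task.
// LanguageModelSession and respond(to:) are assumed from Apple's
// WWDC 2025 FoundationModels framework; verify against current docs.
func summarize(_ note: String) async throws -> String {
    let session = LanguageModelSession()  // on-device system model
    let response = try await session.respond(
        to: "Summarize in one sentence: \(note)"
    )
    return response.content  // plain-text model output
}
```

This is firmly in the “quick tasks yes” column, things like summaries, rewrites, and tagging; for complex reasoning you would still reach for a larger model.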
There’s a rule, Betteridge’s law of headlines: any headline phrased as a question can be answered with no. So when The New Yorker asks whether you can trust OpenAI’s Sam Altman to make the right calls on how A.I. is used “in immigration enforcement, domestic surveillance, and autonomous weaponry in war zones” – what do you think the answer is? (Ronan Farrow and Andrew Marantz)