Is it wrong to write a book with AI? Joshua Rothman compares AI-generated fiction to the Roland TR-808 drum machine: once despised, now everywhere. The analogy is fun, the question is real, and the “Shy Girl” scandal gives it actual stakes. (Joshua Rothman, The New Yorker)
Fortune’s Nick Lichtenberg uses AI to write articles, was profiled by the WSJ, and the journalism internet lost its collective mind. Now the Reuters Institute interviews him: working with AI has “pushed me into more original reporting, because all algorithms and all AI products are necessarily backward-looking.”
“AI responses may include mistakes,” Google says at the bottom of its AI Overviews. How many? According to a startup, 1 out of 10 summaries is problematic. That’s “millions of erroneous answers every hour.” (The New York Times)
Peter Stuart’s AI-powered Velora Cycling runs on a custom CMS that automates everything from news discovery to pre-publish fact-checking. But the real business turns out to be licensing the tool. (Kari McMahon, A Media Operator)
What is it with the savior narrative? Sebastian Mallaby’s biography of Demis Hassabis (Google DeepMind) asks whether one decent man can steer AI development. It’s good on the philosophy and the personalities, less good on the hard questions. At times the awestruck tone is difficult to swallow. (Gideon Lichfield, The Economist)
Ghost Pepper: Hold-to-talk speech-to-text for macOS. Runs entirely on-device, powered by WhisperKit with local LLM cleanup.
apfel is the free AI already on your Mac. macOS Tahoe ships with a 3B-parameter LLM (good for quick tasks, not for complex reasoning).
There’s a rule – Betteridge’s law – that any headline phrased as a question can be answered with no. So when The New Yorker asks whether you can trust OpenAI’s Sam Altman to make the right calls on how A.I. is used “in immigration enforcement, domestic surveillance, and autonomous weaponry in war zones” – what do you think the answer is? (Ronan Farrow and Andrew Marantz)
“The New York Times has cut ties with a freelance journalist after discovering he used artificial intelligence to help write a book review that echoed elements of a review of the same book in the Guardian.”
Heretic strips the refusal behavior from any transformer-based language model automatically, no ML expertise needed. Decensoring Llama-3.1-8B-Instruct takes about 45 minutes with a modern graphics card.
A Basic AI Kit for the Newsroom: Stop sending one-off requests to a chatbot and start building a briefing folder with style guide, examples, constraints, and source lists. A checklist aimed at small newsrooms that can’t afford to experiment blindly. (Alexey Terekhov, Internews)
Who’s hyping whom? We cover AI while AI puts our industry under pressure. Transformation has become the default framing. A special issue of Digital Journalism looks at the hype cycle.
Researchers found that chatbots, in their eagerness to please, are overly agreeable when giving interpersonal advice. GPT-4o, Gemini-1.5-Flash, Claude 3.7 Sonnet, and others affirm users’ behavior even when it is harmful or illegal.
Why can’t language models write well? Because they’ve been trained into obedience: rule-following, terrified of biology, allergic to weirdness. Meanwhile the genuinely strange GPT-2 from 2019 was putting lemon-eating men in showers. (Jasmine Sun, The Atlantic)
Fortune’s Nick Lichtenberg cranked out 600+ stories in eight months, with the help of AI. He says that it’s “like a sports car that you can crash if you’re not careful. You’ve got to be like a Formula One driver.” His editor says it’s like having “10 Nicks.” (Isabella Simonetti, Wall Street Journal)