The name says “Code,” but you don’t need to write any: Florent Daudens walks journalists through setting up Claude Code as a persistent reporting assistant that can read your files, track your story, and stop asking you to re-upload that PDF for the fifth time.
Get Technical: Deep Dives
Vibe coding starter guide for newsrooms (Joe Amditis, Center for Cooperative Media)
The Journalism Benchmark Cookbook: We prototyped a benchmark evaluating the task of information extraction in journalism. (Charlotte Li, Jeremy Gilbert, Nicholas Diakopoulos, Generative AI in the Newsroom)
Is fine-tuning having a moment again? After being overshadowed by bigger, shinier models, it’s creeping back into the conversation—and this time, it might actually stick. (Kevin Kuipers, Sota)
How to get fewer hallucinations: “What often is deemed a ‘wrong’ response is often merely a first pass at describing the beliefs out there. And the solution is the same: iterate the process.” (Mike Caulfield, The End(s) of Argument)
Less than nine seconds of watching TV: That’s the energy consumption Google reports for the “median Gemini Apps text prompt” in May 2025, which includes “all LLM models serving the Gemini app, including all supporting models for scoring, ranking, classification, and other prompt routing tasks” and accounts for idle machines and overhead.
“Despite appearances, an LLM does not actually output text”: The Guardian’s Joseph Lochlann Smith with a myth-busting deep dive. (Medium)
Image generation, without the “AI look”: Flux.1 Krea is an open weights model with opinionated aesthetics.
Make prompt engineering great again: A growing list of tools may help you improve your generative AI prompts, but sometimes all you need is a spreadsheet. (Clare Spencer, Generative AI in the Newsroom)
Case Study on iterative prompt evaluation and improvement: A workflow for targeted prompting to refine AI-generated newsletter headlines. (Ashlyn Wang, Generative AI in the Newsroom)
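The spreadsheet approach in the two pieces above boils down to a grid: one row per prompt variant, one column per test input, outputs side by side for comparison. A minimal sketch of that idea, where `run_model` is a hypothetical stand-in for whatever LLM API you use:

```python
import csv
import io

def run_model(prompt: str, text: str) -> str:
    # Placeholder: a real implementation would call a model API with
    # `prompt` as the instruction and `text` as the input.
    return f"headline for: {text[:20]}"

def evaluate(prompts: dict, samples: list) -> str:
    """Build a CSV grid: one row per prompt variant, one column per sample.

    Open the result in any spreadsheet tool and compare rows by eye.
    """
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["variant"] + [f"sample_{i}" for i in range(len(samples))])
    for name, prompt in prompts.items():
        writer.writerow([name] + [run_model(prompt, s) for s in samples])
    return buf.getvalue()
```

Nothing here is specific to either article’s exact workflow; it just shows why a plain CSV is often enough for side-by-side prompt comparison.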
LLMs have a “lost in the middle” problem – they focus on the start and end of documents but miss key info in between. (Adam Zewe, MIT News)
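One common mitigation for this effect (not from the MIT piece itself, just a widely used retrieval trick) is to reorder passages so the highest-ranked ones land at the start and end of the context, where models attend most reliably:

```python
def reorder_for_context(passages_ranked_best_first: list) -> list:
    """Alternate top-ranked passages between the front and the back of the
    context, leaving the lowest-ranked material in the middle."""
    front, back = [], []
    for i, passage in enumerate(passages_ranked_best_first):
        (front if i % 2 == 0 else back).append(passage)
    return front + back[::-1]  # e.g. best first, second-best last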
What makes workflows different from agents? A good introduction and explanation from Anthropic, and a case for keeping things simple.
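Anthropic’s distinction can be sketched in a few lines: in a workflow, your code fixes the steps and the model fills them in; in an agent, the model decides what happens next. `call_llm` below is a hypothetical stub standing in for any model API.

```python
def call_llm(prompt: str) -> str:
    # Placeholder for a real model call; returns canned responses so the
    # control-flow difference is visible without an API key.
    if "draft" in prompt:
        return "DRAFT based on: " + prompt
    if "outline" in prompt:
        return "1. Lede 2. Context 3. Quote"
    return "DONE"

def workflow(topic: str) -> str:
    """Workflow: the code dictates the sequence of steps."""
    outline = call_llm(f"Write an outline about {topic}")
    return call_llm(f"Write a draft from this outline: {outline}")

def agent(goal: str, max_steps: int = 5) -> str:
    """Agent: the model's output drives the loop until it signals it is done
    (or a step budget runs out -- the 'keep things simple' safeguard)."""
    state = goal
    for _ in range(max_steps):
        state = call_llm(state)
        if state == "DONE":
            break
    return state
```

The workflow is predictable and easy to debug; the agent trades that predictability for flexibility, which is why the Anthropic piece recommends reaching for workflows first.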
This research paper argues against “reasoning/thinking” hype: intermediate tokens often lack substance, despite appearances.
Video: AI prompt engineering deep dive (Anthropic)