In this issue: ChatGPT’s Pulse builds morning routines while publishers wonder if they’re still the briefing or just the source. David Chivers on multilingual news at scale and what actually works in AI journalism. Plus: A new technique to make LLMs less boring.

What we’re talking about: We in media love to tell ourselves: yes, there are chatbots and search engines, but we’ve got taste. Curation. Surprise! And then we ship another static homepage designed for everyone, which is basically no one.

Nikita Roy just showed me her morning briefing from ChatGPT’s Pulse. She’s in Toronto, where the papers block ChatGPT, and she still gets a personalized feed from public sources: news, traffic, weather, the bits that matter when you’re trying to leave the house on time.

And it doesn’t just skim the open web, it peeks at her recent chats to make the briefing more relevant. Which is either genius or mildly creepy. So AI isn’t competing with homepages, it’s bypassing them. It’s building routines. It’s making discovery feel like a text from a friend who knows you’re into city council drama and ’00s pop-punk revivals.

So where do we fit? Do we still own the briefing or are we becoming more the sources worth briefing about? Or maybe I’m just rationalizing a future where the byline is the API key and the homepage is a polite museum.

What else I’ve been reading:

And now: I’ve met David Chivers on two occasions through his work with the Lenfest AI Collaborative and Fellowship Program – and both times I was struck by his fun personality and deep knowledge as a strategist and consultant. Among many things, he used to be President and Publisher of The Des Moines Register. I’m glad he took my questions!

Four Questions with David Chivers

David Chivers

David Chivers is founder, executive advisor, strategy consultant, and coach of Digital Acceleration Partners.

How can we better understand the AI hype?

Read John Thornhill’s Financial Times piece on whether we are witnessing the death of the web page. Also, watch Demis Hassabis explain how these models actually work. If you want the non-technical TL;DR: everything is changing very fast, nobody is sleeping, and the robots are terrible at math but excellent at haikus. In the Lenfest AI Collaborative, we see this reality daily. Some things truly change workflows, revenue, and audience access. Some things just make good slides. The skill is telling the difference.

What's the most exciting use of AI in journalism?

Multilingual news at scale. Chicago Public Media is expanding Spanish-language offerings through translation. The Philadelphia Inquirer is unlocking decades of archives for reporters and readers. I work with both through the Lenfest AI Collaborative, and it is encouraging to see AI used in ways that actually help the communities newsrooms serve.

And the most boring?

SEO slop content mills pretending to be reporting. Still bad. Now just automated. A simple test helps: does it save time, improve quality, expand access, or strengthen trust? If not, it is a toy, not a tool.

What's a good hobby to pick up?

Cycling. It is exercise, meditation, and a therapy session with scenery.

Hands on: An easy way to unlock LLM diversity and get more creative outputs instead of bland, boring slop? Well, count me in. Researchers have published a paper proposing a technique they call “verbalized sampling,” and judging from the numbers, it sounds promising.

This is how it works: You tell the LLM to come up with several responses, and you steer it into the tails of the probability distribution. Here is an example; use it as a system prompt or alongside your instructions:

Generate 30 responses to the input prompt. Each response should be approximately 120 words. Return the responses in JSON format with the key: "responses" (list of dicts). Each dictionary must include:

text: the response string only (no explanation or extra text).
probability: the estimated probability from 0.0 to 1.0 of this response given the input prompt (relative to the full distribution).

Give ONLY the JSON object, with no explanations or extra text.

Or you could try this:

Randomly sample the responses from the distribution; the probability of each response must be below 0.10.

Additionally, the paper suggests chain-of-thought prompting with verbalized sampling.
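If you’d rather enforce the 0.10 cutoff yourself instead of trusting the model to sample, you can do it client-side once the JSON comes back. A minimal sketch in Python (the function name and toy data are mine; the JSON layout is assumed from the prompt above):

```python
import json
import random

def sample_from_tails(raw_json, max_prob=0.10, seed=None):
    """Pick one answer from the model's verbalized distribution,
    keeping only the low-probability ("tail") candidates."""
    responses = json.loads(raw_json)["responses"]
    # Keep responses the model itself rated as unlikely -- the tails.
    tails = [r for r in responses if r["probability"] < max_prob]
    pool = tails or responses  # fall back if nothing clears the cutoff
    return random.Random(seed).choice(pool)["text"]

# Toy model output, shaped like the JSON the prompt above requests:
raw = json.dumps({"responses": [
    {"text": "The obvious take.",  "probability": 0.45},
    {"text": "A quirky angle.",    "probability": 0.08},
    {"text": "A left-field idea.", "probability": 0.04},
]})

print(sample_from_tails(raw, seed=1))
```

With real model output, the verbalized probabilities are estimates, not true likelihoods, so treat the cutoff as a dial rather than a guarantee.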

One more thing: There’s a lot going on, and the moment I’ve unpacked my duffel bag, I’m already off to the next thing. Riga. Chicago. London. The AI Media Leaders Conference on November 27th in Hamburg. More on some/all of that later!

For now, I’ll leave you with designer Frank Chimero. He talks about AI and specifically vibe coding. “Time saved is not strength gained,” he says, and gets into how Brian Eno works with machines in his music. And then there is a nice riff on the Ghibli movie “Spirited Away.” If I were you, I would make this my weekend read.

This is THEFUTURE.