Ole Reissmann

About · Newsletter

THEFUTURE

When Curation Meets Automation

Newsletter sent 6.11.2025 by oler

In this issue: ChatGPT’s Pulse builds morning routines while publishers wonder if they’re still the briefing or just the source. David Chivers on multilingual news at scale and what actually works in AI journalism. Plus: A new technique to make LLMs less boring.

What we’re talking about: We in media love to tell ourselves: yes, there are chatbots and search engines, but we’ve got taste. Curation. Surprise! And then we ship another static homepage designed for everyone, which is basically no one.

Nikita Roy just showed me her morning briefing from Pulse by ChatGPT. She’s in Toronto, where the papers block ChatGPT, and still she gets a personalized feed from public sources: news, traffic, weather, the bits that matter when you’re trying to leave the house on time.

And it doesn’t just skim the open web, it peeks at her recent chats to make the briefing more relevant. Which is either genius or mildly creepy. So AI isn’t competing with homepages, it’s bypassing them. It’s building routines. It’s making discovery feel like a text from a friend who knows you’re into city council drama and ’00s pop-punk revivals.

So where do we fit? Do we still own the briefing, or are we just the sources worth briefing about? Or maybe I’m just rationalizing a future where the byline is the API key and the homepage is a polite museum.

What else I’ve been reading:

AI & Journalism Links

New paper: Which sources does ChatGPT provide in Germany when asked for the latest news? Main takeaway: Very different results depending on whether you use the consumer chat interface or the API. Chat users get more results from content partners, while the API returns more diverse, sometimes even fringe, sources.

The BBC wants to make its AI use transparent to users – and is coming up with a new icon: Sparkles out; a nondescript, boring, unobtrusive hexagon in.

A visit to the nonprofit that powers most of today’s AI training: Common Crawl ingests the web, journalism and all, unapologetically. The article falls a bit short on fair use and other archives, but it’s a good read. (Alex Reisner, The Atlantic)

How is the agentic web going? Amazon filed a lawsuit demanding Perplexity stop allowing its AI browser Comet to make purchases for users. (Shirin Ghaffary and Matt Day, Bloomberg)

This is not one of those “superintelligence is just around the corner” pieces. Instead, it asks whether we’re already learning too much about our own brains, demystifying thinking itself – and what that might mean. You can’t know too much, can you? (James Somers, The New Yorker)

And now: I’ve met David Chivers on two occasions through his work with the Lenfest AI Collaborative and Fellowship Program – and both times I was struck by his fun personality and deep knowledge as a strategist and consultant. Among many things, he used to be President and Publisher of The Des Moines Register. I’m glad he took my questions!

Three Questions with David Chivers

Hands on: An easy way to unlock LLM diversity and get more creative outputs instead of bland, boring slop? Well, count me in. Researchers published a paper proposing a technique they call “verbalized sampling,” and from the numbers, it sounds promising.

This is how it works: You tell the LLM to come up with several examples, and you try to steer it into the tails of the probability distribution. Here’s an example; use it as a system prompt or alongside your instructions:

Generate 30 responses to the input prompt. Each response should be approximately 120 words. Return the responses in JSON format with the key: "responses" (list of dicts). Each dictionary must include:

text: the response string only (no explanation or extra text).
probability: the estimated probability from 0.0 to 1.0 of this response given the input prompt (relative to the full distribution).

Give ONLY the JSON object, with no explanations or extra text.

Or you could try this:

Randomly sample the responses from the distribution, where the probability of each response must be below 0.10.

Additionally, the paper suggests chain-of-thought prompting with verbalized sampling.
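If you want to wire this into a pipeline, the same idea can be sketched in a few lines of Python. This is a minimal sketch, not the paper’s implementation: the function names, the fallback logic, and the 0.10 tail threshold are my own illustrative choices, and the “model reply” is hand-written rather than a real API response.

```python
import json
import random

def verbalized_sampling_prompt(n: int = 5, words: int = 120) -> str:
    """Build a verbalized-sampling system prompt like the one above."""
    return (
        f"Generate {n} responses to the input prompt. "
        f"Each response should be approximately {words} words. "
        'Return the responses in JSON format with the key: "responses" '
        "(list of dicts). Each dictionary must include:\n"
        "text: the response string only (no explanation or extra text).\n"
        "probability: the estimated probability from 0.0 to 1.0 of this "
        "response given the input prompt (relative to the full distribution).\n"
        "Give ONLY the JSON object, with no explanations or extra text."
    )

def sample_from_tail(reply_json: str, max_p: float = 0.10) -> str:
    """Parse the model's JSON reply and pick one low-probability response."""
    responses = json.loads(reply_json)["responses"]
    tail = [r["text"] for r in responses if r["probability"] < max_p]
    if not tail:
        # Fall back to the least likely response if nothing is under max_p.
        tail = [min(responses, key=lambda r: r["probability"])["text"]]
    return random.choice(tail)

# Hand-written stand-in for what a model might return:
fake_reply = json.dumps({"responses": [
    {"text": "A safe, obvious answer.", "probability": 0.60},
    {"text": "A quirky, unexpected answer.", "probability": 0.05},
]})
print(sample_from_tail(fake_reply))  # picks the low-probability response
```

The point is the division of labor: the prompt makes the model verbalize its own distribution, and your code then deliberately samples from the tail instead of taking the top answer.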

One more thing: There’s a lot going on, and the moment I’m unpacking my duffel bag, I’m already off to the next thing. Riga. Chicago. London. The AI Media Leaders Conference on November 27th in Hamburg. More on some/all of that later!

For now, I’ll leave you with Designer Frank Chimero. He talks about AI and specifically vibe coding. “Time saved is not strength gained,” he says, and gets into how Brian Eno works with machines for his music. And then there is a nice riff on the Ghibli movie “Spirited Away.” If I were you, I would make this my weekend read.

This is THEFUTURE.

Subscribe to THEFUTURE

Get this newsletter in your inbox. For free.

The previous issue is AI Studies and Gotcha Headlines, the next issue is When the CEO Says Don’t Trust the Product.