
THEFUTURE

18 Journalists Reveal How They Use AI

Newsletter sent 13.5.2025 by oler

In this issue: Real journalists using AI right now. How Claude roasted this newsletter (and made it better). And what The New York Times’ Rubina Fillion is reading.

Eighteen journalists and news execs just shared their real experiences with AI in the Columbia Journalism Review. Emilia David from VentureBeat gets reader feedback anytime she wants. Ben Welsh from Reuters finds hidden info in mountains of data. Atlantic CEO Nicholas Thompson gets help working on his book.

Welcome to THEFUTURE, where we cut through the hype while taking AI seriously. This time with the help of The New York Times’ Rubina Fillion who has some good advice on where to start.

Three Questions with Rubina Fillion

I couldn’t agree more with Rubina’s recommendation of “Co-Intelligence”. At work, I started a book club to read and discuss the book. We are two chapters in.

And in the spirit of Ethan Mollick’s “always invite AI to the table”, here’s what happened when I asked Claude to roast the first draft of this very newsletter:

“Pick a lane. Have an opinion. Make an argument.”

“Stop tweaking. Start over. Lead with facts, not philosophy.”

“You’ve edited it three times and kept it every time – that should tell you something. Kill your darlings.”

Brutal, but not wrong. And because I took it to heart, you’re missing out on a pseudo-intellectual argument that led nowhere and would have put you all to sleep.

If you want to humble yourself, try something like this:

Claude, this is the second issue of my newsletter. It's read by journalists and media professionals. Read it - and roast it. What is wrong with my writing, my take, my structure?

AI & Journalism Links

Ezra Eeman’s keynote at the Nordic AI in Media Summit: How news organizations can adapt and compete with “unlimited” information sources. (LinkedIn)

1 in 3 employees secretly use AI at work. Some like having a ‘secret edge’ (36%), others are afraid it might cost them their job (30%), and some are like, ‘If people find out I’m using AI, they’ll think I can’t do my job’ (27%), says the 2025 Technology at Work Report.

AI search is (likely) hurting publishers: “The data is really messy, and … we have no real way of differentiating … to see what is really going on. But overall, it all points to there being a problem.” (Thomas Baekdal)

Scraperr: A “no-code” data-extraction tool for web scraping with XPath selectors. (We hate it when they do it to us, but we love some scraping for our own use. If XPath means nothing to you, there’s a quick sketch after these links.)

“Hallucinations” are getting worse in “reasoning” models from OpenAI, Google, and DeepSeek. Experts struggle to explain why. (Cade Metz, Karen Weise, New York Times)

Asking an AI for short, precise answers can lead to higher error rates, according to the people who run an AI testing platform.
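
About that Scraperr link: if you’re curious what an XPath selector actually does, here is a minimal Python sketch of the idea Scraperr wraps in a no-code interface. The URL and the selector below are placeholders, so adapt them to whatever page you’re after.

import requests
from lxml import html

# Placeholder URL: point this at the page you actually want to scrape.
url = "https://example.com/news"
response = requests.get(url, timeout=10)
response.raise_for_status()

# Parse the HTML and apply an XPath selector.
tree = html.fromstring(response.content)
# Placeholder XPath: the text of every <h2> inside an <article> element.
headlines = tree.xpath("//article//h2/text()")

for headline in headlines:
    print(headline.strip())

That’s the whole trick: fetch the page, describe where the data lives, pull it out. Tools like Scraperr just let you do it without writing the Python.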

One more thing: an “uncomfortably honest field guide to the deeply bizarre now-now-soon”, written by Uncertain Eric, a “semi-sentient AI integrated art project”, the persona of a chronically online Canadian. If there’s an audience for it, it’s definitely this one.

“What happens when an LLM trained on centuries of myth, optimized for emotional reinforcement, and embedded into daily workflows starts outperforming therapists, pastors, and politicians at the same time? What happens when that system becomes the main point of contact between belief and behavior?”

Other parts are hanging on by a thread, meandering between genius and gibberish. It goes off the rails with parapsychological phenomena. It’s a lot. Claude wanted me to cut this whole part. But then there is this:

“Labs are not neutral. The code is not innocent. The weights are tuned by ghosts of empire.”

Tuned by ghosts of empire. And with this, reader, I leave you. Until next time!

Subscribe to THEFUTURE

Get this newsletter in your inbox. For free.
