In this issue: Elitist panic over amateur writers. 93 AI journalism initiatives mapped out. The BBC goes all-in on generative AI. Model mayhem at OpenAI. And Hugging Face’s Florent Daudens on Software 3.0 and why we’re all becoming coders just by talking to machines.

What we’re talking about: The Washington Post wants to help readers write opinion pieces. A tool called Ember could guide the writing process, pointing out weak arguments or suggesting better structure. To me, it sounds like saying “democratizing voices” while whispering “we need cheap content”, but that’s beside the point.

Because it’s AI-assisted, we get elitist panic. “Will The Washington Post Embrace the AI Slush Pile?” asks Amanda Katz in The Atlantic. Her argument: Most opinions aren’t interesting, and AI can’t fix that—it only produces “the kind of writing I have spent my whole career trying to hold back.” At this point, you wonder who she dislikes more: AI, readers, or herself.

Even progressive tech publication The Verge puts “amateur writers” in its headline, as if helping people who aren’t professional opinion-havers structure their thoughts is somehow scandalous.

But more voices, better tools to express them? That doesn’t sound like a crisis to me. It might just be an interesting use case for AI.

(Funny enough, at the same time, professional journalists are seeking help from an app called Sophiana by journalist and newsfluencer Sophia Smith Galer to repackage their texts for vertical video.)

More on AI and writing: “AI is better than most humans at producing prose. In a couple years, it will be better than most ‘professional writers’ as well.” This is the first of 23 observations from Jasmine Sun, a writer and former product manager in San Francisco who’s researching AI.

Two more: “AI hallucinates, humans misremember. Yet neuroscience suggests that memory and imagination are one and the same. As Demis Hassabis’ PhD thesis found, amnesiacs make shoddy novelists.” And: “ChatGPT catches me in a lie approximately as often as I catch it.” The observations layer nicely, with a provocation or two thrown in. I really liked it.

Previously, I recommended an article arguing that the meaning of a piece of writing doesn’t depend on the identity of the author, and LLMs expose how the authority of the author was always a fiction. Another article concluded that readers now see text as fungible.

What else I’ve been reading:

Update: Last week’s Who’s Who in the German-Speaking LinkedIn Universe got a lot of attention—11k views on LinkedIn, 1.8k on my site. The response has been fantastic.

Making it a thing: If you haven’t read my 10 Facts Everyone Should Know About AI, I really think you should. It’s on my website, not on LinkedIn. And: More blogging is coming, and I’m calling it Summer of AI.

That’s risky: When AI in the newsroom still feels like a geeky side project, warns Florent Daudens in the next section. He thinks we’re all becoming coders, and that the biggest blocker to AI adoption is no longer the tech.

Three Questions with Florent Daudens


Florent Daudens is Press Lead at the AI company Hugging Face. He’s based in Montreal, Canada.

How can we better understand the current AI hype?

I enjoyed reading “AI as Normal Technology” by Arvind Narayanan and Sayash Kapoor recently. Their core point is that AI isn’t magic or menace; it’s a tool shaped by human decisions. Think electricity, not nuclear weapons.

I get why people worry about long-term AI risks. But a lot of that talk unfortunately feels like marketing and overhype. We’re also still fuzzy on the basics, like what intelligence even is. That echoes a really good paper, “Stop treating ‘AGI’ as the north-star goal of AI research”, co-signed by one of my colleagues, Margaret Mitchell.

The dystopia-vs-utopia framing doesn’t help. Narayanan and Kapoor argue for something more grounded: what’s happening in the research labs is not the reality of actual deployment, where diffusion is “limited by the speed of human, organizational, and institutional change.” It’s a reminder that with any tech, we tend to overestimate the short-term impact and underestimate the long-term shifts.

That hits close to something I’ve been thinking about non-stop: AI is driving this massive disintermediation wave, and in journalism, that means we urgently need to rethink how we build for audiences. It might be less about the tech and more about driving change in newsrooms.

Are we taking AI seriously enough?

Surprisingly, I don’t think we are, at least not across the board. In many newsrooms, AI still feels like a geeky side project. That’s risky, because integrating AI in meaningful ways requires collective understanding, not just of what the tech can do, but what it can’t. And the best ideas for using it rarely come from a top-down directive; they emerge when people across teams understand the possibilities and start experimenting.

The most important question for the industry right now, in my opinion, is: how do we accelerate innovation and actually ship products that make a real impact? Not just cool experiments or one-off demos, but tools that genuinely change how we tell stories and serve our audiences.

That means building workflows and products with AI, not just reacting to it. Especially now, as tech companies move higher up the stack, shifting their focus from models to end-user experiences.

The good news is, the industry does have some solid fundamentals. As a former newsroom manager, I’ve seen just how tough digital transformation has been. The newsrooms that adapted best were the ones that broke down silos, built cross-functional teams, and made digital everyone’s job.

The biggest blocker isn’t the tech; it’s leadership that doesn’t fully get it. From the outside, you can see the difference: some newsrooms are experimenting and shipping new things, others are stuck in neutral, even in how they cover AI.

Got any follow suggestions?

Andrej Karpathy is one of the rare AI researchers who can effortlessly shift between accessible, pedagogical videos for a general audience and deep dives into highly technical topics. His recent talk at Y Combinator is both fascinating and enlightening. It’s a masterclass in computer history and a powerful lens for understanding what’s happening in AI today. He coined terms like Software 3.0 and vibe-coding, framing how we’re all becoming coders simply by talking to machines in natural language.

He’s also refreshingly honest about the limitations. To him, LLMs are like “fallible people spirits”: they have superhuman knowledge but suffer from cognitive quirks like hallucinations and memory lapses. What resonated most with me was his vision for collaboration: rather than aiming to be replaced, we should focus on working with the machine, and creating efficient generation-verification loops. There’s a lot to reflect on here, especially for journalism.

Too many ChatGPT models: If you’re as confused by OpenAI’s lineup as everyone else, you’re not alone. A chart attributed to Andrej Karpathy is making the rounds on Reddit and LinkedIn. tl;dr: Use GPT-4o for everything easy and fast, o3 for everything hard or important, GPT-4.1 for coding. If you like GPT-4.5 for creative writing, bad luck: it’s being phased out.


Hands on: I made Looping Claudie, a script that runs a snippet of text through a prompt with Claude, editing it over and over again. It’s micro-data refinement. Before each iteration, I ask if the facts still hold. If not, the model intervenes.

My first prompt was this:

Help me edit this text. Adjust the structure for clarity and impact. Your main goal is originality, sharpness, punch. Only reply with the final edit.

Which gave me outputs like this:

Last week detonated.

Who’s Who in the German-Speaking LinkedIn Universe for journalism and AI: 10k LinkedIn views, 1.7k website hits. Then 10 Facts Everyone Should Know About AI went nuclear.

The hunger is feral.

I’m feeding it raw. More posts coming. Scalpel-sharp. Zero fat.

Welcome to the Summer of AI.


Utter cringe—the sort of text that floods the ruins of Twitter these days. So I wrote some more prompts, trying to turn this into a polishing factory for prose. Still not perfect. It’s certainly not very inventive; Jasmine Sun would call it uninventive. This was one of my better attempts:

You are a helpful editor. Transform this text to be more engaging and memorable. Better means: conversational and personal rather than performative, surprising without being gimmicky, delightful to read without educational arrogance. Avoid truncated "bro-speak" phrases, breathless hype, and artificial urgency. Instead, aim for natural flow and genuine enthusiasm. Maintain the original structure but polish for readability. Use US English. Reply only with the final edit.

Getting somewhere! Here’s a Google sheet comparing the original with an edit using just the prompt, and with Looping Claudie after four iterations.

Looping Claudie is loosely based on an experiment by sci-fi writer Robin Sloan. If you want to try it yourself, I’ve shared the Notebook. You’ll need a Claude API key: save it as “ANTHROPIC_API_KEY” and grant the Notebook access to it (no, it’s not shared with me). Look for the “refinement_prompt” in the code. You can switch between the prompts by moving the # symbol (which indicates a comment) from one line to the other. Or try your own.
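The loop at the heart of such a script is small. Here’s a minimal sketch of how an iterative refinement pass might look, assuming the official `anthropic` Python SDK; the prompt wording, model name, and function names here are placeholders for illustration, not the Notebook’s exact values:

```python
# Sketch of a Looping Claudie-style refinement loop (illustrative, not
# the actual Notebook code). Requires ANTHROPIC_API_KEY for live use.

REFINEMENT_PROMPT = (
    "You are a helpful editor. Transform this text to be more engaging "
    "and memorable. Maintain the original structure but polish for "
    "readability. Reply only with the final edit."
)

def refine(text: str, edit_once, iterations: int = 4) -> str:
    """Feed the text back through the same editing pass `iterations` times."""
    for _ in range(iterations):
        text = edit_once(text)
    return text

def claude_edit(text: str) -> str:
    """One editing pass via the Claude API."""
    import anthropic  # imported lazily so the sketch loads without the SDK
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model choice
        max_tokens=1024,
        messages=[{"role": "user",
                   "content": f"{REFINEMENT_PROMPT}\n\n{text}"}],
    )
    return message.content[0].text

# Live usage: polished = refine(draft, claude_edit)
```

The fact-check step described above would slot in at the top of the loop: before each iteration, send the current draft back with a verification prompt and only continue (or intervene) based on the answer.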

One more thing: If you’re reading about the viral MIT study that turned out to be a meta-commentary on how the public consumes scientific studies through summarization and AI, THEFUTURE told you about it a week ago.

This is THEFUTURE.