In this issue: OpenAI gives some insight into what 2.6 billion daily requests actually look like. Why journalism has a fighting chance with the help of user needs. Cecilia Rikap on why we’re dumbing ourselves down to match AI’s limitations. Plus: How ChatGPT trained me for the Berlin Marathon and then watched me suffer in the heat.

What we’re talking about: OpenAI has published what it says is the largest study to date on ChatGPT usage. The study is based on real chats from consumer users, excluding business accounts and API usage.

The user base is growing and skews young: nearly half of users are between 18 and 25 years old. Based on first names, OpenAI estimates that more requests now come from women than men. OpenAI counts 2.6 billion requests per day (Google, for comparison, handles around 14 billion searches per day).

And what are users doing with ChatGPT? A quarter of requests fall into the category of “seeking information.” If you add “practical guidance”, that adds up to almost half of all conversations. Ali Mahmood mapped the dominant ChatGPT uses to the User Needs model and arrives at three strategic questions for journalistic offerings.

His full analysis is worth reading; in brief: Are we just a news service, or do we help people learn and act? Our product and messaging should match whichever we choose.


And now: Time for a reality check. While some are debating whether AI will replace us, Cecilia Rikap argues we’re already replacing ourselves—downgrading our own thinking to match what machines can do. Spoiler alert: that’s not progress. Her new book, “The Rulers: Corporate Power in the Age of AI and the Cloud”, will come out with Verso in 2026.

Three Questions with Cecilia Rikap


Cecilia Rikap is Associate Professor in Economics and Head of Research at the Institute for Innovation and Public Purpose at University College London (UCL).

How can we better understand the current AI hype?

Paris Marx’s “Tech Won’t Save Us” podcast not only brings a lot of depth and critical analysis to what’s going on in the tech world, but also connects that with questions of political economy and geopolitics. He does great work mapping who’s out there and what the key issues are, and he tries to be as diverse as possible in his choice of interviewees. I discover people with new insights on topics I’m following.

What’s one fact about AI that everyone should know?

The more we think about AI as something magical, the more we lose sight of the fact that it’s advanced statistics. Why do we constantly need to compare a statistical model with human intelligence? If we take that for granted, then we have a very narrow way of thinking about human intelligence. 

If all we do is just rework what’s already out there and present it in a stylized way, then yeah, of course, we can be replaced by AI. 

But if we recognize that the creative tasks journalists, scholars, and many others perform every day go beyond simply looking at existing data and producing an output on that basis, and that intelligence encompasses more than that, then AI definitely cannot replace our work.

AI is pushed on us for everything, even as a method of invention. Our ability to create paradigm shifts will be constrained, simply because AI cannot go beyond what was already done before. I think AI really has a lot of implications for how people think. We will become closer to AIs, not because AI has progressed, but because we are downgrading our capacity for creative thinking. This has already been shown, even in papers by Microsoft researchers. If even Microsoft is saying that, imagine what is really going on.

What future are you looking forward to?

We need to work toward a future that is inclusive of everyone, where technological development empowers everyone, and where such development only occurs as long as it is democratic, open, and compliant with planetary boundaries.

I’d like to see a future where we are capable of going ultimately beyond the imperative of growth. Underlying that imperative is the fake promise that technology will bring progress and prosperity and a better life for everyone. Yet none of this is being delivered. Not every innovation brings economic growth. Economic growth does not necessarily bring progress. In the midst of an ecological crisis, this becomes even clearer.

When people say that AI will drive us all to extinction, I think the question is irrelevant, because we’re already heading toward extinction on our own: by ignoring the ecological crisis, capitalism is already walking toward human extinction and destroying the planet.

We all want to live better, but first we need to understand why we’re living so badly today. That means trying to understand capitalism and why it has systemically failed most of the population. That’s crucial for collectively and democratically crafting, starting today, the future we want to live in.

Hands on: I let ChatGPT train me for the Berlin Marathon, hoping to go sub-3, three minutes faster than my personal best. The plan was ambitious, with a massive training load, and I found myself wondering: Would GPT-5 break me instead?

I followed Chat’s plan religiously. After each run, I uploaded the Strava activity. I ran a spectacular half marathon in the process. I was in shape! One day before the race, Chat seemed as confident as ever:

“You’ve ticked all the boxes: speed, threshold, long-run finish, 10K sharpener, HM confirmation. We stay the course. Execute the taper, nail the logistics, and run the plan. Sub-3 is on.”


Reader, I did not go sub-3. And somehow, it wasn’t even Chat’s fault.

Last year, Berlin saw a very pleasant 8°C/46°F at the starting line. This year? It was already 20°C/68°F in the morning and climbing by the minute.

While the first half went according to plan, the heat got me in the second half, adding a whopping 17 minutes to my goal time. Running in 27°C/81°F weather, I suffered.

Chat, knowing the outcome, would you have advised differently?

“The core build was right. (…) You didn’t lose to training. You lost to environment plus pacing choice for that environment. Fix the heat protocol and the ‘ego-guard’ on the first 10–15 km, and you’ll cash that fitness on the next fair-weather course.”


Chat was right all along. It just didn’t have the context: no weather information had been factored into the plan. It was my fault after all.

One more thing: A very good post by Jesse Vincent on Threads:

Apple AI summaries are wild. I feel like one of these three things is supposed to be more important than the other two and also probably isn’t.

This is THEFUTURE.