In this issue: When ChatGPT fails at basic geography, who’s really holding it wrong? The Wall Street Journal’s Tess Jeffers on lightweight AI experiments and changing audience expectations. Plus: When AI tools promise to fix your prompts.

What we’re talking about: Is the new ChatGPT smarter now? Over the past few days, examples have circulated that seemingly prove the opposite. People asked it to draw maps of Europe or Germany, with poor results. Someone wanted to know exactly when Cisco introduced the C1101-4P router.

On one hand: yeah, really dumb. On the other hand: that’s just not how this works.

This reminds me of the iPhone 4. Fifteen years ago, Apple released an iPhone that didn’t play particularly well with human hands – reception got worse when you gripped it and unknowingly blocked the antenna in the bottom left corner. Apple’s response back then was basically: You’re holding it wrong.

When you spend some time with large language models, you eventually learn: this is lossy compression. With emphasis on: lossy. A blurry JPEG of the web, as Ted Chiang memorably put it. Details get blurred, and it’s almost an art to hit the sweet spot: where are the details still sharp and accurate, and where do the models start to hallucinate?
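The lossy-compression idea can be made concrete with a toy sketch (purely illustrative, not how LLMs actually store anything): quantize a few precise values, then reconstruct them. The broad shape survives; the fine details merge together, which is roughly what happens when a model is asked for a specific detail like an exact product launch date.

```python
# Toy illustration of lossy compression: store values at reduced
# precision, then "decompress" them. The gist survives, the
# specifics blur -- the hypothetical step size of 10 stands in for
# whatever detail a compressed representation can no longer resolve.

def compress(values, step=10):
    # Quantize each value to the nearest multiple of `step`.
    return [round(v / step) for v in values]

def decompress(quantized, step=10):
    return [q * step for q in quantized]

facts = [1847, 1852, 1853]            # three distinct "details"
blurry = decompress(compress(facts))
print(blurry)                          # [1850, 1850, 1850] -- the details merge
```

Three different numbers come back as one: the compressed representation simply has no room for the difference. Ask it which of the three is correct, and any answer it gives is a confident-sounding reconstruction, not a lookup.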

And it’s pretty brazen, of course, that platforms roll out chatbots and declare them universal, mega-smart tools, then largely leave users on their own. And when users rely on the output, the response is: well, you didn’t read the fine print. Meanwhile, the AI fanboys laugh at you.

(Ask GPT-5 to write code instead of painting a picture, and the result can be just as disappointing.)

What else I’ve been reading:

Low-investment, high-stakes: While newsrooms scramble to keep up with AI, Tess Jeffers is taking a different approach at The Wall Street Journal. Her take: lightweight experiments that prepare for a future where audiences need something different from journalism.