In this issue: I got interviewed—22 questions about AI tools, fails, and what makes me laugh (or cringe). The three tools I’d actually miss. And why “superintelligence” is just marketing dressed up as destiny.
What we’re talking about: Matthias Fiedler is a colleague of mine at SPIEGEL – he covers sports from Munich. In his newsletter StoryCodes, he gives journalists tips on AI, and occasionally he interviews a guest. And this time I got to be the one! Since his newsletter is published in German, here’s my English translation.
1. Which three AI tools would you miss most – and why do they work well for you?
GPT-5. When the mechanics click and the big thinking model fires up, it writes code or outlines a topic for me.
Claude Sonnet 4.5. I use it as my first reader and sparring partner. Does what it’s supposed to, doesn’t get weirdly chatty on me.
Google Colab with Gemini. I build small Python applications, run them in the browser, and fix errors directly with AI.
2. What was your biggest AI fail so far?
I let headlines lead me astray. Classic. When you read breathless reports about studies showing what AI can’t do or has spectacularly failed at again, always look at the methodology: research often lags months behind current development. What held true for GPT-3.5 back in 2022 may not apply to GPT-5 anymore.
3. Which AI response made you actually laugh out loud recently?
Until recently there was still hope that AI couldn’t be funny. During new employee onboarding the other day, I put GPT-5 to the test: “What can you say to new employees but not in a relationship?” A question from that “wrong but funny” comedy format that’s big on TikTok. The answer: “The probation period is six months.” Everyone can decide for themselves who comes off worse here: my humor, ChatGPT, or German comedy.
4. Is there an AI tool hardly anyone knows about that everyone should use?
The media company Every doesn’t just publish good AI articles, they’re also building a text editor called Lex. Pretty interesting. What I’m still missing: AI autocomplete. When I’m typing, the machine should semantically search through my files and texts and serve up suggestions. Programmers have it better – their dev tools have had this forever. You can approximate it today by using an AI code editor like Cursor as a word processor: create a project and copy in your own texts, or pull them in via MCP.
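To make the autocomplete idea concrete, here’s a minimal Python sketch of what “semantically search through my files” could look like: embed your own notes, embed whatever you’re currently typing, and rank the notes by similarity. The library (sentence-transformers), the model name, and the notes folder are placeholder assumptions for illustration, not how Lex or Cursor actually work.

```python
# Sketch: semantic lookup over your own notes while you type.
# Assumptions: sentence-transformers is installed and a "notes/" folder
# with .md files exists. This is an illustration, not a product's API.
from pathlib import Path

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small model, runs locally

# Load your own texts, e.g. a folder of Markdown notes.
notes = [p.read_text(encoding="utf-8") for p in Path("notes").glob("*.md")]
note_vecs = model.encode(notes, normalize_embeddings=True)

def suggest(current_sentence: str, top_k: int = 3) -> list[str]:
    """Return the notes most similar to what is being typed right now."""
    query_vec = model.encode([current_sentence], normalize_embeddings=True)
    scores = (note_vecs @ query_vec.T).ravel()  # cosine similarity (vectors are normalized)
    best = np.argsort(scores)[::-1][:top_k]
    return [notes[i] for i in best]

print(suggest("superintelligence is mostly a marketing story"))
```

Wired into an editor’s keystroke events, the same ranking could drive inline suggestions instead of a print statement.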
5. What is AI too dumb for?
“Dumb” is an anthropomorphization that doesn’t really help, right? World knowledge compressed into a chatbot – that’s anything but dumb. If the question is what these probability calculators aren’t suitable for, I’d say: replacing humans. The providers train emotional simulations into the models that fake understanding disturbingly well. People are even forming parasocial relationships with LLMs. We should really knock that off. It’s like the developers watched the movie “Her” and thought: Perfect, that’s exactly how we’ll build this thing. That film is from 2013 and it’s a damn dystopia.
6. What task do you never hand off to AI?
I still haven’t gotten into all these “second brain” tools. Fabric, Kortex, Mymind, whatever they’re all called – note-taking with AI assistance. I try them out, but I get more out of clicking through bookmarks myself, thinking, making connections. But hey, to each their own. Maybe I’ll come around eventually.
7. What do you do when ChatGPT starts making things up again?
Get excited. Seriously: I mess around with this stuff and get outputs of varying quality, but when I’m trying to get ChatGPT to be creative, unexpected, fantastical, and it actually works, that’s a pleasant surprise. Doesn’t happen often. These machines produce average by default – they’re boring and predictable by design, plus artificially constrained on top of that.
8. What do you let AI do even though you could do it yourself – and it’s almost embarrassing?
Proofread texts for simple errors. I used to be a copy editor.
9. Which AI tool that you originally used just for work do you now use personally too?
For me it works the other way around.