THEFUTURE
In this issue: OpenAI’s erotica gambit and the business desperation behind it. Why “workslop” is costing companies two hours per task. Konrad Weber on drawing boundaries with AI before it’s too late. Plus: Claude’s new Skills feature turns fact-checking into a repeatable system.
What we’re talking about: OpenAI’s Sam Altman announced that “erotica for verified adults” is coming to ChatGPT in December. If anyone thought the “sexy mode” of Elon Musk’s Grok was cringe, or Meta’s “sensual” chats for kids were deeply weird, well, here we are.
For two days, journalists had a field day. “SexGPT” was everywhere in Germany. Sixty articles. Sixty. Was this all a masterclass in distraction from OpenAI’s actual challenges? Or just another case of throwing features at the wall and seeing what sticks?
Because here’s the thing: the Financial Times reported that out of 800 million ChatGPT users, only five percent actually pay for it. The company lost $8 billion in the first half of the year. And yet OpenAI committed to spending more than $1 trillion on AI infrastructure.
So now we’ve got shopping features, an app store, the video-fake app Sora, and… erotica. OpenAI definitely got the attention. But will it help the business? On one hand, they have enterprise clients, access to company data and business automation. On the other hand, they’re building loneliness monetization at scale. Moving humanity forward, one sext at a time.
What else I’ve been reading:
And now: We’re getting better at bending AI to our will. But Konrad Weber, a strategy consultant and foresight expert from Zurich, warns that customization without boundaries is just another way to lose control. His advice? Know where to draw the line—and actually draw it.
Three Questions with Konrad Weber
Konrad Weber is a strategy consultant, foresight expert and process moderator.
What's the most important question right now?
How far should we engage with AI platforms to reflect tomorrow’s user needs, and where do we deliberately hold back to keep even a small degree of independence and self-determination? The tension is clear: high-quality, verified journalism will increasingly struggle to compete with AI-generated, highly personalised content. Strategy work in publishing companies means naming that boundary and revisiting it: What do we leave intentionally to AI platforms, and what do we keep firmly in our own hands to ensure integrity, capacity to act, and brand strength?
What's one fact about AI that everyone should know?
As a strategy consultant, I get to see inside many companies. And alongside the usual nervousness in recent months, there has been an incredible belief in the efficiency gains of AI. That made me all the more aware of the findings of a new study by Stanford’s Social Media Lab in cooperation with BetterUp, which shows that the uncontrolled use of AI results in up to two hours of additional work per task to check and clean up the AI output. This so-called “workslop” not only triggers expensive write-offs, it also erodes trust: 53% of employees feel annoyed when they receive workslop; about half see the sender as less creative, capable, and reliable; 42% rate them as less trustworthy; and over a third as less intelligent. That’s not just lost time — it’s cultural debt.
What future are you looking forward to?
A future where leadership makes room for vision again and treats strategic foresight as a core operating habit, not an off-site ritual. That means dedicated time and budget for structured discussions about the future; a quarterly cadence of scanning and scenario-building; an assumptions log that gets updated when the world changes; and decisions framed as hypotheses and bets. Executive boards examine a range of future scenarios rather than committing to single plans that get scrapped days after the decision is made. Teams are measured on learning speed, robustness across scenarios, and the ability to pivot with evidence. In short: institutionalised imagination with accountability, so we build what’s next on purpose, not by accident.
Hands on: Anthropic’s Claude has a new feature called Skills. Think Custom GPTs, but for Claude, and more complicated. A Skill is a packaged set of instructions and knowledge that Claude can reuse for consistent output. In practice, it’s a ZIP file containing text files that you upload to Claude, in Settings under Capabilities.
If you’re getting confused, you are not alone. I think it works like this: Prompts are one-off instructions. Projects hold ongoing context for collaboration. MCPs bridge external data and tools. Artifacts are shareable results of prompts. Skills are repeatable systems.
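Under the hood, a Skill is just a folder containing a SKILL.md file (YAML frontmatter with a name and description, followed by the instructions), zipped up for upload. Here’s a minimal sketch in Python that assembles such a ZIP; the skill name, folder layout, and instruction text are illustrative, not any particular published Skill:

```python
import pathlib
import textwrap
import zipfile

# Illustrative SKILL.md: YAML frontmatter (name, description) plus
# plain-text instructions. The instructions below are a made-up example.
skill_md = textwrap.dedent("""\
    ---
    name: fact-checker
    description: Extract claims from a text, verify them against web sources, and rate confidence.
    ---

    # Fact-Checker

    1. Extract every checkable claim from the input text.
    2. Search the open web for primary sources on each claim.
    3. Rate each claim: supported / contested / unverifiable.
    """)

out = pathlib.Path("fact-checker.zip")
with zipfile.ZipFile(out, "w") as zf:
    # A common layout: SKILL.md inside a top-level folder named after the skill.
    zf.writestr("fact-checker/SKILL.md", skill_md)

print(f"wrote {out.name}")
```

The resulting `fact-checker.zip` is the kind of file you’d upload under Settings → Capabilities.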
To try it out, AI influencer Florent Daudens of Hugging Face built a Fact-Checker Skill for Claude. It contains detailed instructions for extracting claims from a text, finding sources on the web, and ranking confidence in each claim. You can get it on GitHub. Upload the ZIP to Claude and ask it to fact-check a text. Here are two screenshots, starting with pinging the fact-checker skill:

This is part of the output:

It’s a solid approach, relying on finding accessible information on the open web.
See you in Chicago? I’m off to the NPA Summit 2025. If you’re around, say hi. Let’s nerd about AI in journalism, or go for a run, or both.
One more thing: A very good post by nico on Threads.

