I revealed to school kids how to edit AI-generated images so they can’t be detected as such. And I may have let slip a few tricks on how to make ChatGPT your ghostwriter without setting off alarms.
Why? Because students aren’t stupid, and we discussed methods for detecting AI content during Hamburg’s Press Freedom Week with Journalismus macht Schule and #UseTheNews.
It’s not enough to know how to create images with Midjourney and similar tools, or to have a general idea of how they work. It’s also insufficient to know that you can check images on sites like TrueMedia.org, or to look closely at eyes and reflections.

We must understand that technology can be circumvented and that there’s no purely technical solution against deepfakes.
Instead, we need to question the source of an image, video, or text, and consider its intended purpose. Source and message. That’s why I believe we should treat students as equals. In fact, students are already ahead of the curve: When moderator Frederik Fleig asked who knows and regularly uses ChatGPT, every hand in the room went up.
I also cloned my voice and had a conversation with myself. Vapi listens, sends the audio to Deepgram, the transcript goes to ChatGPT, and the response is sent to ElevenLabs – still a bit clunky with a 1.8-second latency, but already impressive fakery.
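The loop above (listen, transcribe, generate, speak) can be sketched in a few lines. This is a hypothetical illustration, not the actual setup: the real pipeline uses Vapi for call handling, Deepgram for speech-to-text, ChatGPT for the reply, and ElevenLabs for text-to-speech, all of which need API keys, so each stage here is a stub that only shows the orchestration and where the latency accumulates.

```python
import time

def transcribe(audio: bytes) -> str:
    """Stand-in for the Deepgram speech-to-text call."""
    return audio.decode("utf-8")

def generate_reply(transcript: str) -> str:
    """Stand-in for the ChatGPT completion call."""
    return f"You said: {transcript}"

def synthesize(text: str) -> bytes:
    """Stand-in for the ElevenLabs text-to-speech call."""
    return text.encode("utf-8")

def voice_loop(audio_in: bytes) -> tuple[bytes, float]:
    """One conversational turn: returns audio and round-trip latency in seconds."""
    start = time.perf_counter()
    transcript = transcribe(audio_in)     # speech -> text
    reply = generate_reply(transcript)    # text -> response
    audio_out = synthesize(reply)         # response -> speech
    return audio_out, time.perf_counter() - start

audio, latency = voice_loop(b"What, you're Ole?")
print(audio.decode("utf-8"))  # prints "You said: What, you're Ole?"
```

With real services, each of the three network hops adds its own delay, which is why the end-to-end latency lands near the 1.8 seconds mentioned above rather than feeling instantaneous.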
I not only provided my voice clone with information I wanted to discuss but could also joke: “What, you’re Ole? But I’m Ole!” A year ago, we might have laughed at AI images with fourteen fingers and phantom hands. The joke might be on us. Welcome to the future.
Thanks for having me, Franziska Görner, Kyra Funk, and TIDE – Hamburgs Bürger:innensender + Ausbildungskanal!
(This post also exists on LinkedIn. Photo by Marcus Brandt/dpa/UseTheNews.)