Those questions framed the conversation at last Thursday’s AI for Media Network gathering in Hamburg, where more than 120 representatives from media organizations and academia met to discuss AI in verification and research. It was the first time the event was hosted at SPIEGEL-Gruppe’s Hamburg offices.

Gerret von Nordheim, deputy head of SPIEGEL’s fact-checking department, presented our in-house Fact Check Tool, an AI-powered application designed to support editorial verification. About 70 percent of the corrections SPIEGEL has had to publish in the past could have been caught before publication with the tool.

SPIEGEL researcher Susmita Arp shared findings from a social media investigation in which AI was used to analyze more than 6,000 videos from Islamist influencer accounts.


Jan Eggers demonstrated what is possible today: generating videos and convincingly cloning voices with one of the latest models running locally on a laptop.

A central debate of the day was whether AI-generated media can still be reliably detected by technical means. Three German startups (Neuramancer AI Solutions GmbH, Gretchen AI, and Valid) presented their approaches.

Isabel Lerch steered a lively debate between Anika Gruner, Jana Heigl, Jakob Tesch and Stefan Voss, with practical reports from newsrooms including Bayerischer Rundfunk and the German news agency dpa.

Many thanks to everyone who attended for the open exchange and productive conversations, and to Bernd Oswald of the #AIforMedia Network and BR – Bayerischer Rundfunk AI chief Uli Köppen for bringing the event up north.

Photos by Isabela Pacini / DER SPIEGEL. This post is also on LinkedIn. If you’re looking for the footnotes/links to my keynote, they’re here.