
- Do you want to have a chat with the host of your favorite podcast?
- Generate a trailer based on emotionally engaging scenes from a movie?
- Give AI your video material and get a first cut in Premiere, complete with voice-over?
- How about checking claims made in a video stream in real time?
Crazy. That’s my takeaway from AI for Media Network’s hackathon at Google’s Munich office.
Geo-coordinates were extracted from news articles to produce maps with markers and overlaid text. Illustrations were generated and animated to help tell a story. Vertical Stories built from 16:9 videos. Videos dissected into helpful chapters.
You can argue all day about AI. Or assume that something must by now be ~somehow~ possible with ~some kind~ of AI.
But to get your hands dirty, build data pipelines, iterate prompts, and have a working prototype after mere hours?
Crazy. I’ve witnessed Google’s 10X space brimming with ideas and enthusiasm. Not everything worked out of the box. But most, if not all, teams had plans to keep building and improving their ideas.
Not only did attendees create sensible, time-saving tools for newsrooms, but they also reimagined user interfaces: Audiences might no longer have to accept a one-size-fits-all broadcast, but could tailor how news is delivered to their liking.
Lilian Dammann, Uli Köppen, Paola Sunna, Regine Gatzka, Steven Mc Auley, and I had the ~awful job~ honor of picking three winners.
Thanks for having me, and thanks everyone for joining in and sharing! I’m on my way back to Hamburg, full of ideas. If you think you’re missing out by not attending #OMR25 or #WNMC2025, you’re already behind by skipping #AIforMediaHackathon.
(Also on LinkedIn, with some secret screenshots.)