In this issue: Anthropic pays $1.5 billion to settle the first major AI copyright lawsuit—but it’s not about training, it’s about piracy. CBC’s Rignam Wangkhang on journalism’s existential threat and why we need to get brave. Plus: I tested Google’s SynthID watermarking and learned why spotting AI content is harder than counting fingers.
What we’re talking about: Anthropic has agreed to pay $1.5 billion to settle a copyright lawsuit from authors and publishers. The accusation? Training AI models on half a million pirated books. If the court approves, authors could pocket around $3,000 per book covered in the settlement.
Why this matters: This isn’t just the largest copyright settlement ever—it’s the first from an AI company. OpenAI, Microsoft, Meta, and others are facing similar lawsuits over alleged copyright violations.
Yes, but: A judge in Northern California ruled in June that Anthropic’s training was fair use, because it transforms the books into something new. It’s the illegal acquisition that got them.
How Elon Musk Is Remaking Grok in His Image: “Grok’s rightward shift has occurred alongside Mr. Musk’s own frustrations with the chatbot’s replies. He wrote in July that ‘all AIs are trained on a mountain of woke’ information that is very difficult to remove after training.” (New York Times)
How to get fewer hallucinations: “What often is deemed a ‘wrong’ response is often merely a first pass at describing the beliefs out there. And the solution is the same: iterate the process.” (Mike Caulfield, The End(s) of Argument)
Are we brave enough? That’s what Rignam Wangkhang wants to know. He works at CBC News in Toronto, where he’s thinking about how journalism can survive what he calls an “existential threat.” Spoiler alert: we’d better be.
Three Questions with Rignam Wangkhang
Rignam Wangkhang guides the responsible integration of generative AI at CBC News, Canada’s public broadcaster.
What's the most important question right now?
How to ensure journalism survives in the AI era. I truly believe the industry as we know it is under existential threat. I think this presents an opportunity to radically change how we think about doing journalism and what its purpose is. Threat and opportunity are often intertwined. The same pressure that endangers you can also create the conditions for bold change, which has been necessary in the journalism industry for a long time.
What's one fact about AI that everyone should know?
There’s a lot of talk about an AI bubble, whether AGI will ever happen, or if any of this tech is even useful. The truth is, even if scaling laws hit a wall and progress stopped today, we’d still have years of work ahead figuring out how to use and manage the technology as it is—especially in journalism. We’d best get to work.
What future are you looking forward to?
I look forward to a future where media companies and journalists collaborate, try crazy new things, and say yes to ideas at the bleeding edge of AI, ethically, in line with their values, and on their own terms. We are at an interesting time, for better or worse. The media industry has barely begun to explore the opportunities that lie beyond LLMs and closed-source models, held back more by fear, limited time, and a lack of imagination than by what’s actually possible. There’s plenty of doom and gloom out there, but let’s not forget to have some fun along the way.
Hands on with SynthID: Now that counting fingers isn’t enough to spot AI-generated or altered images, how do we identify AI content? Companies like Google put invisible watermarks in AI-generated images. But do they actually work?
Shortly after Anna Dittrich took new pictures of me, I got new glasses. So naturally I did what everyone does: asked AI for help. Original on the left, AI edit on the right.
The glasses look fire, obviously. But is this an okay way to use AI? Is it a deepfake? Should we label such images, and how? You might not have noticed if I hadn’t told you. Which is exactly the point.
You can spot the Google Gemini logo in the bottom-right corner. The skin beneath my eyes is missing texture. Easy fixes: I removed the logo in Photoshop using Generative Fill and restored the skin texture from the original.
But Gemini doesn’t only label its output visibly. Google also embeds invisible watermarks in AI-generated images and videos, and in AI-edited regions, using its own SynthID technology. That’s more robust than adding metadata, which apps routinely strip away.
I wanted to see if I could trick SynthID: In addition to Photoshop, I ran the picture through Snapseed and applied a grainy filter. And I took a picture of my screen with my phone. Could I escape detection?
I got early access to Google’s SynthID Detector. In all but one image, it suspected mischief but was ultimately “unsure.” Google is wary of false positives and errs on the side of caution. Only the photo of my screen came out clean: “Not made with Google AI.”
Another approach works from the opposite direction: content authenticity reverses the burden of proof. With C2PA, images get a cryptographically signed certificate from a trusted source to prove their origin or edit history. (I think of it like SSL certificates: the transmission is safe, but the content could still be anything.)
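Real C2PA relies on X.509 certificate chains and a detailed manifest format, but the underlying idea is simple: sign the content, ship the signature with it, and let anyone detect tampering. Here’s a toy sketch of that idea in plain Python, using an HMAC as a stand-in for a proper asymmetric signature. The key and function names are my own illustration, not part of any real C2PA tooling.

```python
import hashlib
import hmac

# Toy stand-in for a publisher's signing key. Real C2PA uses
# certificate chains and asymmetric signatures, not a shared secret.
PUBLISHER_KEY = b"newsroom-demo-key"

def sign_image(image_bytes: bytes) -> str:
    """Issue a 'credential' for the image: an HMAC over its raw bytes."""
    return hmac.new(PUBLISHER_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, credential: str) -> bool:
    """Check that the image still matches the credential it shipped with."""
    expected = sign_image(image_bytes)
    return hmac.compare_digest(expected, credential)

original = b"\x89PNG...original pixels..."
credential = sign_image(original)

print(verify_image(original, credential))                       # True: untouched
print(verify_image(original + b"generative-fill", credential))  # False: edited
```

Note what this does and doesn’t prove: a valid signature tells you who vouched for the file and that it hasn’t changed since, but says nothing about whether the content is true, or AI-free, in the first place.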
Some images might carry watermarks signaling AI generation, others content credentials vouching for authenticity, but most will have neither.
One more thing: What if the cold, calculated computer in “2001: A Space Odyssey” were a chatty, people-pleasing chatbot? Swiss comedian Patrick Karpiczenko made a video about it.
This is THEFUTURE.
Subscribe to THEFUTURE
Get this newsletter in your inbox. For free.