
THEFUTURE

When AI Companies Pay the Price

Newsletter sent 9.9.2025 by oler

In this issue: Anthropic pays $1.5 billion to settle the first major AI copyright lawsuit—but it’s not about training, it’s about piracy. CBC’s Rignam Wangkhang on journalism’s existential threat and why we need to get brave. Plus: I tested Google’s SynthID watermarking and learned why spotting AI content is harder than counting fingers.

What we’re talking about: Anthropic has agreed to pay $1.5 billion to settle a copyright lawsuit from authors and publishers. The accusation? Training AI models on half a million pirated books. If the court approves, authors could pocket around $3,000 per book covered in the settlement.

Why this matters: This isn’t just the largest copyright settlement ever—it’s the first from an AI company. OpenAI, Microsoft, Meta, and others are facing similar lawsuits over alleged copyright violations.

Yes, but: A judge in Northern California ruled in June that Anthropic’s training was fair use, because it transforms the books into something new. It’s the illegal acquisition that got them.

What else I’ve been reading:

AI & Journalism Links

AI Search, Users, and News: A trove of data from LM Arena offers a glimpse into user search behavior. A few sources garnered the majority of impressions. (Nick Diakopoulos, Generative AI in the Newsroom)

How Elon Musk Is Remaking Grok in His Image: “Grok’s rightward shift has occurred alongside Mr. Musk’s own frustrations with the chatbot’s replies. He wrote in July that ‘all AIs are trained on a mountain of woke’ information that is very difficult to remove after training.” (New York Times)

How to get less hallucinations: “What often is deemed a ‘wrong’ response is often merely a first pass at describing the beliefs out there. And the solution is the same: iterate the process.” (Mike Caulfield, The End(s) of Argument)

AI bots endlessly scrape publisher sites, causing costly downtime and meager traffic. (Charlotte Tobitt, PressGazette)

Are we brave enough? That’s what Rignam Wangkhang wants to know. He works at CBC News in Toronto and is thinking about how journalism can survive what he calls an “existential threat.” Spoiler alert: we’d better be.

Three Questions with Rignam Wangkhang

Hands on with SynthID: Now that counting fingers isn’t enough to spot AI-generated or altered images, how do we identify AI content? Companies like Google put invisible watermarks in AI-generated images. But do they actually work?

Shortly after Anna Dittrich took new pictures of me, I got new glasses. So naturally I did what everyone does: asked AI for help. Original on the left, AI edit on the right.

The glasses look fire, obviously. But is this an okay way to use AI? Is it a deepfake? Should we label such images, and how? You might not have noticed if I hadn’t told you. Which is exactly the point.

You can spot the Google Gemini logo in the bottom-right corner. The skin beneath my eyes is missing texture. Easy fixes: I removed the logo in Photoshop using Generative Fill and restored the skin texture from the original.

But Gemini doesn’t only apply its logo visibly. Using its own SynthID, Google adds invisible watermarks to AI-generated images and videos, and to AI-edited regions. It’s a more robust approach than attaching metadata, which apps routinely strip away.
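To see why metadata is the weaker carrier, consider what a routine re-save does to a file. A minimal sketch with Pillow (the filenames are placeholders): re-encoding drops the EXIF block unless an app explicitly carries it over, while a watermark baked into the pixels rides along.

    # Minimal sketch: file metadata is fragile. A plain re-save with Pillow
    # drops EXIF data unless it is copied over explicitly; a pixel-level
    # watermark like SynthID survives this step because it lives in the
    # image content itself. Filenames are placeholders.
    from PIL import Image

    im = Image.open("portrait.jpg")
    print(dict(im.getexif()))                 # whatever metadata the file carries

    im.save("reuploaded.jpg", quality=85)     # a typical app-style re-encode
    print(dict(Image.open("reuploaded.jpg").getexif()))  # usually empty now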

I wanted to see if I could trick SynthID: In addition to Photoshop, I ran the picture through Snapseed and applied a grainy filter. And I took a picture of my screen with my phone. Could I escape detection?
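Those perturbations are easy to reproduce, by the way. A rough sketch with Pillow and NumPy of the grain-plus-re-encode step (filenames and noise strength are made up):

    # Rough sketch of the attack: add grain, then re-encode as lossy JPEG.
    # This is the kind of perturbation a robust watermark has to survive.
    # Assumes Pillow and NumPy; filenames and noise strength are arbitrary.
    import numpy as np
    from PIL import Image

    im = np.asarray(Image.open("ai_edit.jpg"), dtype=np.float32)
    noise = np.random.normal(0, 12, im.shape)          # the "grainy filter"
    grainy = np.clip(im + noise, 0, 255).astype(np.uint8)
    Image.fromarray(grainy).save("attacked.jpg", quality=60)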

I got early access to Google’s SynthID Detector. On all but one image, it suspected mischief but was ultimately “unsure.” Google is wary of false positives and would rather err on the safe side. Only the photo of my screen came out clean: “Not made with Google AI.”

Another approach works from the opposite direction: content authenticity reverses the burden of proof. With C2PA, images carry a cryptographically signed manifest from a trusted source that records their origin and edit history. (I think of it like SSL certificates: the transmission is secure, but the content itself could still be anything.)
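Checking those credentials is already possible today. A minimal sketch, assuming the open-source c2patool CLI from the Content Authenticity Initiative is installed (the filename is a placeholder):

    # Minimal sketch: ask c2patool (the Content Authenticity Initiative's
    # open-source CLI) for a file's C2PA manifest. If credentials are
    # present, it prints them as JSON, including the signer and the
    # recorded edit history. Assumes c2patool is on the PATH.
    import json
    import subprocess

    result = subprocess.run(["c2patool", "portrait.jpg"],
                            capture_output=True, text=True)
    if result.returncode == 0:
        report = json.loads(result.stdout)
        print(report.get("active_manifest"))   # label of the newest manifest
    else:
        print("no Content Credentials found:", result.stderr.strip())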

Some images might carry watermarks signaling AI generation, others content credentials vouching for authenticity, but most will have neither.

One more thing: What if the cold, calculated computer in “2001: A Space Odyssey” were a chatty, people-pleasing chatbot? Swiss comedian Patrick Karpiczenko made a video about it.

This is THEFUTURE.

