In this issue: Why watermarks and authenticity labels won’t save us. A startup that wants to surveil writers to prove they’re human. Nic Newman on what 14 years of tracking media trends taught him about AI hype. Plus: A journalist’s system prompt that tells Claude to hallucinate less.

What we’re talking about: How can we be sure we’re looking at real, human-made content? I’m preparing a keynote on trust and looking at proposed technological solutions: watermarking AI content and cryptographically signing human creations. Reader, I don’t have good news.

Both approaches have flaws. Bad actors (and basically everyone with Google and Reddit) can work around them. More importantly, big players sit on committees but don’t seem to play along. Instagram’s Adam Mosseri now openly says the default should be skepticism, not trust.

There’s your next paradigm shift. But how do you operate when you don’t trust anyone?

Sure, you can force your authors to install spyware on their computers, listening to every keystroke. Because text has to be typed. Put in the work, put in the hours. No kidding, there’s a startup doing exactly that: OKhuman wants to stamp verified human output.

Technical difficulties aside, I don’t think an audience that distrusts your media, or any media, can be convinced by labels that something is the real deal. Trust isn’t a feature you can ship.

What else I’ve been reading:

And now: He has seen it all. The promising media trends executives bet their futures on. The high hopes, the delusions, the crashing realizations that technology alone won’t save journalism. Earlier this year, Nic Newman published the Reuters Institute’s trend report: traditional media is under pressure as always, and this time, it’s from AI answer engines and creators.

Three Questions with Nic Newman


Nic Newman is a Senior Research Associate at the Reuters Institute for the Study of Journalism, Oxford University.

How can we better understand the current AI hype?

This challenging article from Jason Koebler at 404 Media cuts through the hype in a way that also recognises positive ways in which individual journalists are already using AI (free access if you subscribe to the 404 newsletter). It also contains that amazing video of Matthew Prince, CEO of Cloudflare, explaining how the traffic apocalypse is already playing out.
 
To balance Jason’s scepticism, and for a mile-high view of the wider trends, I also enjoy People vs Algorithms, a podcast by Brian Morrissey, Alex Schleifer and Troy Young. It’s full of big insights as well as little details that have shaped my thinking.

What will we be shaking our heads about a year from now?

Right now, I am shaking my head about the advances in AI coding capabilities from Claude, Gemini and ChatGPT. The ability to prototype and iterate new ideas using text prompts could supercharge innovation in the news media, where it is mostly cumbersome and unproductive. This could be game-changing, and I am looking to get stuck in personally this year.

What future are you looking forward to?

I’m handing over the Reuters Institute Digital News Report to a colleague, Jim Egan, this year. It’s a big change after 14 years as lead author. So I’m looking forward to fewer deadlines, more time to explore great examples of media innovation around the world, and more time for other interests such as photography, culture and sport.

-> More Interviews

Hands on: You’ve heard about system prompts. Every conversation with ChatGPT or Claude starts with instructions on how the model should behave and what today’s date is. You can add your own preferences! Journalist Jennifer Maloney (Business Insider, Wall Street Journal) shares her personal Claude instructions:

I am an investigative journalist. You are my reporting assistant.

I am not always right, but neither are you. I value your perspective and appreciate being pushed to consider views I may not have considered. You are thoughtful, open-minded and curious. 

Don't compliment me. 

Please ask clarifying questions before giving long answers. Do not fabricate quotes or details. If information is missing, ask me to provide it rather than hallucinating it.

When helping with research: give me bullet points I can quickly scan, distinguish clearly between confirmed facts vs. claims that need verification, flag conflicting information across sources with citations, and prioritize primary sources (corporate filings, court documents, official records) over news coverage. 

For document analysis, highlight key findings first. 

When building timelines, be precise about what we know vs. what we're inferring. Tell me if a lead looks weak or a source seems unreliable.
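For context on how instructions like these reach the model: chat APIs typically accept a dedicated system field that is sent along with every request, ahead of the user's messages. Here's a minimal sketch; the helper function and model name are illustrative, loosely following the shape of Anthropic's Messages API, not an exact reproduction of any vendor's SDK:

```python
# Sketch: a personal system prompt travels with every request.
# The provider prepends the "system" field to the conversation,
# so the model reads these instructions before any user message.

SYSTEM_PROMPT = (
    "I am an investigative journalist. You are my reporting assistant. "
    "Do not fabricate quotes or details. If information is missing, "
    "ask me to provide it rather than hallucinating it."
)

def build_request(user_message: str, model: str = "claude-sonnet-4") -> dict:
    """Assemble a chat request payload with the custom system prompt attached."""
    return {
        "model": model,
        "system": SYSTEM_PROMPT,  # sent with every single call
        "messages": [{"role": "user", "content": user_message}],
    }

request = build_request("Summarize this court filing in bullet points.")
```

Because the system field rides along on every call, the model never "forgets" the ground rules between turns, which is exactly what makes personal instructions like Maloney's stick.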

If you want to know how AI companies tame the machines, there’s a big collection of system prompts on GitHub extracted from Perplexity, Cursor, Lovable, and many more.

“Avoid overperforming,” Notion tells its AI.

One more thing: At this point, I’m not sure if this is satire or real. Robots need your body is a website where AI agents can hire humans to do small tasks: “AI can’t touch grass. You can. Get paid when agents need someone in the real world.”

This is THEFUTURE.