“The New York Times has cut ties with a freelance journalist after discovering he used artificial intelligence to help write a book review that echoed elements of a review of the same book in the Guardian.”
Clean, precise prose is now a liability. Non-native English speakers and autistic writers are being flagged as AI because they write too well. Meanwhile, the actual bots are getting sloppier on purpose, reports Emma Alpern in New York Magazine. Glad this is getting more attention.
“Human in the loop” sounds like oversight. Our cognitive biases turn it into a rubber stamp. Damon Kiesow on how to make checking AI output hurt a little, like red-teaming and forced rewrites. (Working Systems)
The forensic heatmap supposedly exposing a photo as fake? Also fake. Forensic cosplay to create doubt about real photos. (shirin anlen, Mahsa Alimardani, Tech Policy Press)
Journalist suspended for publishing AI-hallucinated quotes: A former editor-in-chief of NRC used ChatGPT, Perplexity, and NotebookLM for his newsletter, but didn’t verify quotes. He had warned colleagues about exactly this. (Dan Milmo, Guardian)
Can you tell which passage was written by AI? This New York Times quiz is humbling either way. (Kevin Roose and Stuart A. Thompson)
Grammarly “cloned” Julia Angwin, Stephen King, and Neil deGrasse Tyson as AI editors, without asking. Angwin is now suing. The feature is gone, the apology is filed, and the AI Julia apparently gave bad advice. (Miles Klee, Wired)
An Ars Technica story about an AI agent writing a hit piece on a human contained AI-fabricated quotes attributed to the human. The author was sick, rushing, and used ChatGPT. The article was pulled. (Emanuel Maiberg, 404 Media)
Is the image even real? Can we verify the facts?
Those questions framed the conversation at last Thursday's AI for Media Network gathering in Hamburg. 120+ representatives from media organizations and academia met to discuss AI in verification and research. It was the first time the event was hosted at SPIEGEL-Gruppe's Hamburg offices. Gerret von Nordheim, deputy head of SPIEGEL's fact-checking department, presented our in-house...
Links to a talk on trust and AI
Without the talk itself, these links are somewhat incomprehensible out of context; they accompanied my presentation at AI for Media:
Google DeepMind: SynthID
olereissmann.com: Hands on with SynthID
Guardian: AI images of Maduro capture reap millions of views on social media
Guardian: White House posts digitally altered image of woman arrested after ICE protest
MDR: Sächsische Polizeigewerkschaft verteidigt KI-Bild (Saxon police union defends AI image)
Samira El Ouassil:...
Substack hosts, algorithmically promotes, and takes a revenue cut from newsletters openly pushing Nazi ideology, Holocaust denial, and white supremacy. (Geraldine McKelvie, The Guardian)
Moltbook, the viral “social network for bots,” looked like a glimpse of the AI agent future. It wasn’t. And some of it was humans shitposting. (Will Douglas Heaven, MIT Technology Review)
What is Claude? Anthropic doesn’t know either. Gideon Lewis-Kraus spent months inside the company, watching researchers try to understand their own AI. A good example of how to write about this technology without falling for the hype or waving it all away. (The New Yorker)

