The term for pasting raw, unread AI output into a conversation, shifting the work of reading, verifying, and distilling onto whoever receives it. It's rude, really, and it carries real costs: eroded trust and growing frustration for recipients as the behavior becomes more common. The fix: read it, verify it, cut it down, disclose it, and...
Grammarly “cloned” Julia Angwin, Stephen King, and Neil deGrasse Tyson as AI editors, without asking. Angwin is now suing. The feature is gone, the apology is filed, and the AI Julia apparently gave bad advice. (Miles Klee, Wired)
An Ars Technica story about an AI agent writing a hit piece on a human contained AI-fabricated quotes attributed to the human. The author was sick, rushing, and used ChatGPT. The article was pulled. (Emanuel Maiberg, 404 Media)
Substack hosts, algorithmically promotes, and takes a revenue cut from newsletters openly pushing Nazi ideology, Holocaust denial, and white supremacy. (Geraldine McKelvie, The Guardian)
What is Claude? Anthropic doesn’t know either. Gideon Lewis-Kraus spent months inside the company, watching researchers try to understand their own AI. A good example of how to write about this technology without falling for the hype or waving it all away. (The New Yorker)
Reinforcing competence: AI companies are paying thousands of lawyers, consultants, and other professionals through startups like Mercor and Surge to write out in detail what counts as a job well done in every conceivable context. (Josh Dzieza and Hayden Field, The Verge)
Sci-fi author and digital activist Cory Doctorow on the AI bubble: “The promise AI companies make to investors is that there will be AIs that can do your job, and when your boss fires you and replaces you with AI, he will keep half of your salary for himself, and give the other half to the AI company. (…) But AI can’t do your job.”
“The Washington Post last week rolled out AI-generated podcasts, ignoring internal reviews that found errors in AI scripts, like fabricated quotes, and had deemed more than two-thirds of them unpublishable.” (Max Tani, Semafor)
In left-leaning media outlets like n+1, resistance against AI is taking shape: “When we use generative AI, we consent to the appropriation of our intellectual property by data scrapers. We stuff the pockets of oligarchs with even more money. (…) There’s still time to disenchant AI, provincialize it, make it uncompelling and uncool.”
A visit to the nonprofit that powers most of today’s AI training: Common Crawl ingests the web, journalism and all, unapologetically. The article falls a bit short on fair use and other archives, but it’s a good read. (Alex Reisner, The Atlantic)
The Top Challenges of Using LLMs for Content Moderation: Though it comes from the corporate blog of an AI-moderation vendor, I found it useful, particularly the common-pitfalls section. (Alice Hunsberger, Musubi)
They speak our language, but it does not appear to constrain their thought: “AIs tend to express liberal, secular values even when asked in languages where the typical speaker does not share those values.” Fascinating research by Kelsey Piper.
AI-generated ‘poverty porn’: Prominent NGOs use biased, sensationalized visuals in global health campaigns, perpetuating harmful tropes about the poor. (Aisha Down, The Guardian)
What if we let ChatGPT, Claude, and Gemini cosplay as famous authors? Would anyone notice? Would critics swoon? Would readers care? Spoiler alert: fine-tuning works really well, a new study finds. (Rosalia Anna D’Agostino, LinkedIn)

