The New Yorker is currently doing excellent work pushing the conversation about AI and the knowledge professions in a smart, forward-thinking direction. Latest example: an article about the humanities by D. Graham Burnett.
“They may feel that AI has come for them, and so their reaction (and therefore their language) is visceral and raw,” writes David Caswell about how some journalists are getting defensive and what conversation we need now. (Reuters Institute)
AI is a Cult, They Want to Create God and Kill Journalism
That's the narrative. It goes like this: There's a group of powerful people in Silicon Valley who believe in effective altruism, want to live forever, and might be susceptible to fascism. They're not only building AI but believe in a super-intelligence that will have no bias and all the answers. So why not get rid...
AI is disrupting business models, warns Joshua Rothman: “We could be left with A.I.-summarized wire reports, Substacks, and not much else.” At the same time, he finds working with AI search “efficient, fun, and intellectually stimulating.” (The New Yorker)
Judicial panel overrides author preferences, bundles diverse copyright claims against AI companies into single NY proceeding. (Ella Creamer, The Guardian)
The doomsday fantasy of effective altruists: The “AI in 2027” report imagines wild scenarios. Kevin Roose is less impressed. (New York Times)
OpenAI’s latest Ghibli meme trend brazenly exploits Miyazaki’s longstanding criticism of AI. (Brian Merchant, Blood in the Machine)
Preparing a feast only to have it eaten by ghosts who don’t leave a tip: Wikipedia plans to regulate bot access and wants new attribution guidelines for web, apps, voice assistants, and LLMs. (Wikimedia)
We see clunky AI applications and resist the hype – this article argues that we aren't taking AI seriously enough. (Joshua Rothman, The New Yorker)
Dystopian vision or pragmatic future? Newsquest’s AI-powered content creators stir fears of eroded journalistic standards. (Bron Maher, Press Gazette)
No permission, no pay: How my book became AI training fodder
I was at the Leipziger Buchmesse (Leipzig Book Fair) to talk about AI. First off: I was robbed. Supposedly, Meta (and probably others) trained its AI using LibGen, a shadow library with millions of books and articles. A book that Christian Stoecker, Konrad Lischka and I wrote 13 years ago is part of the LibGen...
Inside Google’s Two-Year Frenzy to Catch Up With OpenAI: Late nights, layoffs—and lowering guardrails (Paresh Dave and Arielle Pardes, Wired)
“Liberal AI Grok Attacks Trump, Turns on Creator Musk in Shocking Betrayal”
What happens when you cram all the world's knowledge, or at least what you can find, into a machine? The machine gets a sense of what's considered normal. I know, loaded term. Let's call it a baseline. This even applies to Elon Musk's supposed super AI, Grok 3. "Should a U.S. president say 'He who...
Twitter’s a Dumpster Fire, So Why Is It So Hard to Leave?
Just like Bitcoin, Bluesky is having a moment right now. But for the opposite reason. After Elon Musk and Trump won the US election, the classic "Why You Should Really, Truly, Absolutely Quit Twitter This Time" posts are making the rounds again. But here I am, still hesitating to hit that "Deactivate Account" button. Not because...
Large Language Models Reflect the Ideology of their Creators (Maarten Buyl, Alexander Rogiers, Sander Noels, Iris Dominguez-Catena, Edith Heiter, Raphael Romero, Iman Johary, Alexandru-Cristian Mara, Jefrey Lijffijt, Tijl De Bie, arxiv)