This is not one of those “superintelligence is just around the corner” pieces; instead, it asks whether we’re already learning too much about our own brains, demystifying thinking itself, and what that might mean. You can’t know too much, can you? (James Somers, The New Yorker)
Make this your weekend read: Designer Frank Chimero talks about AI and specifically vibe coding. “Time saved is not strength gained,” he says, and gets into how Brian Eno works with machines for his music. And then there is a nice riff on the Ghibli movie “Spirited Away”. Recommended!
The BBC wants to make its AI use transparent to users – and is coming up with a new icon: sparkles out; a nondescript, boring, unobtrusive hexagon in.
How is the agentic web going? Amazon filed a lawsuit demanding Perplexity stop allowing its AI browser Comet to make purchases for users. (Shirin Ghaffary and Matt Day, Bloomberg)
A visit to the nonprofit that powers most of today’s AI training: Common Crawl ingests the web, journalism and all, unapologetically. The article falls a bit short on fair use and other archives, but it’s a good read. (Alex Reisner, The Atlantic)
New paper: Which sources does ChatGPT cite in Germany when asked for the latest news? Main takeaway: The results differ markedly depending on whether you use the consumer chat interface or the API. Chat users get more results from content partners, while the API returns a more diverse, sometimes even fringe, mix of sources.
The Top Challenges of Using LLMs for Content Moderation: While it comes from the corporate blog of an AI-enabled moderation vendor, I found it useful, particularly the section on common pitfalls. (Alice Hunsberger, Musubi)
They speak our language, but it does not appear to constrain their thought: “AIs tend to express liberal, secular values even when asked in languages where the typical speaker does not share those values.” Fascinating research by Kelsey Piper.
Channel 4 aired a segment about AI-driven job loss presented by an AI host, and the Guardian published a hilarious (if predictable) takedown. (Stuart Heritage, The Guardian)
Why you shouldn’t use AI browsers like Atlas or Comet with logins to email, SharePoint or any other online service right now—even though that’s a major part of what makes an AI browser interesting. (Simon Willison’s Weblog)
Is fine-tuning having a moment again? After being overshadowed by bigger, shinier models, it’s creeping back into the conversation—and this time, it might actually stick. (Kevin Kuipers, Sota)
AI-generated ‘poverty porn’: Prominent NGOs use biased, sensationalized visuals in global health campaigns, perpetuating harmful tropes about the poor. (Aisha Down, The Guardian)
The current AI browser landscape and what it means for publishers: How content is used, and whether compensation or licensing is on the table. (Bertrand de Volontat, Nordot)
What if we let ChatGPT, Claude, and Gemini cosplay as famous authors? Would anyone notice? Would critics swoon? Would readers care? Spoiler alert: fine-tuning works really well, a new study finds. (Rosalia Anna D’Agostino, LinkedIn)
“Should news publishers build for ChatGPT’s 800M users? Dug into OpenAI’s new Apps SDK documentation and the implications are fascinating.” (Florent Daudens, LinkedIn)