/ / THEFUTURE /
In this issue: What journalists bring to the table when AI can write, edit, and fact-check. A chatbot that agrees with you even when you’re wrong. Lisa-Marie Eckardt on what was said at SXSW in Austin that not enough people heard. Plus: the man racing a politician to the oldest pothole in New York City.
What we’re talking about: With software engineering, we’re watching the first AI revolution up close. Increasingly, programmers don’t write code so much as direct small armies of agents. Suddenly everyone becomes a manager, and taste becomes a real skill.
What happens when AI comes for journalism? What does it mean to be a journalist? To do journalism? Is it getting scoops, then dictating the facts to a chatbot that knows how to write in a certain tone and voice, like tech reporter Alex Heath?
Now that AI is getting better at writing, editing, and fact-checking, what do we bring to the table? Is it the knowledge and taste to oversee a fine-tuned machine that churns out five stories a day, as the last human standing in the loop, like news editor Nick Lichtenberg?
How do we make sure we do not prompt ourselves into oblivion? Is it guarding your own writing, but letting yourself be interrogated and challenged by chatbots, like Jasmine Sun?
All of it can be true, and more. Wired is reporting on six journalists with different attitudes to AI. The Wall Street Journal peers into a content factory.
And while everyone else is wondering whether to subscribe to Claude Max or join a union or both, Semafor profiles a runner at the New York Post.
It’s a job that has not changed much in a hundred years: Reuven Fenton is racing to find the oldest pothole in New York City before a politician can fix it.
What else I’ve been reading:
And now: Can we talk about SXSW being back? In recent years, the conference had lost its cool. The enthusiasm of the interactive track was gone, as were the media and journalism sessions. Then a much-needed investor took over and laid people off. Austin’s convention center closed for renovations. I didn’t go, but regretted it the moment I saw Lisa-Marie Eckardt posting about the AI sessions.
Three Questions with Lisa-Marie Eckardt, TU Dortmund
Lisa-Marie Eckardt is a researcher at TU Dortmund University, Institute and School of Journalism, working on Data + AI, Algorithmic Accountability, and fact-checking.
Is there a quote that's on your mind?
At SXSW in Austin, Texas, I joined a session called “Reclaiming Our Humanity in the Age of AI,” where Karen Hao, author of Empire of AI, and Timnit Gebru, founder of the Distributed AI Research Institute, shared some interesting thoughts about the ideology behind AI. “‘But the AI god solves it all’ – that is what they say. And when it comes to ethics, they say the agent will solve that and tell you which model to use,” said Gebru, a former Google engineer. “I was very confused for a long time, then I understood that this is almost like a secular religion where there are true believers – they think we have to build the machine god, but we might accidentally build the machine devil.” She was referring to an ideology pushed by a certain group of billionaires and conservatives in Silicon Valley who believe that an AGI or ASI will eventually control humanity. When Hao wrote her book, she was afraid of sounding like a conspiracy theorist. But resistance is rising: according to a survey, 80 percent of Americans favor regulation, she said. “We should think first: What are the problems we want to solve, and then what technology do we use – and it might include AI, but there also might be no technology involved.”
Are we taking AI seriously enough?
Unfortunately not. “AI companies act like empires,” Karen Hao said at SXSW, explaining the title of her book. “Journalists should hold them accountable like any other power.” But many journalists are not critical enough, and some get bribed, she said. In many sessions at SXSW, there were hardly any critical questions. When billionaire Mark Cuban claimed that LLMs could not spread misinformation because people would stop using them, no one objected. And even though the session was called “Can Media Survive AI? The Fight for Public Trust” and Cuban said that LLMs will always need new information, there was no discussion of the conditions under which that information would be supplied.
What future are you looking forward to?
I hope to see more AI literacy among journalists, but also in other fields. At TU Dortmund University, we teach students in journalism, statistics, data science, and computer science about algorithmic accountability. In our seminar, interdisciplinary teams worked on small experiments investigating how LLM chatbots inform users about elections, gender biases in AI image generators, the rabbit-hole effect on short-form content platforms, and the loss of trust caused by AI-generated pictures and videos. I also hope to see more regulation of these technologies. But there are many interesting examples of how to use AI in journalism as well. In the SXSW session “AI News That’s Fit for Print,” Zach Seward, editorial director of AI initiatives at The New York Times, showed how his team uses AI for investigative reporting (e.g., the Epstein files). Their main principle: “Start with the why, not with AI.”
Let’s talk: Conference season is heating up, and between Teams calls and work, you’ll find me hunched over my laptop or out listening, speaking, and mingling.
- Perugia International Journalism Festival, April 16–18: I’ll be in the audience, holding an espresso or an aperitivo. Or both.
- OMR, Hamburg, May 5–6: Announcement coming soon; I’ll be talking about AI search on the yellow stage.
- Hacks/Hackers, Baltimore, May 12: Talking with Rubina Fillion of The New York Times, Ryan Struyk of CNN, and Heather Ciras of The Boston Globe about “How to get buy-in and build AI systems that actually work.”
- Nordic AI in Media Summit, Copenhagen, May 27–28: Talking with Olga Robinson of the BBC and Bo Bergstedt of Danish TV2. Announcement coming soon.
- Login, Vilnius, May 29–30: Talking with Luc Chenier of Kyiv Post and Chris Ronald Hermansen of TV 2 Norway about how AI is reshaping journalism.
- European Publishing Congress, Vienna, June 17–18: Talking about vibecoding and editorial tools.
Listen to this: I didn’t want to open this newsletter with a month-old podcast by Ezra Klein. Nobody should. But Klein’s interview with Jack Clark from Anthropic is running in the Hard Fork feed this week, and I think it’s a neat introduction to all things agentic.
There is a bit of hype, and a silly anecdote about AI taking a break from its assigned task to look at pictures of national parks and dogs instead. Clark praises using Anthropic’s AI for IT security, which really didn’t age well: the company just accidentally lost the source code to Claude Code.
But also this, when Klein wonders about AI shortcuts:
“My experience being a reporter and doing the show for a long time is that human creativity and thinking and ideas are inextricably bound up in the labor of learning the writing of first drafts.”
A very good post from sliwua_:
no one:
Sam Altman: yeah we totally know AI is killing your brains, we have a whole plan to sell intelligence back to you like water
This is THEFUTURE.