Lisa-Marie Eckardt

Lisa-Marie Eckardt is a researcher at TU Dortmund University, Institute and School of Journalism, working on Data + AI, Algorithmic Accountability, and fact-checking.

Is there a quote that's on your mind?

At SXSW in Austin, Texas, I joined a session called "Reclaiming Our Humanity in the Age of AI", where Karen Hao, author of Empire of AI, and Timnit Gebru, founder of the Distributed AI Research Institute, shared some interesting thoughts about the ideology behind AI. "'But the AI God solves it all', that is what they say – and when it comes to ethics, they say the agent will solve that and tell you which model to use", said Gebru, a former Google engineer. "I was very confused for a long time, then I understood that this is almost like a secular religion where there are true believers – they think we have to build the machine god, but we might accidentally build the machine devil." She was referring to an ideology pushed by a certain group of billionaires and conservatives in Silicon Valley who believe that an AGI or ASI will eventually control humanity. When Hao wrote her book, she was afraid of sounding like a conspiracy theorist. But resistance is rising: according to a survey, 80 percent of US Americans support regulation, she said. "We should think first, what are the problems we want to solve, and then what technology we use – and it might include AI, but there also might be no technology involved."

Are we taking AI seriously enough?

Unfortunately not. "AI companies act like empires", Karen Hao said at SXSW, explaining the title of her book. "Journalists should hold them accountable like any other power." But many journalists are not critical enough, and some get bribed, she said. In many sessions at SXSW there were hardly any critical questions. When billionaire Mark Cuban claimed that LLMs could not spread misinformation because people would simply stop using them, no one objected. And although the session was called "Can Media Survive AI? The Fight for Public Trust" and Cuban conceded that LLMs will always need new information, there was no discussion of the conditions under which that information would be provided.

What future are you looking forward to?

I hope to see more AI literacy among journalists, but also in other fields. At TU Dortmund University we teach students of journalism, statistics, data science and computer science about algorithmic accountability. In our seminar, interdisciplinary teams worked on small experiments investigating how LLM chatbots inform users about elections, gender biases in AI image generators, the rabbit-hole effect on short-form content platforms, and the loss of trust caused by AI-generated pictures and videos. I also hope to see more regulation of these technologies. But there are many interesting examples of how to use AI in journalism as well. In the SXSW session "AI News That's Fit for Print", Zach Seward, editorial director of AI initiatives at The New York Times, showed how his team uses AI for investigative reporting (e.g. the Epstein files) – their main principle: "Start with the why, not with AI."