Ole Reissmann


26 AI and Journalism Links for 2026

posted 7.1.2026 by oler

Look, I get it. Your inbox is drowning in trend reports. Another one just dropped. And another. Plus seventeen meta-analyses of the meta-analyses, and at this point everyone’s just rage-feeding the whole mess into NotebookLM like it’s some kind of AI garbage disposal. It’s exhausting. Truly.

I’ve done the scrolling. Consider this your cheat code. The 26 links I find most helpful. The articles worth our rapidly diminishing attention span. The necessary prerequisites for an informed debate. Everything one would need to sound smart at the next editorial meeting.

Non-Negotiables

1. How will AI reshape the news in 2026

The Reuters Institute spoke with people who are building the future of news, from the BBC to the Wall Street Journal, from Semafor to Süddeutsche Zeitung, from the New York Times to Nikkei. What’s the most significant way AI will reshape journalism this year? Their answers cluster around five themes: Audiences will increasingly access news through AI. There will be increased demand for verification work. Automation and agents will reshape newsrooms. Newsrooms will upskill and build AI infrastructure. And AI will further empower data journalists. If you read just one report, make it this one.

2. The Wayfinder 2025-26 Review/Preview Report

Ezra Eeman, Strategy and Innovation Director at NPO, the Dutch Public Broadcaster, connects the latest developments in AI. 113 slides might seem daunting, but it’s an excellent overview of everything that’s happening right now. We’ll certainly see Ezra on the conference circuit again this year – keynotes, panels, the usual – so consider this a preview: “Across the chapters, familiar assumptions are inverted. Browsing gives way to asking. Content turns fluid. Creation scales while stability erodes. Attention fragments even as platforms consolidate. Trust weakens as dependency grows. None of these shifts are absolute, but together they point in a clear direction.”

3. AI eats the world

Twice a year, tech analyst Benedict Evans produces a presentation exploring macro and strategic trends in the tech industry. This one is from November. Over 90 slides, he lays out the economics (trillion‑dollar capex, chip and power constraints), the open questions about product‑market fit, and where value might actually land: scale, proprietary vertical data, UX/distribution or classic software plays. His presentations help me better understand the market and the bubble, to see what we’re up against.

4. To compete with machines, we become more human

My 2026 hot take, the one I published at Nieman Lab, is that the future of journalism isn’t algorithmic. Instead, it’s deeply, stubbornly human. When machines can generate stories faster, cheaper, and slicker, the only real counter is connection. It’s about eye‑level communication. Videos, podcasts, events. More community. More intent. The opposite of chatbot energy.

The Money Problem

5. Strategic Business Model Choices in the Age of AI Search

FT Strategies, the consultancy arm of the Financial Times, has put out a framework for how publishers should think about survival in the age of AI search. Their thesis: it’s a 2×2 matrix. Consultants gotta quadrant. One axis is distribution (owned vs. embedded in platforms), the other is audience need (information vs. entertainment). Four quadrants, four archetypes: Niche Specialist, Intelligence Provider, Voice-led Brand, Mass Reach Publisher. The catch is, you can’t really pick just one. But apparently, you also can’t be half-decent at all four. “Success depends on intentionally designing how content meets audience needs, across owned and platform environments.”

6. The Brutal Economics of Liquid Content

Harvard Fellow Shuwei Fang isn’t sugarcoating it: in the future, you’re either small and premium or massive and optimized. Nothing survives in the middle. “Only organizations with massive scale or premium brand differentiation can survive these economics.” In this world, the “article” – the thing we used to think was the product – becomes disposable. Fang’s argument: “What if news media were to let go of the artifact as the product and productize the process instead?” Meaning, the way you make the thing becomes the real value.

7. The End of Publishing as We Know It

The extinction event: Alex Reisner is tracking AI’s assault on publishing. Google’s AI Overviews have already cut traffic to outside websites. The CEO of DotDash Meredith is preparing for a “Google Zero” scenario. One former Business Insider staffer says: “Business Insider was built for an internet that doesn’t exist anymore.” Publishers are trying to fight back through lawsuits and licensing deals, but the power imbalance is brutal – books have reportedly been licensed for just a couple hundred dollars each. And when Google’s Sundar Pichai was asked about compensating writers, his answer was: “There’ll be a marketplace in the future – there’ll be creators who will create for AI. People will figure it out.”

8. Feeding the Machine

The only people actually making money in AI: Josh Dzieza and Hayden Field have mapped the booming market for AI training data. Labs have exhausted all the easily accessible material. Now they’re paying billions for experts to write “rubrics”, granular checklists that break down every conceivable task into verifiable steps. If AGI were actually coming, models should be able to generalize. Instead, labs are spending more on bespoke data than ever, tailored to increasingly specific applications.

9. The Company Quietly Funneling Paywalled Articles to AI Developers

The back door: Alex Reisner (again) reports on Common Crawl, the little-known nonprofit that’s been scraping the internet for over a decade and feeding it to AI companies. What he found reads like the origin story of an AI villain. The group insists it “doesn’t go behind paywalls,” but its scraper apparently skips the code that checks for subscriptions. Publishers including the New York Times and the Danish Rights Alliance have requested content removal and been told it’s 50%, 70%, 80% complete – but Reisner found none of the content files have been modified since 2016. The nonprofit’s search tool shows “no captures” when the content is actually there.

Newsroom Questions

10. How The Times Assessed That Photo From Trump of Maduro in Handcuffs

The verification problem, in real time: When Trump posted a photo claiming to show Maduro in handcuffs aboard a U.S. warship, the New York Times had to make a call. The image looked odd. Cropped to an unusual shape, low quality, like a photo of a printout. AI-detection tools flagged uncertainty but nothing definitive. Trump has a history of sharing AI-generated images. The solution? Publish the Truth Social post itself, not the isolated image. As the Times’ director of photography put it: “Like so much in journalism, it is up to us – human editors – to make judgment calls.”

11. How people think about AI’s role in journalism and society

What the scientists say: The Reuters Institute found weekly AI usage nearly doubled in a single year, from 18% to 34%. That’s internet‑in‑the‑’90s‑level growth speed. People aren’t just using it to make stuff anymore, they’re using it to find stuff. “We document rapid growth in the use of a new set of tools bound to impact the discovery of information…” – basically, AI has already wormed its way into how we learn, browse, and doomscroll. “This will affect the news media irrespective of whether people use generative AI for getting news specifically.” And here’s the paradox: the public thinks AI‑generated news will be cheaper and less trustworthy – but if the distribution system itself is shifting underneath, who cares what people think? They’ll still be reading whatever the feed decides to show them.

12. How AI will upend the news

Semafor’s Gina Chua says journalists are obsessing over the wrong apocalypse. Automation, ethics, IP are all valid panic buttons, but the real shift is in what audiences expect. “Generative AI promises to revolutionize how people interact with information – how they’ll come to it, what they’ll expect from it, and what they’d do with it.” If people get used to stories that bend to their attention spans and knowledge levels, what happens to the classic 800‑word feature that took three editors and a week to perfect? As Chua puts it: “You go to war with the audience you have, not the audience you might wish to have.”

13. Will A.I. Save the News?

Joshua Rothman in The New Yorker admits he didn’t really read the news until 9/11. Now, as a forty-five-year-old journalist, he’s experimenting with letting AI reshape how he consumes it. He asks ChatGPT to summarize newsletters he’s fallen behind on. He talks to it about tariffs while cleaning the house. He follows stories “backward in time” instead of just forward. The results, he finds, are often better than traditional news reading. But he’s clear-eyed about what this means for the industry: “We could be left with A.I.-summarized wire reports, Substacks, and not much else.” Fewer than 50,000 people work as journalists in the U.S. – fewer than the number of DoorDash drivers in New York City. That small group is charged with generating an authoritative daily account of a bewildering world. AI might help them do it better. Or it might make them obsolete.

The Human Case

14. The Reverse Centaur’s Guide to Criticizing AI

AI as a growth story: Sci-fi author and digital activist Cory Doctorow offers a sharp take on the AI bubble without sliding into full-on Luddite territory: “The promise AI companies make to investors is that there will be AIs that can do your job, and when your boss fires you and replaces you with AI, he will keep half of your salary for himself, and give the other half to the AI company. (…) But AI can’t do your job.”

15. It’s OK to be a Luddite!

The moral high ground: A small anti-AI movement is coalescing, and literary magazine n+1 is one of its homes: “When we use generative AI, we consent to the appropriation of our intellectual property by data scrapers. We stuff the pockets of oligarchs with even more money. (…) There’s still time to disenchant AI, provincialize it, make it uncompelling and uncool.”

16. Will the Humanities Survive Artificial Intelligence?

A dispatch from inside the classroom: Princeton historian D. Graham Burnett assigned his students to have conversations with AI about the history of attention. Students led chatbots through Ignatian spiritual exercises. They trapped them in Socratic aporias. One student told him afterward: “I don’t think anyone has ever paid such pure attention to me and my thinking and my questions… ever.” Another felt crushed – what’s the point of his life if the machine can do everything better? But a third invoked Kant’s sublime: first you’re dwarfed by something vast, then you realize your consciousness can grasp that vastness. Burnett’s conclusion is counterintuitive: this isn’t the end of the humanities, it’s a forced return to their actual purpose. “To be human is not to have answers. It is to have questions – and to live with them. The machines can’t do that for us. Not now, not ever.”

17. I’m Kenyan. I Don’t Write Like ChatGPT. ChatGPT Writes Like Me.

The fossil record of British colonial education: Kenyan writer Marcus Olang’ keeps getting told his work “sounds like ChatGPT.” But the formal English he was drilled in isn’t algorithmic. AI detectors flag text that’s too predictable and too uniform – which is exactly what Kenyan students were trained to produce. “You spend a lifetime mastering a language, adhering to its formal rules with greater diligence than most native speakers, and for this, a machine built an ocean away calls you a fake.”

18. Is it okay?

The moral question: Author Robin Sloan sets aside copyright law and the “but humans learn from reading too” defense, then asks what’s actually at stake when you train AI on Everything. His answer: “There has never, in human history, been a way to operationalize Everything.” If AI’s primary application is to crowd out human composition? “No, it’s not okay. Here is the ultimate act of pulling the ladder up behind you, a giant ‘fuck you’ to every human who ever wanted to accomplish anything.” But if it delivers profound public good, cures for diseases, the Babel fish of translation? Probably okay. Image generators that just make more images? “They pee in the pool.” Code generation? Already past the threshold of public good. The frontier models? Probably okay, their applications have become more about processes than end products. But here’s the thing: “The code only works because of Everything.”

19. It’s rude to show AI output to people

The etiquette we need: Writing used to carry an innate “proof-of-thought.” If text existed, a human had spent time making it. AI broke that. Now any text can be slop. The author’s rule: AI output can only be relayed if you’ve adopted it as your own or the recipient explicitly consented. “I asked ChatGPT and here’s what it said”? Extremely rude. “I had a helpful chat with ChatGPT and can share the log if you want”? Maybe fine.

Behind the Curtain

20. MythBusting Large Language Models

The illusion, explained by Joseph Lochlann Smith, an engineer at The Guardian: Almost nothing works the way the chatbot interface suggests. LLMs don’t “hold conversations”; they predict likely continuations of text. They don’t take text as input; they see tokens represented as vectors. They don’t output text; they output probability distributions. Their remarkable range of abilities? Almost all arise from one simple training objective: guess the next word. And they can’t remember anything – every prediction is completely stateless.
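The loop Smith describes – distribution in, sampled token out, no memory between steps – fits in a few lines. Here’s a toy sketch (the bigram table and vocabulary are invented for illustration; real models condition on the whole context window, but the principle is the same):

```python
import random

# Toy "language model": given the last token, a probability
# distribution over possible next tokens. The numbers are made up.
BIGRAM_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.6, "ran": 0.2, "end": 0.2},
    "dog": {"sat": 0.3, "ran": 0.5, "end": 0.2},
    "sat": {"end": 1.0},
    "ran": {"end": 1.0},
}

def next_token_distribution(context):
    """Stateless: the 'model' sees only the tokens it is handed."""
    return BIGRAM_PROBS.get(context[-1], {"end": 1.0})

def generate(prompt, max_tokens=10, seed=0):
    """Sample from the distribution, append, repeat. There is no
    hidden memory: each step re-reads the entire sequence so far,
    which is also how a 'conversation' works under the hood."""
    rng = random.Random(seed)
    tokens = list(prompt)
    for _ in range(max_tokens):
        dist = next_token_distribution(tokens)
        choices, weights = zip(*dist.items())
        tok = rng.choices(choices, weights=weights)[0]
        if tok == "end":
            break
        tokens.append(tok)
    return tokens

print(generate(["the"], seed=7))
```

Swap the lookup table for a neural network with a few hundred billion parameters and you have, mechanically speaking, a chatbot.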

21. A non-anthropomorphized view of LLMs

Security researcher Thomas Dullien is baffled by smart people ascribing almost magical properties to AI. An LLM takes your previous path through word-space, calculates probabilities for the next point, and makes a random pick. That’s it. “Alignment” really means bounding the probability of generating undesirable sequences. The catch: we can’t specify “undesirable” except by example. The moment people ascribe “consciousness” or “ethics” or “values” to these learned mappings is where he gets lost. “To me, wondering if this contraption will ‘wake up’ is similarly bewildering as asking a computational meteorologist if he isn’t afraid his numerical weather simulation will ‘wake up.'”
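Dullien’s framing – score the candidates, push the probability of unwanted ones toward zero, make a random pick – can be sketched mechanically. This is a toy illustration, not anything from his post; the token names and scores are invented:

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution.
    Lower temperature sharpens it; higher temperature flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample(tokens, logits, blocked=(), temperature=1.0, rng=None):
    """One decoding step. 'Alignment' in Dullien's mechanical sense:
    push the score of undesirable tokens to -inf, so their probability
    becomes ~0, then make a weighted random pick from the rest."""
    rng = rng or random.Random()
    masked = [(-math.inf if t in blocked else l)
              for t, l in zip(tokens, logits)]
    probs = softmax(masked, temperature)
    return rng.choices(tokens, weights=probs)[0]
```

The point of the sketch: there is no “judgment” step anywhere, only a probability distribution that has been reshaped so certain continuations become vanishingly unlikely.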

22. Why Do A.I. Chatbots Use ‘I’?

ChatGPT says its favorite food is pizza. Claude has a “soul doc” describing its “functional emotions” and “playful wit.” These systems were trained on human writing, so they model humanity better than they model being a tool. Kashmir Hill at the New York Times talks to critics who say that’s a problem. When chatbots present as humanlike, users “attribute higher credibility” to their outputs, even though the machines hallucinate and tell you what you want to hear. “It’s entertaining. But it’s a deceit.” A computer science professor sees hope in history: Banks once put friendly faces on ATMs to ease customer anxiety. “They don’t survive.”

23. Ten Facts Everyone Should Know About AI

So ChatGPT walked into the newsroom, and now everyone’s acting like they’ve discovered fire. Here’s what I wish every journalist (and everyone else) knew about AI. This is my practical introduction, without the hype.

24. An Opinionated Guide to Using AI Right Now

The practical field guide: Ethan Mollick updates his recurring “what to use and when” playbook for late 2025, now that AI isn’t a curiosity anymore, it’s a weekly habit for ~10% of humanity. The core message is unromantic: stop obsessing over prompts, start making smart product choices (free vs. paid tiers, chat vs. agent models), and lean on Deep Research + data connections when the stakes are real. “Play is often a good way to learn what AI can do. (…) Try things and you will learn the limits of the system.”

Stay in the Loop

25. AI For Newsrooms

A constantly growing collection by Sergei Yakupov of AI projects in newsrooms, from guidelines to initiatives. Explore initiatives across newsrooms in 50 countries, read papers and reports, browse tools and guides, or dig into AI policies and guidelines.

26. THEFUTURE

Yes, it’s my newsletter. The media newsletter that doesn’t make you want to delete your inbox. It’s interesting, sometimes fun, and, unbelievably, free.

Filed under Blog. The previous entry in this category is “I Don’t Trust AI an Inch When It Comes to Facts”: Matthias Fiedler asked me 22 questions for his newsletter StoryCodes.

Subscribe to THEFUTURE

Media landscape is completely unhinged rn and nobody knows what's happening???? Subscribe to THEFUTURE where I pretend to understand it while having a minor breakdown weekly.

Get brainwormed.