The New ABC of AI

posted 25.7.2025 by oler

Fakecast, deep tailoring, Luddite: Let’s start a glossary for all things journalism and AI.


AGI or Artificial General Intelligence: It’s mostly a marketing ploy to raise money and avoid regulation. Nobody really knows what it actually means.
Avoid for now

AIO or AI Overviews: Google’s AI summaries that appear above search results, leading—to no one’s surprise—to a massive drop in search clicks. Then there’s Google’s version of Perplexity, called AI Mode.

Alignment: You could argue that chatbots shouldn’t give out detailed instructions on how to kill someone or call themselves MechaHitler. Most AI companies align their models this way, showing them during training what to generate and what not to generate. It’s a bit of an art, really, as we haven’t found a good way to calculate what constitutes undesirable output.

Brainrot: Internet slang for mindless digital consumption. In late 2024 and early 2025, “Italian brainrot” became a TikTok and Instagram meme—AI-generated videos featuring characters like Chimpanzini Bananini speaking in fake Italian gibberish. It’s not slop, it’s… art?
Gen Z only

Consent Manufacturing: That pop-up where your favorite social media platform announces some changes to their terms and conditions? No biggie, your personal data just became training data for AI. Opt-out becomes the default, turning resistance into exhaustion and compliance into surrender.

Lippmann, Walter (1922): Public Opinion

Context engineering: The art of designing, testing, and optimizing prompts (a.k.a. prompt engineering) remains underappreciated yet crucial. But it matters less for newer “reasoning” models like o3, which attempt to figure out tasks independently. And suddenly prompt engineering feels very 2023—nowadays it’s all about context: giving chatbots access to the right tools and the right knowledge at the right moment.

Berger, Eleanor (2025): Context Engineering: A Primer
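
A minimal sketch in Python (every name, document, and tool result below is invented for illustration): the model only sees what you pack into its context window, so you assemble that payload deliberately on every call.

```python
# Context engineering in miniature: the model only "knows" what you put
# into its context window, so assemble that payload deliberately.
def build_context(question: str, documents: list[str], tool_results: dict[str, str]) -> list[dict]:
    """Assemble the messages list a chat-model API call would receive."""
    docs_block = "\n\n".join(
        f"[Document {i + 1}]\n{doc}" for i, doc in enumerate(documents)
    )
    tools_block = "\n".join(f"- {name}: {result}" for name, result in tool_results.items())
    return [
        {"role": "system", "content": "Answer using only the documents and tool results provided."},
        {"role": "user", "content": f"{docs_block}\n\nTool results:\n{tools_block}\n\nQuestion: {question}"},
    ]

messages = build_context(
    question="What changed for publishers after AI Overviews launched?",
    documents=["AI Overviews appear above organic search results ..."],
    tool_results={"analytics_lookup": "search clicks down sharply year over year"},
)
```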

Decoupling: Building systems where you can swap models (or other components) easily, so you don’t rely on one vendor and avoid platform lock-in.
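
One common pattern, sketched in Python (the two client classes are illustrative stubs, not real SDK wrappers): hide every vendor behind one small interface, so application code never imports a vendor SDK directly.

```python
from typing import Protocol

class TextModel(Protocol):
    """The one interface application code is allowed to depend on."""
    def complete(self, prompt: str) -> str: ...

class OpenAIModel:
    def complete(self, prompt: str) -> str:
        return "answer from a hosted model"  # a real SDK call would go here

class LocalModel:
    def complete(self, prompt: str) -> str:
        return "answer from a local model"  # e.g. a self-hosted model over HTTP

def summarize(model: TextModel, article: str) -> str:
    return model.complete(f"Summarize in two sentences:\n{article}")

# Swapping vendors is now a one-line change, not a rewrite:
model: TextModel = LocalModel()  # or OpenAIModel()
print(summarize(model, "Some article text ..."))
```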

Deep tailoring: Not a product you can buy—it’s a thought experiment on weaponizing behavioral science. An AI system maps your entire psychological profile—your core beliefs, moral values, identity markers—then uses that data to predict your behavior and design messages to influence you without detection. Scale this process across millions of users and automate it completely.

Luttrell, Andrew, and Teeny, Jake (2025): The Dangers of AI Personalization

Fakecast: When two AI voices try really hard to mimic your favorite podcast. It started with Google’s NotebookLM. Now it’s everywhere.

Reissmann, Ole (2025): Rise of the Fakecast: The Uncanny Valley of AI-Generated Podcasts

Grounding: LLMs generate text by predicting word patterns, but they don’t know what words actually mean to humans. That’s why they sometimes hallucinate. But when you provide outside information—like documents or websites—their answers can be anchored to real, meaningful data. That’s grounding.

Harnad, Stevan (1990): The symbol grounding problem
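
A minimal grounding sketch with the OpenAI Python SDK (the model name is just an example, and the document is a stand-in): paste the source material into the request and instruct the model to stay inside it.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

document = """Harnad (1990): symbols in a formal system have no intrinsic
meaning unless they are connected to the world outside the system."""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model; any chat model works
    messages=[
        {"role": "system",
         "content": "Answer only from the provided document. If it doesn't cover the question, say so."},
        {"role": "user",
         "content": f"Document:\n{document}\n\nQuestion: What is the symbol grounding problem?"},
    ],
)
print(response.choices[0].message.content)  # an answer anchored to checkable text
```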

Inference: When you input something to an AI model and it responds, that process is called inference. The model can answer questions it’s never seen before by generalizing from patterns it picked up during training.
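
A toy illustration with scikit-learn (the data is made up): training happens once, inference is every prediction afterwards, including on inputs the model never saw.

```python
from sklearn.linear_model import LogisticRegression

# Training: toy headline lengths, labeled clickbait (1) or not (0)
X_train = [[32], [41], [55], [78], [95], [110]]
y_train = [0, 0, 0, 1, 1, 1]
model = LogisticRegression().fit(X_train, y_train)

# Inference: a forward pass on unseen input; no learning happens here
print(model.predict([[88]]))  # likely [1], inferred from training patterns
```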

Informational logistics: The reduction of journalism into filling content pipelines. News gets fragmented into AI-digestible units, automatically repackaged for different audiences, and fed directly into chatbots and recommendation systems.

Klingebiel, Johannes (2025): Informational logistics

Liquid content: It’s the death of the article. “With generative AI, we’re entering a new era: content is becoming dynamic again, flowing like water to adapt to you—your time, space, and interactions,” says Google’s Matthieu Lorrain. Text-to-speech is just a tiny step in that direction.

Hvitved, Sofie (2025): The Anything-to-Everything Future Is Here

Luddites: The Rebellion against the Empire of AI. Named after a British labor movement from 1811-1816 that—with style and swagger—disrupted textile factories. Not opposed to machines per se, they wanted a say in how technology was used to preserve their livelihood, their craft, their community. (The government eventually suppressed the Luddites, requiring 14,000 troops.) The Luddites of today ask who actually gains from AI and why we would want to substitute creativity and excellence with exploitation and slop. The new Luddites are out there, capturing crawler bots, putting traffic cones on robotaxis, building the resistance. Take it from Nemik, a fictional character in the series “Andor”: “The Rebellion is everywhere. And even the smallest act of insurrection pushes our lines forward.”

Merchant, Brian (2023): Blood in the Machine

Magic tokens: Adding “think step by step” to prompts can yield better outputs. Anthropic says that giving the chatbot a role like “data scientist” makes for different results, because a data scientist might see different things in data. Under the hood, models don’t process language directly but a numerical representation: tokens. 17509, 656, 5983 is “step by step” tokenized for ChatGPT 4o. Magic tokens! But if you tell a model not to think of pink elephants, you might dilute focus and waste context. Beware of “poison tokens.”

Brownlee, Jason (2025): Magical Tokens for LLMs
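
You can look at the tokens yourself with OpenAI’s tiktoken library (exact IDs depend on the tokenizer version, so treat specific numbers as illustrative):

```python
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4o")
tokens = enc.encode("step by step")
print(tokens)              # a short list of integers, one per token
print(enc.decode(tokens))  # "step by step"
```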

MCP or Model Context Protocol: The bridge between a chatbot like ChatGPT and another piece of software. Want your chatbot to access your WordPress blog? MCP. Let your chatbot write in a Notion database? MCP. Watch a chatbot delete your whole codebase in a glitch? Also MCP.
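
A minimal MCP server, sketched with the official Python SDK (the save_draft tool is hypothetical and doesn’t talk to a real CMS):

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("newsroom-tools")

@mcp.tool()
def save_draft(title: str, body: str) -> str:
    """Pretend to save a draft to the CMS and report back."""
    return f"Saved draft '{title}' ({len(body)} characters)"

if __name__ == "__main__":
    mcp.run()  # serves the tool over the Model Context Protocol (stdio by default)
```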

Overemployment: When you secretly juggle more than one remote job at the same time. The trend started in software development during the Covid years—a mixture of hustle culture and strategic deadline management. Bosses are not happy. Those AI efficiency gains? Not for the plebs.

Reasoning models: AI systems like ChatGPT o3, designed to break problems into steps, often displaying an intermediate “thought process” to users. It looks convincing, but it’s just text, not a representation of what’s really happening internally. Despite the hype, current reasoning models are not consistently superior.

Stateless functions: Chatting with a bot won’t make it smarter for you or other users. Models have no memory of previous conversations. To mimic a chat, previous inputs and outputs must be sent to the model each time. Training happens once using massive datasets before deployment. (Your data might still be stored by the company for future training.) To work around this, AI platforms are building memory functions that store personalized information that the model gets in each request or can access in real-time.

Willison, Simon (2024): Training is not the same as chatting: ChatGPT and other LLMs don’t remember everything you say
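
What that workaround looks like in practice, sketched with the OpenAI Python SDK (the model name is an example): the client keeps the memory and resends the whole history on every turn.

```python
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a helpful assistant."}]

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    # The model sees only what's in `history`; it remembers nothing itself.
    response = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

chat("My name is Ada.")
print(chat("What's my name?"))  # works only because the history was resent
```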

Slop: Used to describe low-quality content made with the help of AI. As in: “I hate this AI slop.” Was funny in 2023. Can carry classist undertones.
Dangerzone

Sarkar, Advait (2025): AI Could Have Written This: Birth of a Classist Slur in Knowledge Work

Stochastic parrots: From an influential paper criticizing AI hype and describing the fallacies of generative AI. Perfect back in 2021. The critique still stands, but newer models and use cases make you seem stuck in the past when you use the expression.
Dangerzone

Bender, Emily M.; Gebru, Timnit; McMillan-Major, Angelina; Mitchell, Margaret (2021): On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?

Super Journalist: A marketing term by Axios CEO Jim VandeHei. He sees no future for “average” journalists who chronicle events. Instead, “Super Journalists” with deep passion for a topic and deep sourcing, knowledge, and credibility establish an authentic human connection, based on trust built over years. If only we had a word for it. (It’s “journalist.” The word is “journalist.”)
Cringe

Vibecoding: Not coding, but telling a chatbot what an app should do. It throws an error? Ask AI to fix it. Andrej Karpathy coined “vibe coding” in early February 2025: “fully give in to the vibes, embrace exponentials, and forget that the code even exists.” By month’s end, the New York Times had covered it. Backlash followed: vibecoded apps exposed user data and API keys. What about security, compliance, best practices, testing, maintainability?, the old guard asks. AI-assisted coding lives on, but the vibes are off. (A better term might be “situated software.”)
Dangerzone

Something wrong or missing? Mail me!

