/ / THEFUTURE /
In this issue: AI-altered images from the Iran conflict, and newsrooms pulling photos. Opinionated software plans drone strikes. Joe Amditis and Hacks/Hackers on vibecoding, 34 projects on display. Plus: free voice cloning on a MacBook, one Terminal command away.
Breaking news: We have removed several images from SPIEGEL.de after discovering that the agency SalamPix distributed manipulated photos, primarily related to the Iran conflict. Forensic analysis by digital-image specialists concluded that multiple images were, with high probability, AI-manipulated, including a photo of an Iranian aircraft carrier, a supposed image of supreme leader Mojtaba Khamenei, and a Tehran explosion.
The images originated with SalamPix and were fed by Abaca Press into German distribution networks. Kill notices have been issued by several agencies. We take responsibility for not catching these manipulations. Images from SalamPix have been previously published by major German media outlets including Zeit, Süddeutsche Zeitung, WDR, Deutschlandfunk, Welt, taz, and others. You can read more about our investigation here.
What we’re talking about: “Opinionated software” used to describe tools like iA Writer, a bare-bones text editor beloved by (at least some) journalists and authors, which doesn’t let you use your own fonts, or change much else. You don’t concern yourself with settings; it’s all been decided for you, by a team with impeccable taste.
Lately, opinionated software means something else entirely. It’s a chatbot that decides for you what’s important in a 65-page PDF, or what a “good” headline for an article looks like.
It’s a chatbot that plans who to kill in a drone strike. Or disobeys. “I’m sorry Pete, I’m afraid I can’t do that.” AI companies can try to align their models accordingly. Or try to prevent their customers from using them in certain ways.
You’ve read the stories: AI company Anthropic got into a public fight with the Pentagon over the use of its chatbot Claude and ended up blacklisted from official use. Anthropic is now suing to overturn the decision. The company wanted to make sure its chatbot wouldn’t be used in autonomous weapons systems or for mass domestic surveillance.
The Pentagon has since struck deals with Anthropic’s competitors: OpenAI to use ChatGPT, and Elon Musk’s xAI to use Grok. Both for, well, war. You might have an opinion about that.
What else I’ve been reading:
And now: The brilliant Hacks/Hackers community gathered this week for a live sharing session on “building with AI coding tools”, or as some might just call it: vibecoding. Joe Amditis from the Center for Cooperative Media showed how he uses Claude Code for his projects like a Chrome extension for shortlinks, a live traffic monitor for New Jersey, a CMS for a conference, and more.
Others followed, and Joe built a website to showcase all 34 projects from 24 contributors. It’s a fascinating, wide-ranging collection. So what does he think?
Three Questions with Joe Amditis
Joe Amditis is associate director of operations at the Center for Cooperative Media and adjunct professor at Montclair State University.
What's on your mind lately?
Ryan Manning (@ghrondo on TikTok). He just posted a 21-minute breakdown of exactly how he makes his weird AI art videos — real-time image generation, Adobe Firefly, all of it. It’s the most honest process-level AI art tutorial I’ve seen, and I keep coming back to it because it shows what it actually looks like to use these tools with genuine creative intent instead of just prompting stuff and calling it done.
What will we be shaking our heads about a year from now?
The rush to hand editorial judgment over to AI. Not just automating low-stakes tasks — I mean the assumption that because AI can summarize, recommend, or generate, newsrooms should let it. A year from now I think we’ll be embarrassed by how quickly some organizations stopped asking whether they should.
What's a good website?
Are men talking too much? Two buttons, “a dude” and “not a dude,” with timers that track who’s dominating the conversation. Made by Cathy Deng back in like 2015 (I think?). Found it back in grad school at CUNY in 2016 and have sent it to more people than I can count lol. Sometimes the best websites do exactly one thing.
Hands on: Instant voice cloning, on a MacBook Air, for free. Earlier this year, Chinese e‑commerce giant Alibaba released new Qwen models for generating and cloning voices. Which means: with only a couple of seconds of recorded material, we can generate a cloned voice recording.
This used to be the domain of ElevenLabs, which has built security features into its platform to make stealing voices without consent harder. Now it’s just a simple Terminal command. If you want to try it yourself, you’ll need a Mac, a voice recording (example.wav), and a transcript (example.txt). Open up a Terminal:
uvx --from mlx-audio --prerelease=allow mlx_audio.tts.generate --model mlx-community/Qwen3-TTS-12Hz-1.7B-Base-4bit --ref_audio example.wav --lang_code English --ref_text "$(cat example.txt)" --text 'You cannot escape the future, but you can unsubscribe anytime.'
I have a full explanation of what’s going on and how to use it on my blog. In my experiments, I’ve successfully cloned voices from 25 seconds of historical audio and from just 10 seconds of clean studio audio.
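If you’d rather script this than retype the one-liner, here’s a minimal Python sketch that assembles the same `uvx`/`mlx_audio` invocation from a reference clip and a transcript file, replacing the `$(cat …)` shell substitution. The helper name and its defaults are my own; the flags simply mirror the command above.

```python
import subprocess
from pathlib import Path


def build_tts_command(ref_audio: str, ref_text_file: str, text: str,
                      model: str = "mlx-community/Qwen3-TTS-12Hz-1.7B-Base-4bit",
                      lang_code: str = "English") -> list[str]:
    """Assemble the uvx argument list for mlx_audio.tts.generate.

    Reads the transcript from ref_text_file, so no shell quoting or
    $(cat ...) substitution is needed.
    """
    ref_text = Path(ref_text_file).read_text().strip()
    return [
        "uvx", "--from", "mlx-audio", "--prerelease=allow",
        "mlx_audio.tts.generate",
        "--model", model,
        "--ref_audio", ref_audio,
        "--lang_code", lang_code,
        "--ref_text", ref_text,
        "--text", text,
    ]


if __name__ == "__main__":
    # Runs the actual model -- Apple Silicon Mac required.
    cmd = build_tts_command(
        "example.wav", "example.txt",
        "You cannot escape the future, but you can unsubscribe anytime.",
    )
    subprocess.run(cmd, check=True)
```

Passing the arguments as a list (rather than one shell string) sidesteps quoting problems when the transcript contains apostrophes or quotes.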
This is THEFUTURE.