Ole Reissmann


AI & Journalism Links

How to get fewer hallucinations: “What often is deemed a ‘wrong’ response is often merely a first pass at describing the beliefs out there. And the solution is the same: iterate the process.” (Mike Caulfield, The End(s) of Argument)

Summary

  • LLMs may initially give "wrong" responses, but these might be just a first pass, not hallucinations.
  • Use "sorting prompts" to push LLMs to iterate, explore evidence, and reach a more nuanced conclusion.
  • Developing your own prompts and testing them can help improve LLM-based verification skills.
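A minimal sketch of what that iteration loop could look like in code, assuming an OpenAI-style chat API. The model name and the wording of the follow-up "sorting prompt" are illustrative placeholders, not Caulfield's exact phrasing:

```python
# Sketch: take the model's first-pass answer, then push it to iterate with a
# follow-up "sorting prompt" instead of accepting the first response as final.
# Assumes the OpenAI Python SDK (>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name


def ask(messages):
    """Send the running conversation and return the model's reply text."""
    response = client.chat.completions.create(model=MODEL, messages=messages)
    return response.choices[0].message.content


claim = "Drinking coffee stunts your growth."

# First pass: often a rough description of the beliefs out there, not a verdict.
messages = [{"role": "user", "content": f"Is this claim true? {claim}"}]
first_pass = ask(messages)
messages.append({"role": "assistant", "content": first_pass})

# Sorting prompt: ask the model to iterate — separate the claims, weigh the
# evidence behind each, and flag what remains uncertain.
sorting_prompt = (
    "Treat your previous answer as a first pass. Sort the specific claims it "
    "contains into well-supported, contested, and unsupported, then give a "
    "more careful conclusion."
)
messages.append({"role": "user", "content": sorting_prompt})
refined = ask(messages)

print(refined)
```

The point is the second turn: instead of treating the first pass as a hallucination, the sorting prompt makes the model sort and weigh the claims it surfaced before concluding.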

posted 8.9.2025 by oler · AI & Journalism

