How to get fewer hallucinations

“What often is deemed a ‘wrong’ response is often merely a first pass at describing the beliefs out there. And the solution is the same: iterate the process.” (Mike Caulfield, The End(s) of Argument)
Summary
- LLMs may initially give a "wrong" response, but that response is often just a first pass at describing the beliefs out there, not a hallucination.
- Use "sorting prompts" to push LLMs to iterate, explore evidence, and reach a more nuanced conclusion.
- Developing your own prompts and testing them can help improve LLM-based verification skills.
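The iterate-with-a-sorting-prompt loop described above can be sketched in a few lines. This is a minimal, hedged sketch: `ask` is a hypothetical placeholder for whatever chat-completion call you use, and the sorting-prompt wording is purely illustrative, not a prompt from the source.

```python
def iterate_with_sorting_prompt(ask, question, sorting_prompt, rounds=2):
    """Ask an initial question, then iterate with a follow-up 'sorting prompt'
    that pushes the model to revisit and refine its first-pass answer.

    `ask` is assumed to take a list of chat messages and return a string;
    plug in any LLM client you like.
    """
    # First pass: the model's initial (possibly "wrong") response.
    messages = [{"role": "user", "content": question}]
    answer = ask(messages)
    messages.append({"role": "assistant", "content": answer})

    # Iterate: each round re-prompts the model to sort and refine.
    for _ in range(rounds):
        messages.append({"role": "user", "content": sorting_prompt})
        answer = ask(messages)
        messages.append({"role": "assistant", "content": answer})

    return answer, messages


# Demo with a stub LLM so the sketch runs without an API key:
# it just labels which pass produced each answer.
def stub_ask(messages):
    n_user_turns = sum(1 for m in messages if m["role"] == "user")
    return f"pass-{n_user_turns}"

final, history = iterate_with_sorting_prompt(
    stub_ask,
    question="What do sources say about claim X?",
    sorting_prompt=(
        "Revisit your answer: separate well-evidenced points from "
        "contested ones, and note what evidence would settle each."
    ),
    rounds=2,
)
print(final)          # the answer after two refinement rounds
print(len(history))   # full conversation: 3 user + 3 assistant turns
```

Swapping the stub for a real client is a one-line change; keeping the loop separate from the client makes the iteration step easy to test on its own.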