An LLM council is what it sounds like: you stop treating a large language model like a single all-knowing oracle, and instead you run a mini editorial meeting. You ask multiple models, or the same model in multiple roles, to weigh in on the same question. Then you force them to disagree, critique, and verify before you publish anything. Think: one model pitches, another model heckles, a third one asks “cool, but where’s your evidence?”, and a fourth rewrites the whole thing so it reads like a human.
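The four roles above can be sketched as a tiny pipeline. This is a minimal illustration, not a real implementation: each council member is just a function from prompt to reply (in practice you would swap in actual model API calls), and the names `run_council`, `pitcher`, `heckler`, `verifier`, and `editor` are all hypothetical.

```python
from typing import Callable

# Hypothetical stand-in for a model call: a function from prompt to reply.
# Swap in real API clients (different models, or one model with role prompts).
Member = Callable[[str], str]

def run_council(question: str, pitcher: Member, heckler: Member,
                verifier: Member, editor: Member) -> str:
    """One council round: pitch, critique, demand evidence, rewrite."""
    draft = pitcher(question)
    critique = heckler(f"Find flaws in this answer:\n{draft}")
    gaps = verifier(f"Which claims here need evidence?\n{draft}")
    # The editor sees everything and produces the final human-readable text.
    return editor(
        f"Question: {question}\n"
        f"Draft: {draft}\n"
        f"Critique: {critique}\n"
        f"Evidence gaps: {gaps}\n"
        "Rewrite the draft so it addresses the critique and gaps."
    )

# Toy stubs so the pipeline runs without any API key.
answer = run_council(
    "Why is the sky blue?",
    pitcher=lambda q: "Rayleigh scattering.",
    heckler=lambda p: "Too terse; no mechanism given.",
    verifier=lambda p: "Cite the wavelength dependence.",
    editor=lambda p: "Final: " + p.splitlines()[1].removeprefix("Draft: "),
)
print(answer)
```

The point of the structure is that no single reply goes out unexamined; each stage only ever sees text, so the same orchestration works whether the members are four different models or one model prompted into four roles.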