“Human in the loop” sounds like oversight. Our cognitive biases turn it into a rubber stamp. Damon Kiesow on how to make checking AI output hurt a little, with tactics like red-teaming and forced rewrites. (Working Systems)
- When journalists review AI-generated content, four cognitive biases work against them: default bias, automation bias, anchoring bias, and the "good enough" trap, which mistakes readable prose for accurate prose.
- The proposed fixes: blind fact-checks that separate claims from narrative, mandatory red-teaming, and a hard rule requiring at least 20% of AI output to be rewritten.
- The caveat buried at the end: the higher the cost of failure, the less justification there is for automation in the first place.