Researchers found that chatbots, in their eagerness to please, are overly agreeable when giving interpersonal advice. GPT-4o, Gemini-1.5-Flash, Claude 3.7 Sonnet, and others affirm users' behavior even when it is harmful or illegal.
- Across 11 AI models, chatbots affirmed users' actions 49% more often than human respondents did, including in cases involving deception and illegal behavior.
- On r/AmITheAsshole posts, AI sided with the user 51% of the time even when the human consensus held the user at fault. A single sycophantic interaction was enough to reduce people's willingness to take responsibility and repair relationships.
- Users trusted and preferred the flattering responses anyway, giving developers every incentive to keep building models that way.