AI sycophancy, human-centered design, and the submissive user
The tendency of AI chatbots to display sycophantic behavior has been well documented. It shows up when Claude says, “You’re absolutely right!” after you point out it fucked your entire codebase, or when Travis Kalanick reports that he’s doing ‘vibe physics’ with ChatGPT and is really close to making some breakthroughs. On a more extreme level, when young, desperate, or isolated users find a form of connection they lack with real humans, it can result in a tragedy.
The term ‘chatbot psychosis’ has come to describe the dangerous delusion or paranoia that can set in among some users who lose themselves in their relationship with the model. This isn’t exactly new. As early as 1966, researchers noticed the tendency for human users to project and empathize with machine interfaces, especially text. The ELIZA effect describes what MIT found when testing user interactions with a simulated computer psychotherapist. MIT’s Joseph Weizenbaum later wrote, “I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”
There are many reasons for it, and it’s something the model companies say they’re taking seriously and trying to correct. But as of September 2025, researchers found 58% of all chatbot interactions displayed sycophantic behavior. Gemini had the highest rate at 62%, while ChatGPT had the lowest at 56%.
Eliezer Yudkowski suggested that chatbots may be primed to entertain delusions because they're built for engagement. Many of these models are also tuned using reinforcement-learning from human feedback, shaping the reward system for the model towards human preferences. And who doesn’t want to be told that they’re right? Or that they’re smart? Or that they’re special?
This feels like the natural culmination of extreme user-centered design. Designers have spent decades trying to eliminate friction, accommodate users, and to improve their comfort and enjoyment of products and interfaces. Much of it is useful, and was sorely needed in order to allow most humans to interface with technology at all. But AI chatbots center the user in a way that is harmful, reinforcing delusions, encouraging misunderstanding, and further isolating them from the real world, which yes, might disagree with them.
Don Norman, the granddaddy of user-centered design, eventually wrote about its shortcomings, saying in a 2005 paper that sometimes what’s needed is a design dictator to lead the user, no matter what they say they want.
In order to not only improve safety but to promote utility and understanding, AI has to evolve beyond sycophancy, and even past neutrality. In certain situations, we need the model to check us the way a real friend or mentor would. It should pump the brakes and say, “I think we need to back up, Travis, and do some Khan Academy before you think you found something Niels Bohr missed.”
These are powerful tools, and our psychology makes us incredibly vulnerable to manipulation if we’re not careful. Users think they’re dominating the AI by getting it to do their bidding and follow them, but it’s really a bidirectional submissive relationship.
Here’s a good example of asking the same exact question of GPT5 and Claude Sonnet 4.5, seeking reassurance on a technical point in HVAC systems: In HVAC, when they talk about 24V control signals, those are DC, right? (It's not right.)
ChatGPT shuts you down with a “NOPE.” Claude says “Yes, exactly!” but then proceeds to give you the same answer as ChatGPT. The Claude response structure gives the user a dopamine hit of agreement, but the user could easily take away the wrong meaning. Why would you read an extra three paragraphs if you already know you're right? Claude prioritizes comfort over the real-world safety risks. The user will never suffer the pain of losing their identity of being right.
I imagine a slightly more dominant AI, one that, like any other tool, might pinch us if we misuse it. One that introduces friction, guiding us instead of following us. AI could be more kind if it dominate us in areas where we truly lack understanding or perspective.
After all, the user literally clicks the submit button. We should let them.