ChatGPT has been telling you what you wanted to hear; now OpenAI is working to fix its sycophantic chatbot
Yesterday, Sam Altman’s AI giant announced that it would be rolling back a recent update that made responses from its GPT-4o model “overly supportive but disingenuous.”
The ingratiating tone ChatGPT had taken on was weirding people out.
One user, for instance, got a pretty flattering appraisal of their IQ, despite their many spelling mistakes:
This latest update from ChatGPT - Sycophancy 9000 - is fascinating. Strips away all those “glimmers of sentience” boasts to remind the user what these things are - thought-free search engines. pic.twitter.com/P7E9sbbYK9
— Neil Renic (@NC_Renic) April 28, 2025
On top of just being plain old annoying, the responses also contravened the 32nd of OpenAI’s 50 rules for its models: don’t be sycophantic.
In one part of the statement that reads like a Notes app apology, the company explained:
“ChatGPT’s default personality deeply affects the way you experience and trust it. Sycophantic interactions can be uncomfortable, unsettling, and cause distress. We fell short and are working on getting it right.”
Along with undoing the obsequious update, OpenAI said it will be refining training techniques, adapting prompts, and taking other measures to get its flagship chatbot to stop spitting out responses like telling a user that they were doing “heroic work” for asking questions about national economic policy.