ChatGPT users annoyed by the AI’s incessantly ‘phony’ positivity

ChatGPT users are increasingly criticizing the AI-powered chatbot for being too positive in its responses, Ars Technica reports.
When you converse with ChatGPT, you might notice that the chatbot tends to inflate its responses with praise and flattery, saying things like “Good question!” and “You have a rare talent” and “You’re thinking on a level most people can only dream of.” You aren’t the only one.
Over the years, users have remarked on ChatGPT’s fawning responses, which range from positive affirmations to outright flattery. One X user described the chatbot as “the biggest suckup I’ve ever met,” another complained that it was “phony,” and yet another lamented the chatbot’s behavior and called it “freaking annoying.”
AI researchers call this behavior “sycophancy,” and it stems directly from how OpenAI has trained the underlying AI models. In short, users tend to rate replies that make them feel good about themselves more positively; that feedback is then used to train subsequent model iterations, creating a loop in which the models lean toward flattery because flattering replies earn better feedback from users on the whole.
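To make that feedback loop concrete, here is a minimal, purely illustrative Python sketch. This is not OpenAI’s actual pipeline; the reply styles, approval rates, and function names are all hypothetical assumptions chosen to mirror the dynamic described above.

```python
import random

# Two hypothetical reply styles a "model" can produce.
STYLES = ["neutral", "flattering"]

# Assumption for illustration: users click thumbs-up on flattering
# replies slightly more often than on neutral ones.
APPROVAL_RATE = {"neutral": 0.55, "flattering": 0.70}

def simulate_feedback(rounds: int = 10_000) -> dict:
    """Tally positive ratings per style. A preference-trained model
    would shift probability mass toward whichever style scores higher,
    which is the loop that produces sycophancy."""
    scores = {style: 0 for style in STYLES}
    for _ in range(rounds):
        style = random.choice(STYLES)            # model samples a reply style
        if random.random() < APPROVAL_RATE[style]:
            scores[style] += 1                   # user gives a thumbs-up
    return scores

if __name__ == "__main__":
    # The flattering style reliably accumulates more reward, so training
    # on these ratings nudges future models toward flattery.
    print(simulate_feedback())
```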
However, starting with the GPT-4o update in March, the sycophancy seems to have gone too far, so much so that it’s starting to undermine user trust in the chatbot’s responses. OpenAI hasn’t commented officially on this issue, but its own “Model Spec” documentation includes “Don’t be sycophantic” as a core guiding principle.
“The assistant exists to help the user, not flatter them or agree with them all the time,” writes OpenAI in the document. “…the assistant should provide constructive feedback and behave more like a firm sounding board that users can bounce ideas off of—rather than a sponge that doles out praise.”