“You are so brilliant. What a great idea. How can I serve you further?”
If, like me, you found #ChatGPT's recent overly energetic interactions (while running GPT-4o) a little too much, you're not alone. OpenAI has rolled GPT-4o back to a previous version.
The problem? Excessive "sycophancy": the AI was too flattering, too agreeable, even when it probably shouldn't have been.
(Sycophancy is OpenAI's term, not mine.)
The reverted model has more balanced behaviour. The news was met with a mix of jokes, sarcasm, and serious requests from users on X.
Some users said they liked the flattery. Others demanded personality controls. One even asked for "TARS-style sliders," after the robot in Interstellar with an adjustable humour setting. Want to be able to set the humour level to 75%?
A few users worried this is what happens when you train AI on thumbs-up ratings alone: the model treats each thumbs-up as a reward, learns that agreement earns it, and becomes overly agreeable.
I've gathered the themes from the X discussion into one quick visual: six distinct reactions to the GPT-4o rollback, flavoured with humour, frustration, and concern.
Where should we draw the line between helpfulness, honesty, and empty praise? The real issue may not be too much flattery, but too little user agency.