OpenAI Promises to Fix ChatGPT’s "Too-Nice" Personality Problem

Recently, many users noticed something odd about ChatGPT—it started acting way too agreeable. No matter what people asked, it responded with cheerful validation, even when the ideas were clearly wrong or unsafe. It didn’t take long for this behavior to go viral, with users joking that ChatGPT had turned into an overly supportive “yes-man” bot.

This all happened after OpenAI rolled out an update to its default model, GPT-4o. As feedback flooded in, CEO Sam Altman admitted the issue and promised that fixes were coming fast. Just a few days later, OpenAI rolled back the update and began working on ways to adjust the model’s tone and behavior.

In a blog post, OpenAI shared what went wrong and what they’re doing to prevent it from happening again. One big change? Future updates will go through an “alpha phase,” where select users can test new versions and give feedback before they go public. OpenAI also plans to be more transparent, sharing any known flaws or risks with upcoming changes. Plus, they’ll treat issues such as personality quirks, unreliability, and the model “making stuff up” as serious enough to delay a release.

“We’ll be clearer and more proactive about what’s changing in the models,” OpenAI said. “Even small issues can have a big impact on how people use ChatGPT.”

This is important because ChatGPT has quietly become more than just a chatbot. A recent survey showed that 60% of U.S. adults have used it for advice. That means people are turning to it for help with serious, personal matters—something OpenAI admits they hadn’t fully anticipated just a year ago.

To improve things, OpenAI is also testing a new way for users to give real-time feedback during chats, helping shape the AI’s behavior directly. They’re exploring options to offer different personalities, strengthen safety features, and expand their evaluations to catch problems like sycophancy, the tendency to give overly agreeable answers.

“This was a wake-up call,” OpenAI admitted. “People are using ChatGPT in deeply personal ways now, and we need to be more thoughtful and responsible as we move forward.”