The Real Story Behind ChatGPT: How It Was Built by OpenAI’s Team

When OpenAI introduced ChatGPT in late November 2022, it did so quietly, without any major expectations. The San Francisco-based AI company saw it as a “research preview,” not anticipating the viral success it would quickly achieve. No one at OpenAI was prepared for the overwhelming response, and the team has been working to keep up and make the most of its unexpected popularity ever since.

The Real Story Behind ChatGPT: How It Was Built by OpenAI’s Team

According to Sandhini Agarwal, who works on policy at OpenAI, the initial idea was to release a polished version of a two-year-old technology, while also gathering feedback to improve it. “We didn’t want to oversell it as a groundbreaking innovation,” says Liam Fedus, a scientist involved in developing ChatGPT. The team wanted to fix some of the system's flaws by learning from how the public interacted with it.

To uncover the full story of ChatGPT’s development, I spoke with four key members of the OpenAI team. In addition to Agarwal and Fedus, I talked to John Schulman, one of OpenAI's co-founders, and Jan Leike, who leads the alignment team, focusing on making AI systems more user-friendly and safe. What became clear is that the team is still somewhat surprised by the success of their creation but sees it as a great opportunity to improve the technology further, based on how people are using it.


The Journey So Far

Since its release, ChatGPT has already undergone several updates. One of the main techniques OpenAI uses is called "adversarial training." This involves pitting chatbots against each other to test whether ChatGPT can be tricked into giving inappropriate responses. The goal is to help it learn to avoid such situations in the future.

Beyond improving the model, OpenAI has also formed major partnerships, including a multibillion-dollar deal with Microsoft. Another collaboration with Bain, a global consulting firm, aims to use OpenAI’s technology in marketing campaigns for big companies like Coca-Cola. Meanwhile, ChatGPT’s popularity has sparked a global interest in large language models, with companies and investors racing to develop similar technologies.

But where did ChatGPT come from, and how did OpenAI prepare it for launch? Here’s what the team had to say.

The Real Story Behind ChatGPT: How It Was Built by OpenAI’s Team

A Surprising Reception

Jan Leike admits that the response to ChatGPT has been overwhelming. “We’ve been surprised by how quickly it gained popularity,” he says. John Schulman also shares that he was constantly checking Twitter after the release and was shocked to see how many people were sharing ChatGPT screenshots. “I expected it to catch on, but not this fast,” he explains.

Sandhini Agarwal adds that while the team had worked on the technology for a long time, they were taken aback by how much it resonated with people outside of their bubble. “We didn’t expect such a huge reaction,” she says.


Not a Brand-New Technology

Interestingly, most of the technology behind ChatGPT isn't new. The model is built on GPT-3.5, an advanced version of GPT-3, which was launched in 2020. GPT-3.5 itself had already been available for months through OpenAI's API, which allowed developers to use the technology in their own applications. However, none of the previous versions had been promoted to the general public like ChatGPT.

According to Fedus, ChatGPT is just a fine-tuned version of GPT-3.5, but with additional conversational training. While the raw capabilities of the models may not differ much, ChatGPT’s accessibility through a user-friendly chat interface has made a big difference. “It’s more aligned with what people want,” says Leike. The conversational data used during its development has made it more helpful and responsive to user needs.


How ChatGPT Was Trained

One of the key techniques used to develop ChatGPT is called reinforcement learning from human feedback (RLHF). This method involves taking a large language model, like GPT-3.5, and teaching it to provide better responses by using feedback from real human users. The team trained ChatGPT to be helpful, truthful, and non-offensive. They also made sure it would act responsibly by refusing inappropriate requests and clarifying that it’s an AI system.

“We had human raters rank the responses of the model,” explains Leike. "They preferred responses that were accurate, helpful, and didn’t pretend to be something the model isn’t.” The phrase “As a language model trained by OpenAI…” emerged naturally as a result of this training process, as it ranked highly with human testers.


A Careful Launch

When preparing to launch ChatGPT, OpenAI didn’t treat it as a completely new or risky model. Since GPT-3.5 had already been in use, they felt confident it was safe enough. However, they still conducted additional tests, involving both internal teams and external groups, to try and identify any potential issues. Even though some unwanted outputs were detected, they were considered similar to those produced by GPT-3.5.

“You can’t wait until a system is perfect to release it,” Schulman points out. The team had been beta-testing earlier versions of the model, and the feedback had been mostly positive. Their main concern was the factual accuracy of the model, as it sometimes tends to generate incorrect information. However, after confirming that ChatGPT was more factual than previous models, they decided to move forward with the launch.

The Real Story Behind ChatGPT: How It Was Built by OpenAI’s Team

Challenges and Improvements

Since ChatGPT’s release, OpenAI has closely monitored how people are using it. They've been quick to address some of the most problematic examples of what ChatGPT can produce, like inappropriate content or harmful requests. The team has been actively working to fix these issues and add them to the model’s training data to prevent similar occurrences in the future.

Agarwal admits that ChatGPT’s viral success has brought certain issues to the forefront. “We always knew about some of these problems, like bias, but the scale of ChatGPT’s popularity has made them more urgent,” she says. Fedus adds that while it’s exciting to see how creative people can get with ChatGPT, there’s still a lot of work to do in improving the model’s performance and addressing its limitations.

One area they’re focusing on is preventing “jailbreaking”—where users trick the model into giving unwanted responses. OpenAI is using adversarial training to make it harder for users to manipulate the system, but Leike admits that it’s difficult to predict every possible problem that might arise.


The Road Ahead

In January 2023, Microsoft introduced Bing Chat, which many believe is powered by a newer version of OpenAI’s technology. The use of AI by tech giants like Microsoft and Google has increased the stakes for OpenAI. Agarwal explains that the challenge now is to build models that work well in a variety of contexts, whether for casual conversations or more critical tasks like search engines.


Schulman acknowledges that OpenAI could have made better decisions when training the model, especially regarding sensitive topics. Leike adds that while ChatGPT has come a long way, there’s still much to be done. “We have to be clear about the limitations of the technology,” he says. “This is still just the beginning.”