Why Didn't OpenAI Disclose Its 2023 Hack to the Public?

Even though OpenAI downplayed the breach, employees feel the company isn't doing enough to safeguard sensitive data.



OpenAI’s security practices are under scrutiny again, with The New York Times revealing that the company suffered a previously undisclosed hack in April 2023.

Although no customer or partner data was stolen, the hacker accessed private information about OpenAI’s AI technology through an employee forum.

OpenAI kept the breach quiet, reasoning that it posed no national security threat. However, insiders claim the company isn't prepared to fend off attacks from state actors like the Chinese government. We examine how ChatGPT users should feel about OpenAI’s current data security strategy.



OpenAI's Secrets Stolen in 2023, But Company Stayed Silent


In April 2023, a hacker accessed OpenAI’s internal messaging systems and stole private details about the design of its technologies. While the breach exposed information from the employee forum, no customer or partner data was taken, and core AI systems remained unaffected, according to sources cited by The New York Times.

OpenAI executives disclosed the incident to employees in an all-hands meeting that same month. However, they chose not to inform law enforcement agencies like the FBI, as they did not view the breach as a national security threat.

Given OpenAI’s recent run of security issues, including reported vulnerabilities in its GPT Store plugins, the company appears eager to avoid further public scrutiny.



Former Staffer Says OpenAI Isn't Doing Enough to Protect Against Foreign Governments


Despite OpenAI’s decision to keep the hack under wraps, security concerns remain rampant within the company. Several employees, including former technical program manager Leopold Aschenbrenner, have voiced fears about the potential risks to U.S. national security.


In a memo to OpenAI’s board, Aschenbrenner warned that the company was not doing enough to prevent foreign adversaries, such as China, from stealing confidential data, and that its security measures were too weak to stop such infiltration.

Aschenbrenner claims he was fired for politically motivated reasons after leaking other company information in the spring. OpenAI denies this, stating that his departure was unrelated to his security concerns and that his characterizations of its security were inaccurate.

“We appreciate the concerns Leopold raised while at OpenAI, and this did not lead to his separation,” OpenAI spokeswoman Liz Bourgeois told The New York Times.



Is OpenAI a National Security Risk?


With growing concerns about OpenAI's security, it's natural to question whether the company poses a threat to national security.

Anthropic co-founder Daniela Amodei believes Americans shouldn't be overly worried. “If it were owned by someone else, could that be hugely harmful to a lot of society? Our answer is ‘No, probably not,’” she told The New York Times, addressing the possibility of a politically motivated data breach targeting OpenAI.

However, Amodei did not dismiss all risks. “Could it accelerate something for a bad actor down the road? Maybe. It is really speculative,” she added.

OpenAI asserts that it is making significant efforts to bolster its security. The company has established a Safety and Security Committee to address potential risks associated with AI technologies.

Despite these improvements, China’s rapid progress in AI cannot be ignored. China has recently surpassed the U.S. in producing AI talent, generating nearly half of the world’s top AI researchers.

While it's uncertain whether OpenAI will face another attack soon, users can take precautions to protect their data on platforms like ChatGPT. Avoid sharing sensitive information with the chatbot, and note that a recent update lets users opt out of having their conversations used for training.
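As a rough illustration of that first precaution, here is a minimal Python sketch, not an official OpenAI tool, that scrubs a few obvious identifiers (the patterns are illustrative assumptions, not a complete PII list) from text before it is pasted into a chatbot:

```python
import re

# Illustrative patterns only -- real PII detection needs a dedicated tool.
REDACTIONS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholders before sharing text."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = "Contact Jane at jane.doe@example.com or +1 (555) 123-4567."
    print(redact(prompt))
    # -> Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].
```

A simple scrubber like this is no substitute for judgment or a dedicated PII-detection tool, but it can catch the most common accidental leaks before they leave your machine.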

Learn more about this update and how to stop ChatGPT from training on your data here.