OpenAI’s internal messaging systems were breached last year, according to reports, with threat actors stealing details about the design of the company’s AI technologies.
While the company kept quiet about the breach, it was serious enough to raise concerns about US national security, according to the New York Times.
The breach reportedly took place early last year, when a hacker gained access to an online forum where staff discussed the company’s products and technologies. However, no training data, algorithms, results, or customer data are believed to have been exposed.
The company disclosed the breach to staff and the board in April, but withheld it from the public on the grounds that no customer or partner data had been stolen and the hacker appeared to be a private individual without government ties.
However, employees including Leopold Aschenbrenner, a former OpenAI technical program manager, expressed concerns about the company’s security stance.
The breach, he said, had implications for national security, since the messaging systems could just as easily have been compromised by a nation state such as China.
“While the details of the alleged incident are not yet confirmed by OpenAI, there is a strong possibility that the incident actually took place and is not the only one,” said Dr Ilia Kolochenko, partner and cybersecurity practice lead at Platt Law LLP and CEO of ImmuniWeb.
“The global AI race has become a matter of national security for many countries, therefore, state-backed cybercrime groups and mercenaries are aggressively targeting AI vendors, from talented startups to tech giants like Google or OpenAI.”
Hackers, he said, focus their efforts primarily on the theft of valuable intellectual property, including technological research and know-how, large language models (LLMs), sources of training data, and commercial information such as an AI firm’s clients and its use of AI across different industries.
“More sophisticated cyber-threat actors may also implant stealthy backdoors to continually control breached AI companies, and to be able to suddenly disrupt or even shut down their operations, similar to the large-scale hacking campaigns targeting critical national infrastructure (CNI) in Western countries recently,” he added.
Earlier this year, OpenAI reported it had terminated accounts linked to five covert influence operations. These campaigns were using the company’s models to manipulate public opinion or influence political outcomes. Two were believed to have originated in Russia, one in China, one in Iran, and one in Israel.
OpenAI also recently established a Safety and Security Committee to handle risk management for its AI projects and operations. The committee is expected to present its findings and recommendations to the board in September.