On Monday, hundreds of OpenAI employees threatened to leave the leading artificial intelligence company and join Microsoft.
They would be following in the footsteps of OpenAI co-founder Sam Altman, who announced he would lead a new AI subsidiary at Microsoft following his surprise dismissal from the company whose ChatGPT chatbot has driven the rapid rise of artificial intelligence technology.
Some of OpenAI’s most senior employees threatened in a letter to leave the company if the board was not replaced.
“Your actions have made it obvious that you are incapable of overseeing OpenAI,” said the letter, which was first published by Wired.
Ilya Sutskever, the company’s chief scientist and a member of the four-person board that voted to remove Altman, was among the signers.
The signers also included top executive Mira Murati, who was named interim CEO after Altman’s dismissal on Friday but was herself replaced over the weekend.
“Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join,” the letter said.
Reports said as many as 500 of OpenAI’s 770 employees signed the letter.
Despite pressure from Microsoft and other key investors, OpenAI hired Emmett Shear, a former CEO of Amazon’s streaming platform Twitch, as its new chief.
Altman was fired by the startup’s board on Friday amid concerns that he was minimizing the dangers of its technology and steering the firm away from its stated mission, charges his successor has disputed.
Microsoft CEO Satya Nadella said on X that Altman “will be joining Microsoft to lead a new advanced AI research team,” alongside OpenAI co-founder Greg Brockman and others.
Altman sprang to prominence last year with the debut of ChatGPT, which set off a race in AI research and development and drew billions of dollars of investment into the field.
His dismissal prompted several other high-profile departures from the company, as well as a reported effort by investors to reinstate him.
“We are going to build something new & it will be incredible. The mission continues,” Brockman said, tagging former director of research Jakub Pachocki, AI risk evaluation head Aleksander Madry, and longtime researcher Szymon Sidor.
However, in a message emailed to workers on Sunday night, OpenAI stated that “Sam’s behavior and lack of transparency… undermined the board’s ability to effectively supervise the company,” according to the New York Times.
Shear confirmed his appointment as interim CEO of OpenAI in a post on X on Monday, while also disputing claims that Altman was fired over safety concerns about the use of AI technology.
“Today I got a call inviting me to consider a once-in-a-lifetime opportunity: to become the interim CEO of @OpenAI. After consulting with my family and reflecting on it for just a few hours, I accepted,” he wrote.
“Before I took the job, I checked on the reasoning behind the change. The board did not remove Sam over any specific disagreement on safety, their reasoning was completely different from that.”
“It’s clear that the process and communications around Sam’s removal has been handled very badly, which has seriously damaged our trust,” Shear added.
Global tech giant Microsoft has invested over $10 billion in OpenAI and has integrated the AI pioneer’s technology into its own products.
In his post, Nadella said, “We look forward to getting to know Emmett Shear and OAI’s new leadership team and working with them.”
“We remain committed to our partnership with OpenAI and have confidence in our product roadmap,” he said.
In building its AI models, OpenAI competes with companies such as Google and Meta, as well as start-ups such as Anthropic and Stability AI.
ChatGPT and other generative AI platforms are trained on massive quantities of data to answer questions, even complex ones, in human-like language.
They can also be used to create and alter images.
However, the technology has prompted concerns about the perils of its misuse, ranging from blackmail with “deepfake” images to manipulated photographs and damaging disinformation.