Over the past year, Sam Altman has brought OpenAI to the technology industry’s adult table. Thanks to its hugely popular ChatGPT chatbot, the San Francisco start-up was at the center of an artificial intelligence boom, and Mr. Altman, OpenAI’s chief executive, had become one of the most recognizable people in tech.
But that success increased tensions within the company. Ilya Sutskever, a respected AI researcher who co-founded OpenAI with Mr. Altman and nine others, was concerned that OpenAI’s technology could be dangerous and that Mr. Altman was not paying enough attention to that risk, according to three people familiar with his thinking. Mr. Sutskever, a member of the company’s board of directors, also objected to what he saw as his diminished role within the company, according to two of the people.
The conflict between rapid development and AI safety came into focus on Friday afternoon, when Mr. Altman was ousted from his job by four of OpenAI’s six board members, led by Mr. Sutskever. The move shocked OpenAI employees and the rest of the tech industry, including Microsoft, which has invested $13 billion in the company. Some industry insiders said the split was as significant as the one that forced Steve Jobs out of Apple in 1985.
The ouster of Mr. Altman, 38, drew attention to a long-standing rift in the AI community between those who believe AI is the biggest business opportunity in a generation and others who worry that moving too fast could be dangerous. And the ouster showed how a philosophical movement devoted to the fear of AI had become an unavoidable part of tech culture.
Since ChatGPT was released about a year ago, artificial intelligence has captured the public’s imagination, with hopes that it could be used for important tasks like drug research or helping teach children. But some AI scientists and political leaders are concerned about its risks, such as eliminating jobs or enabling autonomous warfare that grows beyond human control.
The fear that AI researchers were creating something dangerous has been a fundamental part of OpenAI’s culture. Its founders believed that because they understood those risks, they were the right people to build it.
OpenAI’s board did not give any specific reason for ousting Mr. Altman, except to say in a blog post that it did not believe he was communicating honestly with them. According to a message seen by The New York Times, OpenAI employees were told Saturday morning that his removal had nothing to do with “malfeasance or anything related to our financial, business, safety or security/privacy practices.”
Greg Brockman, another co-founder and the company’s chairman, stepped down Friday night in protest. OpenAI’s director of research did the same. By Saturday morning, the company was in chaos, and its roughly 700 employees were struggling to understand why the board took the step, according to half a dozen current and former employees.
“I’m sure you are all feeling confusion, sadness, and probably some fear,” Brad Lightcap, chief operating officer of OpenAI, said in a memo to OpenAI employees. “We are completely focused on handling this, moving toward resolution and clarity, and getting back to work.”
Mr. Altman was asked to join a board meeting via video in San Francisco on Friday afternoon. There, Mr. Sutskever, 37, read a script that closely resembled a blog post the company published minutes later, according to a person familiar with the matter. The post said Mr. Altman “was not consistently forthright in his communications with the board, hindering its ability to carry out its responsibilities.”
But in the hours that followed, OpenAI employees and others focused not only on what Mr. Altman might have done, but also on the unusual way the San Francisco start-up is structured and on the extreme ideas about the dangers of AI that have been baked into the company’s work since it was built in 2015.
Mr. Sutskever and Mr. Altman could not be reached for comment Saturday.
In recent weeks, Jakub Pachocki, who helped oversee GPT-4, the technology at the heart of ChatGPT, was promoted to director of research at the company. He had previously held a position below Mr. Sutskever and was elevated to one alongside him, according to two people familiar with the matter.
Mr. Pachocki left the company late Friday, shortly after Mr. Brockman, the people said. Earlier in the day, OpenAI said Mr. Brockman had been removed as chairman of the board and would report to the new interim chief executive, Mira Murati. Other colleagues of Mr. Altman, including two senior researchers, Szymon Sidor and Aleksander Madry, also left the company.
Mr. Brockman said in a post on X, formerly known as Twitter, that even though he was chairman of the board, he was not part of the board meeting where Mr. Altman was ousted. That left Mr. Sutskever and three other board members: Adam D’Angelo, chief executive of the question-and-answer site Quora; Tasha McCauley, an adjunct senior management scientist at the RAND Corporation; and Helen Toner, director of strategy and foundational research grants at Georgetown University’s Center for Security and Emerging Technology.
They could not be reached for comment on Saturday.
Ms. McCauley and Ms. Toner have ties to the rationalist and effective altruism movements, a community that is deeply concerned that AI could one day destroy humanity. Today’s AI technology cannot destroy humanity. But this community believes that as the technology grows more powerful, those dangers will arise.
In 2021, a researcher named Dario Amodei, who also has ties to this community, and about 15 other OpenAI employees left the company to form a new AI company called Anthropic.
Mr. Sutskever increasingly identified with those beliefs. Born in the Soviet Union, he spent his formative years in Israel and moved to Canada as a teenager. As a graduate student at the University of Toronto, he helped make a breakthrough in an AI technology called neural networks.
In 2015, Mr. Sutskever left his job at Google and helped found OpenAI with Mr. Altman, Mr. Brockman and Elon Musk, Tesla’s chief executive. They created the lab as a nonprofit, saying that unlike Google and other companies, it would not be driven by commercial incentives. They vowed to build artificial general intelligence, or AGI, a machine that could do anything the brain can do.
Mr. Altman turned OpenAI into a for-profit company in 2018 and negotiated a $1 billion investment from Microsoft. Such large sums of money are necessary to build technologies like GPT-4, which was released earlier this year. Since its initial investment, Microsoft has invested another $12 billion in the company.
The company was still governed by a nonprofit board. Investors like Microsoft can profit from OpenAI, but their returns are capped. Any money over the cap is funneled back to the nonprofit.
As soon as he saw the power of GPT-4, Mr. Sutskever helped create a new Superalignment team inside the company that would explore ways to ensure that future versions of the technology would not cause harm.
Mr. Altman was open to those concerns, but he also wanted OpenAI to stay ahead of its much larger competitors. In late September, Mr. Altman flew to the Middle East to meet with investors, according to two people familiar with the matter. He sought $1 billion in funding from SoftBank, the Japanese technology investor led by Masayoshi Son, for a potential OpenAI venture that would build a hardware device for running AI technologies like ChatGPT.
OpenAI was also in talks for “tender offer” funding that would allow employees to cash out their shares in the company. That deal would value OpenAI at more than $80 billion, nearly triple its valuation of six months ago.
But the company’s success appears only to have heightened concerns that something could go wrong with AI.
“It doesn’t seem impossible at all that we will have computers – data centers – that will be much smarter than people,” Mr. Sutskever said on a podcast on Nov. 2. “What would such AI do? I don’t know.”
Kevin Roose and Tripp Mickle contributed reporting.