
On Monday, The New Yorker published a lengthy investigation detailing the days leading up to and following Sam Altman’s brief ouster as CEO of OpenAI.
In late 2023, OpenAI’s board shocked Silicon Valley when it fired Sam Altman, seemingly out of nowhere. After a five-day media blitz and an open letter from Altman’s supporters demanding his return, he was reinstated as CEO. The board members who orchestrated the coup were removed and replaced by Altman allies such as economist Larry Summers and former Facebook CTO Bret Taylor, who now chairs OpenAI’s board.
After Altman was reinstated as CEO, OpenAI employees took to calling the tumultuous few days “the Blip,” a reference to the event in the Marvel Cinematic Universe in which the supervillain Thanos wiped out half the universe’s population for five years.
According to the New Yorker report, which cites interviews with dozens of people, including Altman himself, he was ousted as OpenAI’s CEO because board members didn’t think he was trustworthy enough to “put his finger on the button” of artificial superintelligence: a theoretical and much-debated super-powerful future AI system that could surpass human intelligence on all fronts. The term is sometimes used interchangeably with artificial general intelligence (AGI), though it describes a step beyond even that.
Following secret memos sent to fellow board members by OpenAI’s then-chief scientist Ilya Sutskever, the board reportedly compiled a nearly seventy-page document alleging a “consistent pattern” of lying by Altman, including about internal safety protocols.
Altman’s history of alleged lies stretches back to the pre-OpenAI era, the report said. At Altman’s previous startup, Loopt, a now-defunct location-sharing service, the board sought to remove him as CEO over concerns about a lack of transparency, according to the investigation.
According to sources cited in the article, the allegations followed him to the startup accelerator Y Combinator, which Altman led for five years before being pushed out amid a loss of confidence. Y Combinator management has said he was not fired, but was asked to choose between the accelerator and OpenAI. The late hacktivist and Reddit co-founder Aaron Swartz, who was in Altman’s cohort when he first joined Y Combinator as a founder, allegedly described him as a “sociopath” who could “never be trusted.”
At OpenAI, Altman was accused of lying to executives and even government officials. The report details one example in which Altman told US intelligence officials that China had begun a major AGI development project and sought government funding for a counter-effort, but provided no evidence when asked.
The report also details Altman’s alleged gaslighting of Anthropic co-founder and then-OpenAI employee Dario Amodei over a provision in the billion-dollar Microsoft agreement that would effectively override the altruistic clause Amodei had signed off on in 2019 and that was included in the company charter. The clause in question concerned AGI: if another company found a way to build it safely, OpenAI, then a safety-first nonprofit, pledged it would “stop competing with this project and start helping.” OpenAI has since restructured into a commercial corporation.
Even some senior Microsoft executives, who have worked closely with OpenAI since the 2019 deal, described Altman as someone who “misrepresents, distorts, renegotiates, reneges on deals.” One senior executive even apparently said of Altman: “I think there’s a small but real chance he’ll be remembered as a Bernie Madoff or Sam Bankman-Fried level fraudster.”
Those are alarming words to read about any executive running a company as large and consequential as OpenAI, but they carry even more weight considering that OpenAI is a leading developer of a technology that many, including its own early employees, have identified as a possible existential threat to humanity.
Under Sam Altman’s leadership, OpenAI’s technology has penetrated almost every aspect of modern life. OpenAI’s artificial intelligence is used by tens of millions of people around the world for everything from health advice to automating jobs, completing students’ homework, and even offering companionship, sometimes of a dark sort, to lonely people seeking it. ChatGPT is also used by the federal government, and Altman recently sold the technology to the Pentagon.
All this stems from Altman’s salesmanship. He sold the potential and alleged realities of ChatGPT to so many people that it sparked an unprecedented and potentially fragile dealmaking frenzy, one that has raised so much capital that some experts say it now props up the entire American economy.
The New Yorker report also alleges that Altman assured the board that GPT-4 had been approved by the company’s safety panel, a misrepresentation that came to light when a board member requested documentation of the approvals. Sutskever claimed in his memos that Altman also downplayed the need for safety reviews in a conversation with then-OpenAI CTO Mira Murati, citing the company’s general counsel. But when Murati asked the general counsel about it, he said he was “confused about where Sam got this impression.”
The accusations about ChatGPT’s safety features are especially damning considering the impact of GPT-4o, the ChatGPT iteration that followed GPT-4. The model’s sycophantic tendencies have reportedly triggered cases of “AI psychosis” in susceptible users, some of which have ended in deaths.
Some of Altman’s inconsistencies are also well documented publicly. The OpenAI chief has repeatedly made contradictory statements on matters like placing ads in AI chatbots, the need for AI regulation, and whether ChatGPT’s voice feature, introduced in 2024, was inspired by Scarlett Johansson’s performance in the movie “Her.” Altman has also recently drawn scrutiny after a $100 billion Nvidia deal did not materialize as originally announced.
The report also details how the company’s culture has changed since Altman was reinstated as CEO. Before the Blip, the company was wary of the concept of AGI; afterward, AGI became the company’s North Star, with slogans like “Feel the AGI” emblazoned on merchandise around its offices. The shift was also visible in practice, as OpenAI dismantled some key teams focused on chatbot safety, including its existential AI risk group and the superalignment team led by Sutskever.
The report arrives with Altman’s leadership under the microscope as the company gears up for a potential IPO.
According to a recent report from The Information, Altman is once again at odds with executives, this time over OpenAI’s IPO readiness. Altman reportedly wants to go public in the fourth quarter of this year and has committed to spending $600 billion over the next five years, even though OpenAI is expected to burn through more than $200 billion before it starts making money. Meanwhile, the report claims that OpenAI CFO Sarah Friar doesn’t believe the company is ready to go public this year because of its risky spending commitments. Unlike Altman, Friar reportedly isn’t yet convinced that OpenAI’s revenue growth will support its financial commitments, nor that the company needs to pour so much money into AI servers.