On Monday, The New Yorker published a lengthy investigation detailing the days leading up to and following Sam Altman’s brief ousting as OpenAI’s CEO.
Back in late 2023, OpenAI's board of directors shocked Silicon Valley by firing Sam Altman seemingly out of the blue. Following a five-day media blitz by Altman and his supporters and a public letter demanding his return, Altman came back to the company as CEO. The board members who had orchestrated the coup were ousted and replaced with Altman allies such as economist Larry Summers and former Facebook CTO Bret Taylor, who now chairs OpenAI's board.
When Altman was reinstated as CEO, OpenAI employees began referring to the turbulent few days as “the Blip,” in reference to the event in the Marvel Cinematic Universe in which the supervillain Thanos made half the universe’s population disappear for five years.
According to the New Yorker report, which cites interviews with dozens of people in the know, including Altman himself, the OpenAI executive was ousted because his own board members did not find him trustworthy enough to “have his finger on the button” of artificial superintelligence, a theoretical and highly contested class of future AI that could outperform human intelligence on all fronts. The term is sometimes used interchangeably with artificial general intelligence (AGI), although it describes a step even beyond that.
Following secret memos sent to fellow board members by OpenAI’s then-chief scientist Ilya Sutskever, the board reportedly compiled a roughly seventy-page document evidencing Altman’s “consistent pattern” of lying, including about internal safety protocols.
The report says that Altman’s alleged history of lying extends to the time before OpenAI as well. According to the investigation, senior employees at Altman’s previous startup, a now-defunct location-sharing service called Loopt, asked the board to fire him as CEO over concerns about his lack of transparency.
The accusations followed him to startup accelerator Y Combinator, which Altman led for five years until he was removed due to mistrust, according to the sources cited in the article. Y Combinator leadership has said that he wasn’t fired but was only asked to choose between the startup accelerator and OpenAI. The late hacktivist and former Reddit co-owner Aaron Swartz, who was in Altman’s cohort when he first joined Y Combinator as an entrepreneur with Loopt, allegedly described him as “a sociopath” who could “never be trusted.”
At OpenAI, Altman was accused of lying to executives and even to government officials. The report details an instance in which Altman told U.S. intelligence officials that China had launched a major AGI development project and asked for government funding to launch a counteroffensive, but then failed to show any evidence when asked.
The report also details instances of Altman allegedly gaslighting Anthropic co-founder and then-OpenAI employee Dario Amodei regarding a provision in the billion-dollar Microsoft deal OpenAI signed in 2019 that would override the altruistic clauses Amodei had included in the company’s charter. The clause in question concerned AGI: it posited that if another company found a way to build it safely, OpenAI, as a non-profit with a safety-first mission, would “stop competing with and start assisting this project.” OpenAI has since restructured itself as a for-profit corporation.
Even some Microsoft senior executives, with whom OpenAI has had a long partnership since the 2019 deal, described Altman as someone who “misrepresented, distorted, renegotiated, reneged on agreements.” One senior executive even apparently said this of Altman: “I think there’s a small but real chance he’s eventually remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer.”
Those are alarming words to read about any executive in charge of a company as large and consequential as OpenAI, but they have even more weight considering that OpenAI is the leading company creating a technology that many, including its early employees, have defined as a possible existential threat to humanity.
Under Sam Altman’s leadership, OpenAI’s technology has infiltrated pretty much all aspects of modern life. OpenAI’s AI is used by tens of millions of people around the world for health advice, and by countless others for everything from automating work across industries to finishing students’ homework and even offering murky companionship to some lonely people who seek it. ChatGPT is used throughout the federal government as well, and Altman has also recently sold the technology to the Pentagon.
This is all fueled by Altman’s salesmanship. He has sold the potential and purported realities of ChatGPT so widely that it has driven an unprecedented and potentially fragile dealmaking spree, one that has attracted so much investment that some experts say it is propping up the entire American economy right now.
The New Yorker report also claims that Altman assured the board that GPT-4 had been approved by a safety panel, a claim that unraveled when a board member requested documentation of the approvals. Sutskever claimed in the memos that Altman also downplayed the need for safety approvals in conversation with former OpenAI CTO Mira Murati, citing the company’s general counsel. But when Murati asked the general counsel about it, he said he was “confused where sam got that impression.”
The accusations around ChatGPT’s safety features are particularly damning, considering the fallout of GPT-4o, the iteration of ChatGPT that followed GPT-4. The model’s knack for sycophancy reportedly caused instances of “AI psychosis” in vulnerable users, with some cases ending in fatalities.
Some of Altman’s inconsistencies have been well-documented publicly, too. Time and again, the OpenAI chief has published contradictory statements on things like the merits of putting ads in AI chatbots, the need for AI regulation, or whether ChatGPT’s voice feature unveiled in 2024 was inspired by Scarlett Johansson’s performance in the movie “Her.” Altman was also scrutinized recently over a whopping $100 billion Nvidia deal that just did not materialize as initially announced.
The report also details how the company’s culture vastly changed following Altman’s reinstatement as CEO. Before “the Blip,” the company had approached the concept of AGI cautiously; afterward, AGI reportedly became a North Star for the company, with slogans like “feel the AGI” appearing on merchandise around its offices. The shift allegedly played out in practice, too, as OpenAI disbanded key teams focused on chatbot safety, including the existential AI risk team and the superalignment team, which Sutskever co-led.
The report comes as Altman’s leadership is put under a microscope as the company begins preparing for a potential IPO.
According to a recent report from The Information, Altman seems to be at odds with executives once again, this time regarding OpenAI’s readiness for an IPO. Altman reportedly wants to go public as soon as the fourth quarter of this year and is committing to spend $600 billion over the next five years, despite expectations that OpenAI will burn through more than $200 billion before it starts making money. Meanwhile, the report claims that OpenAI CFO Sarah Friar does not believe the company is ready to go public this year at all, due to the risky spending commitments. Unlike Altman, Friar reportedly does not yet believe that OpenAI’s revenue growth can support its financial commitments, nor is she certain that the company will even need to pour that much money into AI servers.