The latest we know about Sam Altman’s potential return to OpenAI after a chaotic weekend of boardroom drama

By Chris Stokel-Walker

Update: Shortly after this story was published, it was reported that the OpenAI board had hired former Twitch CEO Emmett Shear as interim CEO, deciding not to reinstate Sam Altman.

If the unexpected firing of Sam Altman as CEO of OpenAI on November 17 was—as has been reported—a battle in the broader AI field’s schism between those pursuing aggressive development of the technology and more cautious voices who fear it poses an existential threat, then his potential return to the same position would be a firm win for the former camp. But the move, which had been in the works all weekend, remains in flux.

The latest we know is that Altman is working on potentially rejoining the company after the board fired him, blaming his lack of transparency—a decision that triggered the resignation of fellow cofounder, president, and board chair Greg Brockman, who hit back with a pointed tweet outlining the timeline behind Altman’s departure.

Reporting from The New York Times as of the night of Sunday, November 19, pointed to busy negotiations over the former CEO’s return, the installation of new board members, the potential departure of the current board, and a shift in the company’s complex corporate structure. At the same time, Bloomberg reported that while interim CEO Mira Murati was working to bring back Altman and Brockman, the current board was simultaneously looking at new CEO candidates to take over the company, a move at odds with the reported desires of many of OpenAI’s investors.

Over the weekend, dozens of OpenAI staff appeared to rally behind Altman, visiting his California home on Saturday to offer their support. That support was replicated online, with employees mass-quote-tweeting a message from Altman in a show of strength that might have given those who fired him second thoughts. Repairing the massive fissure the last few days have caused—while quelling the underlying tension inherent in OpenAI’s structure and goals—will be a tricky task, should he end up returning.

According to reports, the consensus has been that Altman’s firing on Friday stemmed from a rift between those committed to OpenAI’s founding principles—to develop artificial general intelligence (AGI) for the benefit of humanity, without the need to turn a profit—and those who recognize the company’s power to capitalize on its GPT large language models. OpenAI’s chief operating officer Brad Lightcap assured staff in an internal memo that the departure was not due to corporate malfeasance, hinting instead that it stemmed from a clash of personalities and dueling goals for the company.

“It highlights how hard governance is,” says Jeremy Howard, cofounder of AI company fast.ai and digital fellow at Stanford University. “The board is getting yelled at for literally doing [its] job.” OpenAI’s corporate structure caps the firm’s potential profits, and the board overseeing it is required by its founding charter to act in the best interests of an associated nonprofit.

Howard points out that the supposed reasons for Altman’s summary firing last week appeared to highlight how those two corporate entities are in direct conflict with one another. “They have a charter to follow, so that’s what they’re doing,” he says. “And people are mad at them, a nonprofit, for not focusing on profit.”

Complicating matters further are the interpersonal foibles within OpenAI. The Information first reported tensions between Altman and others at the company, notably cofounder and chief scientist Ilya Sutskever, over differing visions for how OpenAI should evolve.

Concerns over the pace of AI development are not unique to OpenAI. “Building products that people will use is not about having the fanciest, flashiest model, but having the most reliable product that can be built into the systems that are influencing our lives,” says Rumman Chowdhury, chief scientist at Parity Consulting, a tech consulting company. “In order to make decisions on what AI products should be released, we need empirical evidence to help drive decisions.”

Safety and profit-making can sometimes act in opposition to one another. “They chose a CEO whose background is entirely in the profit-making startup and VC industries and a CTO with a fintech background, and they provided most compensation to employees based on ‘profit participation,’” says Howard. “It’s a recipe for disaster.”

Noah Giansiracusa, a professor of mathematics and data science at Bentley University, who has been tracking the AI sector, is similarly struck by the inherent tension between the goals of OpenAI’s for-profit and nonprofit arms. “It sounds like part of the issue is the board felt Altman was too focused on commercialization and rushing products to market, which bothered some of the more die-hard AI safety types,” he says. “But if that’s their feeling, I wonder why they chose a guy whose background is tech startups and business investments as their CEO.” (Altman, notably, was president of startup accelerator Y Combinator between 2014 and 2019.)

For Giansiracusa, Altman’s task was always going to be a balancing act. “Leaders of AI firms, especially ones aiming high with talk of things like AGI, have to walk a tightrope: People get upset and leave if they’re too slow and cautious, people get upset and leave if they’re too rushed and reckless,” he says. “Altman was moving too fast for the tastes of some and too slow for the tastes of others. The current membership of the board seems to view him as moving too fast.” It can be easy to forget that OpenAI released ChatGPT just 354 days ago.

Fast Company
