
Silicon Valley leaders are calling Sam Altman’s firing the biggest tech scandal since Apple fired Steve Jobs—but the leading theory about the OpenAI drama tells a different story


Apple lost its cofounder, 30-year-old Steve Jobs, in 1985, a famous moment in tech and business history: the maker of the Macintosh parted ways with the face of personal computing, more than a decade before their fateful reunion and everything that followed, from the iPod and the iPhone to the $1 trillion valuation. Now AI has its own “Steve Jobs: Act One” moment, as Sam Altman, the 38-year-old face of the AI boom, has been fired by OpenAI’s board for the unexplained sin of being “not consistently candid in his communications.” Act Two is sure to follow, but, just as with Jobs’ ouster nearly four decades ago, the exact reasons for the firing are still shrouded in mystery.

Tech watchers are drawing the comparison. “What happened at OpenAI today is a Board coup that we have not seen the likes of since 1985 when the then-Apple board pushed out Steve Jobs,” legendary angel investor Ron Conway posted on X late on Friday. “It is shocking; it is irresponsible; and it does not do right by Sam & Greg or all the builders in OpenAI,” Conway said. 

Bloomberg writer Ashlee Vance made the same comparison, posting: “This is like Apple firing Steve Jobs only they’re doing it after the iPhone has become the best selling computer in history.”

A day after Altman’s sudden dismissal, it remains a mystery why the most important company in the AI-driven “fourth industrial revolution” abruptly dismissed its superstar CEO. Altman, who co-founded OpenAI in 2015 while running the prestigious tech incubator Y Combinator, has presided over the startup as its ChatGPT bot surged to popularity. Unusually for a tech founder, Altman held no equity stake in OpenAI and so never exercised the kind of control that founders like Mark Zuckerberg are known for; unlike other tech leaders, his fame came not from engineering brilliance but from his ability to raise large amounts of money and his bets on ambitious, world-changing technology.


There are undeniable parallels with the Steve Jobs story. Jobs founded Apple with Steve Wozniak in 1976, when he was just 21. Four years later, Jobs was worth $200 million; the following year, he made the cover of Time.

By the time he was 30, Jobs was Apple’s chairman and head of its Macintosh division, working alongside CEO John Sculley, whom he had recruited from Pepsi. But Jobs’ hard-charging personality and zeal for perfection clashed with Sculley and Apple’s board members. Jobs was “uncontrollable,” according to one early Apple board member; Sculley, in a later memoir, excoriated Jobs as “a zealot, his vision so pure that he couldn’t accommodate that vision to the imperfections of the world.” 

The tensions came to a head in 1985 after sales of two Apple products, the Lisa and the Macintosh, fell short of expectations and Sculley and Jobs brought their differences to the board. The board sided with Sculley, and Jobs immediately quit (or, depending on whom you ask, was fired). That same day, Jobs filed incorporation papers for NeXT, which he would run for the following decade. Apple bought the company in 1997, setting the stage for Jobs’ triumphant return 12 years after his ouster. 

The exact cause of the messy Altman/OpenAI divorce is still unclear, but one leading early theory points to tensions between OpenAI’s nonprofit origins and its current status as one of the most powerful tech companies in the world, a strain that overlaps with a broader AI industry schism between “accelerationists” and “doomers.”

The fateful Friday afternoon

Speculation is rife over the reason for Altman’s surprise firing on Friday and the subsequent resignation of President Greg Brockman a few hours later.

“Sam and I are shocked and saddened by what the board did today,” Brockman posted on X. According to Brockman, OpenAI co-founder and Chief Scientist Ilya Sutskever asked Altman to join a video meeting with the board at noon on Friday, where they informed Altman he was being fired. Brockman, who was not part of the meeting, was stripped of his chairman title as part of the leadership overhaul, but the board planned to keep him on staff, according to OpenAI’s statement. Since then, three senior scientists have resigned from the company, Ars Technica reports.

With the departure of Altman and Brockman, Sutskever is the only one of the company’s founders who remains at OpenAI. (Another co-founder, Tesla CEO Elon Musk, stepped back in 2018, citing a conflict of interest between OpenAI and Tesla’s autonomous ambitions, though some reports say it was due to a power struggle.) 

OpenAI’s board of directors includes Sutskever; Quora CEO Adam D’Angelo; Tasha McCauley, a tech entrepreneur and adjunct senior management scientist at the RAND Corporation; and Helen Toner, director of strategy at Georgetown University’s Center for Security and Emerging Technology. (Three other board members—former Texas congressman Will Hurd, Neuralink director Shivon Zilis, and LinkedIn co-founder Reid Hoffman—stepped down earlier this year.)

From do-gooder startup to tech juggernaut

As The New York Times reported at its formation in 2015, OpenAI was explicitly set up as a nonprofit artificial-intelligence research center, with the specific goal of developing a “digital intelligence” to benefit humanity. Fast forward eight years to early 2023, and ChatGPT was exploding into the mainstream consciousness, becoming one of the most quickly adopted technologies in history after its late-2022 launch and sending a shock through a Wall Street weathering a bear market. OpenAI’s big-bang moment was its announcement of a $10 billion investment from Microsoft in January, a huge payday that elevated Altman as the face of AI and instantly upstaged incumbent AI powers like Google and its DeepMind subsidiary. AI’s benefits to humanity quickly became the big debate point. 

On one side are the so-called accelerationists, who see the productivity gains from this near-magical tech breakthrough as the next leap forward for capitalism. Top tech analyst Dan Ives of Wedbush Securities dubbed it “the fourth industrial revolution” and compared it to the mid-1990s dotcom boom rather than the bubble that later burst. Erik Brynjolfsson, a Stanford economist who specializes in technology and its impact on productivity, sees work getting twice as efficient over the next decade thanks to AI. Silicon Valley’s venture capital community has enthusiastically backed this argument, with SoftBank’s Masayoshi Son moved to tears as he described AI giving birth to a “superhuman,” and Marc Andreessen writing an eccentric, much-criticized “techno-optimist manifesto.”

On the other end of the philosophical spectrum are the “doomers.” For the doomers, all the rosy predictions of AI utopia are inextricable from the reverse: that AI has the Terminator-like potential to rebel against its maker and poses an existential risk to humanity. (There is also the semi-doomer complaint that the tech will displace millions of workers from their jobs and fuel even greater disinformation and media disintegration.) Foremost among the doomers, perhaps surprisingly, is OpenAI co-founder Elon Musk. He famously quit the nonprofit because he believed it was straying too far from its original mission, and has repeatedly warned that the technology is fundamentally dangerous to humanity.

And you don’t need to be an alarmist doomsayer to be concerned about the risks of AI. For all of ChatGPT’s explosive success, its tendency to “hallucinate” answers when prompted (i.e., confidently generate false information) has never gone away. In fact, “hallucinate” was the Cambridge Dictionary’s word of the year for 2023.

As more details of the OpenAI drama emerge from investors, employees, and other parties, the philosophical divide within the organization looks like an important part of what led to Altman’s ejection. A person with direct knowledge of the matter told Bloomberg that Altman and the board clashed over the pace of development, how to commercialize products, and how to lessen potential harms. The New York Times’ Kevin Roose reported hearing from several current and former OpenAI employees that Altman and Brockman “could be too aggressive when it came to starting new products.” Kara Swisher’s sources are saying much the same.

And OpenAI’s corporate structure is built on that doomer-friendly philosophy: The company retains the nonprofit’s mission and board, which oversees a capped-profit subsidiary established in 2019. OpenAI’s directors are bound not to Milton Friedman-style shareholder theory but to the mission of creating “safe AGI (artificial general intelligence) that is broadly beneficial.” If Altman was ejected in a boardroom coup, as some have described it, the board’s putative mission and mindset are probably relevant to the events.

Altman appears to have been totally blindsided by his firing; he had represented OpenAI publicly at the APEC summit in San Francisco earlier this week, alongside President Joe Biden, Microsoft’s Satya Nadella, and Google’s Sundar Pichai, and he has been talking up OpenAI’s commercial possibilities of late. At APEC, Altman said he was “super excited” about AI, “the greatest leap forward of any of the big technological revolutions we’ve had so far.” While saying he understood the concerns of doomers, name-checking the historian and public intellectual Yuval Noah Harari, Altman nonetheless placed himself firmly in the accelerationist camp, comparing AI to “the Star Trek computer I was always promised and didn’t expect to happen.”

So it’s possible that Altman was fired by the equivalent of idealistic nonprofit directors who thought he was straying too far from OpenAI’s “beneficial” mission. If that turns out to be the case, then for all the similarities between Apple and OpenAI, AI’s “Steve Jobs moment” augurs an altogether stranger next chapter in Silicon Valley history.




