The recent OpenAI debacle showed us what humanity must stay wary of
Jaspreet Bindra

Maureen Dowd writes in The New York Times (https://nyti.ms/413XeOe) of a Twilight Zone episode in which aliens who have recently landed on Earth give world leaders a book to signal their intentions of peace and cooperation. Earth's experts work hard to translate the alien language and are relieved when they interpret the book's title: "To Serve Man". They welcome the aliens enthusiastically, but realise very soon, and too late, that the book is actually a cookbook!
The OpenAI Board must have been watching this episode when it suddenly and inexplicably decided to dismiss the company's charismatic founder-CEO Sam Altman in a half-hour of frenzied activity. The reverberations felt around the planet were similar to an alien landing, as the tech world and journalists tried to make sense of this completely unexpected action. The Board gave a cryptic but damning reason, that Altman was not "consistently candid in his communications with the board", but clammed up afterwards. In this vacuum, various theories started floating. There were whispers about Altman "lying to the Board". Some observers speculated that he had done something grossly inappropriate, but that was promptly denied by the Board as well as by OpenAI's Chief Scientist. An outrageous theory floated was that Altman and his team had actually discovered AGI (Artificial General Intelligence), and that the Board was so shaken by the implications that it dismissed him instantly; a new development called Q Star (sounding dangerously like QAnon) was named as that AGI. There were dark mumblings about allegations within the Altman family, and even conspiracy theories that Microsoft, the biggest investor in OpenAI, was behind the action in a bid for more control.
In fact, no one was more shocked than Microsoft and its charismatic CEO Satya Nadella. Nadella and his company had gone all-in with Altman and OpenAI to develop the next generation of Microsoft's software productivity products, called Copilots. Meanwhile, it started emerging that the reason for the abrupt breakup was more prosaic: internal boardroom intrigue. One of the Board members, Helen Toner, who is also a director at the Center for Security and Emerging Technology at Georgetown University, had co-written a paper that seemed to pan OpenAI for "stoking the flames of AI hype". Altman was miffed and, besides remonstrating with Toner, reportedly started conspiring with other Board members to remove her. However, four of the members (excluding him and another co-founder, Greg Brockman) closed ranks and decided to remove Altman instead. A surprising member of this quad was Ilya Sutskever, another co-founder and a close friend of Altman's. Reports suggest that Sutskever, a purist when it came to AI research and development, was worried about the pace and direction of OpenAI's product launches and had warned against them both privately and publicly.
The rest of the story is well known: Altman was back as CEO within five days, not least because of a brilliant masterstroke by Microsoft, which offered to hire him and extended an open offer to the rest of OpenAI's 700-odd employees, 95% of whom threatened to accept it. What is more significant is how this plays out going forward. The story here is not so much about a tiff among Board members as about a much more existential question: how should this very powerful technology be developed? The Board, through its admittedly ham-handed actions, still claimed to be acting for 'humanity', as it was worried about AGI and safety. Ranged against it were the powerful economic and commercial instincts of value creation and profit. OpenAI was deliberately structured unconventionally, empowering the Board to act if it felt that humanity was in danger, but its gambit failed.
In many ways, this is a familiar story, and I have often written about it: the problem is usually not the technology, but the business model. This is true of social media, where the data-monetization business model has turned social networks into the morass that they are. It is what makes Elon Musk rail against advertisers, who seem to be "blackmailing" him and his noble intentions for X. Google succumbed to the innovator's dilemma for similar reasons, afraid that the transformers and LLMs it discovered could hurt its lucrative Search business model. And so it will be for Generative AI and OpenAI. The company twisted and turned itself into a pretzel-like corporate structure that enabled it to raise billions of dollars from Microsoft and reward employees at a predicted $86 billion valuation, while simultaneously claiming to build AGI for the good of humanity and not necessarily for economic gain. These divergent, fissile forces could not be sustained forever; the strain became too much, and the model broke that fateful week. The outcome was clear: the capitalistic business models had won over the quest for pure societal good. The aliens are already amongst us, and humanity should be worried.