Here’s what just happened (sorry for the long paragraph): On Friday, Sam Altman was abruptly fired by the board of OpenAI because he had not been “consistently candid in his communications”, and chief technology officer Mira Murati was named interim chief executive. On Sunday, amid widespread dismay among staff and investors, Altman and a group of allies held hours-long negotiations over his possible return, but by the end of the day a second interim chief executive, Emmett Shear, had been appointed instead. On Monday, Altman was hired to run a new AI unit at Microsoft – which owns 49% of OpenAI’s for-profit arm. Meanwhile, staff wrote an open letter to the OpenAI board saying that they would quit and join Altman at Microsoft unless he was reinstated; ultimately, about 750 of the company’s 770 staff signed. Late on Tuesday night US time, Altman was reinstated, his opponents on the board were replaced, and Microsoft said it was happy with the outcome. And yesterday, ousted board member Helen Toner said: “We all get some sleep.”

Here’s what all that means, and whether or not you should be any more worried about being squished by a computer than you were a week ago.

What made OpenAI different

The exact nature of Altman’s alleged lack of candour has not been made public, but reporting in the days since he got the boot suggests that it came down to a fundamental feature of OpenAI’s corporate structure, almost unique among its peers: a nonprofit board – tasked with ensuring the company develops AI for the benefit of humanity above its investors – sits in charge of a for-profit company. (Those profits are still capped at 100 times the initial stake for first-round investors – so a $10m investment could return at most $1bn – with any excess going back into the nonprofit.)

Altman is one of those who designed the set-up that ousted him. In theory, it should have preserved the idealistic intentions of the company’s origins as a research lab while allowing the dizzying investment needed for any major AI player. That balance is an expression of a fundamental tension in AI: how do you develop a world-changing technology at a competitive pace while mitigating the risks that the profit motive is bound to entail?

In practice, though, Altman’s removal suggests the limits of that model. “Having a check and a balance on the for-profit side isn’t necessarily impossible, but it has been so dysfunctional that it just didn’t work,” said Chris Stokel-Walker. “If you want to compete, you have to go into the market for investment, and that is going to shape how you think about things. You might want to develop a slow-moving, carefully crafted company that is focused on safety, but if a massive investor says that it needs something from that, do you have the spine to say no?”

The ‘great man’ problem

Tech companies are often reduced, however stupidly, to the influence of a single wunderkind (Jobs, Zuckerberg, Musk, Alan Sugar). The problem for the board of OpenAI was that it failed to recognise something that seemed pretty obvious to everyone else: Altman was one of those guys, and the company was worthless without him.

There are good reasons to question whether that should be true, or whether anyone should treat OpenAI’s fate as a big deal anyway. As Max Read points out, it is “one of many A.I. companies working on fundamentally similar technologies” whose transformative potential is yet to be realised; Altman himself “has never demonstrated a particular talent or vision for running a sustainable business”. Even so, he comes out of the week far more powerful than he went into it.
“It turns out that OpenAI is essentially Sam Altman now – we’ve seen that through the messianic following he’s managed to engender among his employees,” Chris said. “Silicon Valley is obsessed with troubled geniuses. The problem with a lot of the discussion of what happened is that it inevitably lionises this idea of a single, invariably male, godlike figure who can make the weather and dictate every big decision. And so you have the psychodrama of the interpersonal conflicts, but much more fundamental questions get overlooked.”

The risks of AI and managing them

You will be familiar with the darkest fear about artificial intelligence, the sort of thing you briefly contemplate over breakfast before choosing to focus on your muesli: the idea that it could ultimately be powerful and uncontrollable enough to pose an existential threat to humanity, and enslave or make mincemeat of us all. The most apocalyptic versions of that concern are, in truth, “a very niche belief within the industry”, Chris said – but there are many ways the technology could cause harm that fall short of humanity’s extinction, from ubiquitous misinformation to the hollowing out of industries that employ millions. One urgent concern, Chris said, “is that AI is already trained on biased data – on the majority of the internet, which is English language, middle class, male, from economically developed countries”.

Altman himself has been cautious in the past about some of the risks – one of the reasons his company is set up the way it is. OpenAI now looks like evidence of how difficult it is to keep guardrails in place even at a company notionally devoted to their maintenance. Meanwhile, Meta just broke up its “Responsible AI” team.

What OpenAI and the industry look like today

It’s pretty hard to imagine a more bungled boardroom coup than the one that ultimately saw Altman back in his job, and more secure in it than ever. The composition of the new board suggests it is likely to be more focused on the company’s business interests, and less preoccupied with safety concerns, though its members are governed by the same nominal “humanity over investors” rules as their predecessors. “But the barrier to intervening again is very high,” Chris said. “I doubt we’re going to see another coup in a week.”

AI will keep trucking on, and will keep freaking everyone out, whether or not OpenAI succeeds. Even if the company’s fate is mostly a matter of significance to the people invested in it, there are wider implications. “OpenAI’s board got almost everything wrong,” Platformer’s Casey Newton notes, “but they were right to worry about the terms on which we build the future, and I suspect it will now be a long time before anyone else in this industry attempts anything other than the path of least resistance.”

---

To keep up to date with all the latest developments in the AI apocalypse, sign up here for our weekly technology newsletter, TechScape.