What did Ilya see?

May 17, 2024

"What did Ilya see?" This question has haunted tech Twitter for months now, ever since the improbable series of events that led to the removal and then reinstatement of Sam Altman at the helm of OpenAI. What could have been the reason behind Ilya's initial decision to back the board and oust Sam Altman? Wasn't he feeling the AGI?

For months, it was hard to make sense of the chaotic series of events that unfolded. The board came across as erratic, acting without foresight, and its alliances were weak. Sam Altman was backed by Microsoft, and the board's allegations were not sufficient to justify his removal. Remember, unlike OpenAI, Microsoft is a for-profit company that benefits greatly from the AI boom sparked by ChatGPT; it's understandable why it would oppose any disruption.

Jan Leike recently announced his resignation from OpenAI on Twitter, and a few days later he shared a detailed thread explaining his reasons. The announcement came almost at the same time as Ilya broke his silence to announce his own departure from OpenAI. After reading the more detailed thread, one can finally begin to understand what actually happened at OpenAI.

It is less about what Ilya saw and more about what he and like-minded people (presumably including Jan) foresaw. Ilya and Jan envision a future where AI systems become ever more powerful, increasing the need for alignment and safety. That is why they co-led the superalignment team, and why there was a clash between the priorities of safety-first proponents and those eager to release shiny new products to win the AI race and push the frontier. Sam Altman was likely trying to hold these two factions together by occasionally lying like a politician and telling everyone what they wanted to hear, but the main rift seems to be about safety and alignment.

Capital, money, and power can corrupt a company's original mission. OpenAI was created as a nonprofit tasked with ensuring safe AI development. But now, given the capital required to build AI models, the partnership with Microsoft and the expectations it creates on both sides, and the competitive pressure to always release and maintain the best model out there, OpenAI is no longer as open or safety-oriented, if Ilya and Jan are to be believed. It is plausible that the two might collaborate on something related to superalignment in the future.

So, the core rift at OpenAI is about safety. Is OpenAI not doing the right thing, or were Ilya and Jan afraid of what they helped create and overreacted? On Twitter, OpenAI is regularly criticized for 'nerfing' its models for safety reasons, at a cost to their performance. Powerful open-source alternatives like Mixtral and Llama 3 are now available, even if these models are considered less 'safe.' How can we reconcile this with the belief of some insiders that OpenAI is not doing enough for safety? Furthermore, are the AI doom scenarios really warranted? There are now reports suggesting that GPT-4, OpenAI's latest flagship multimodal model, is not performing as well as initially thought, indicating that LLMs with current architectures may be hitting a plateau. If that's the case, is the importance of superalignment perhaps a little overblown?

AI undoubtedly poses safety issues, and it is important to consider the potential for nefarious actors to leverage these technologies for misinformation or other harmful acts. The core question persists: are large companies like OpenAI doing enough, or is open source the real answer to these safety issues, as certain tech leaders advocate? This is the fundamental rift that divided OpenAI.