More than 95% of OpenAI employees are considering quitting after the company fired co-founder Sam Altman. The upheaval at OpenAI, a leading company in the AI industry, exposes internal chaos and has effects reaching well beyond the company: it is influencing investor attitudes, shaping how competing firms respond, and driving more urgent conversations about AI regulation in Europe.
According to the Financial Times, citing sources with direct knowledge of the matter, 747 of 770 employees have signed a letter stating they might leave to join Microsoft:
…unless all current board members resign, and the board appoints two new lead independent directors, such as Bret Taylor and Will Hurd, and reinstates Sam Altman and Greg Brockman.
743 now. >95% of the company https://t.co/7MMUvWaTWq
— Evan Morikawa (@E0M) November 20, 2023
The letter stated that the conduct of the board of directors at OpenAI has “made it clear you did not have the competence to oversee OpenAI.”
It also said that:
The leadership team suggested that the most stabilizing path forward – the one that would best serve our mission, company, stakeholders, employees and the public – would be for you to resign and put in place a qualified board that could lead the company forward in stability. Leadership worked with you around the clock to find a mutually agreeable outcome. Yet within two days of your initial decision, you again replaced interim CEO Mira Murati against the best interests of the company. You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”
Contextual Backdrop: Venture Capitalists and Leadership Changes at OpenAI
The unrest extends beyond employees to some of the venture capitalists with a stake in OpenAI, who are exploring legal options to reverse the board’s controversial decision.
The issue is further complicated by Ilya Sutskever, OpenAI’s chief scientist and last remaining co-founder on the board, who has shown support for the employees and signed the letter after publicly apologizing on X for his part in Altman’s firing.
I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.
— Ilya Sutskever (@ilyasut) November 20, 2023
These events come on the heels of the appointment of Emmett Shear, co-founder of Twitch, as interim CEO, the third leadership change at the AI firm within just three days.
The leadership upheaval began with the dismissal of Sam Altman, the previous CEO, over issues reportedly related to transparency in communication with the board. The situation escalated quickly, reflecting the high stakes involved, given OpenAI’s prominence in the tech industry and its backing by influential players like Microsoft.
The aftermath of Altman’s departure has been turbulent. OpenAI employees expressed strong discontent, hinting at a potential mass exit in solidarity with Altman. This sentiment was further fueled when Satya Nadella, CEO of Microsoft, announced that Altman, along with Greg Brockman, who had resigned as president, and several key researchers, would be joining Microsoft to lead a new AI research team.
We remain committed to our partnership with OpenAI and have confidence in our product roadmap, our ability to continue to innovate with everything we announced at Microsoft Ignite, and in continuing to support our customers and partners. We look forward to getting to know Emmett…
— Satya Nadella (@satyanadella) November 20, 2023
Industry Ripple Effect: Competitors Rush to Hire OpenAI Workers
Amid these leadership challenges, companies like Anthropic and Cohere saw increased interest from OpenAI’s clients, who are exploring alternatives. These rivals are also actively courting OpenAI’s talent; one investor noted their keen interest in hiring the company’s staff.
For example, Marc Benioff, the CEO of Salesforce, reached out to OpenAI researchers via X, encouraging them to apply at Salesforce and promising to match their salaries.
Similarly, Mustafa Suleyman, the founder of AI start-up Inflection, commented on the situation at OpenAI, while highlighting that Inflection is expanding.
Utterly insane weekend. So sad. Wishing everyone involved the very best.
In the meantime, we finished training Inflection-2 last night! ✨
It's now the 2nd best LLM in the world… & we're scaling MUCH further. Details v soon.
Come run with us!
— Mustafa Suleyman (@mustafasuleyman) November 20, 2023
These overtures come on top of Nadella’s open offer for all OpenAI employees to join Microsoft. The Microsoft CEO told the “On with Kara Swisher” podcast:
We will definitely have a place for all AI talent to come here and move forward on the mission, and we will be supportive of whoever remains even at OpenAI or whatever.
In the episode, Nadella also expressed that OpenAI should have at least discussed it with Microsoft before deciding to remove Sam Altman, considering Microsoft’s significant investment in the company.
Microsoft has poured billions into OpenAI, leading to a technology-sharing partnership. Microsoft’s Bing search engine utilizes OpenAI’s ChatGPT, and in return, OpenAI relies on Microsoft’s Azure cloud services.
European Response: The EU AI Act and Regulatory Perspectives
The dramatic events at OpenAI have not gone unnoticed in Europe, where lawmakers are keenly aware of the implications for AI governance.
Brando Benifei, a key figure in the European Parliament shaping AI legislation, emphasized the repercussions of Altman’s dismissal in a statement to Reuters.
The understandable drama around Altman being sacked from OpenAI and now joining Microsoft shows us that we cannot rely on voluntary agreements brokered by visionary leaders. Regulation, especially when dealing with the most powerful AI models, needs to be sound, transparent and enforceable to protect our society.
France, Germany, and Italy Agree on AI Regulations Proposal
On Monday, according to a Reuters report, France, Germany, and Italy reached an agreement on how AI should be regulated. This development is expected to accelerate discussions at the European level.
According to a paper seen by Reuters, these countries advocate for mandatory self-regulation through codes of conduct, particularly for foundational AI models. While supporting the notion that AI technology itself is not inherently risky, the paper underscores the need for clear guidelines in the application of AI systems.
The proposal suggests the use of “model cards,” which provide crucial information about a machine learning model’s functionality, capabilities, and limitations. For now, these guidelines would carry no sanctions for non-compliance, but the three countries see them as the starting point for a more structured regulatory framework.
Contextualizing the EU AI Act
These developments occur against the backdrop of the European Union’s broader effort to pass the AI Act. This comprehensive legislation aims to regulate the use of AI across the EU, mandating risk assessments and data transparency for companies.
The AI Act’s negotiation, involving the European Commission, the European Parliament, and the EU Council, has faced challenges, particularly over how much self-regulation companies should be allowed. The unfolding situation at OpenAI, marked by Altman’s controversial departure and the reaction that followed, has only added to the urgency and complexity of these discussions.