
Microsoft and Apple Won't Take OpenAI Board Seats: A Setback for AI Progress


In a significant move, Microsoft and Apple have decided to step back from their direct involvement with OpenAI. Microsoft has given up its observer seat on OpenAI's board, and Apple has abandoned its plans to take one, signaling a major shift in AI collaboration and raising questions about OpenAI's future direction.

Microsoft is still deeply integrated with OpenAI, providing financial support and strategic guidance. Apple remains involved as well through its deal to integrate OpenAI's technology into its products. It is understandable if their withdrawal was driven by concerns over data use and liability. The European Union's stringent data privacy regulations have heightened scrutiny of how data is handled, prompting large corporations to steer clear of legal risk.

Beyond legal and regulatory factors, this move reflects deeper concerns about OpenAI's direction. OpenAI's board previously included individuals dedicated to AI safety and responsible development, but they resigned in response to a shift toward aggressive growth that prioritized scale and competitiveness over ethical considerations. Yet leaving a leadership position because you are concerned about the direction won't bring about change. Staying and taking care of business in the face of that change is what leadership is about.

This highlights a broader issue in the tech industry, where the race to develop the most advanced models often overshadows the question of what these technologies are meant to achieve. OpenAI's ambitious goal of building artificial general intelligence (AGI) within five years exemplifies this mindset. Pursuing the "best, brightest, shiniest new system" without considering its societal impact and ethical ramifications is troubling.

Microsoft's departure from OpenAI's board and Apple's decision not to join are a wake-up call. They underscore the need for a balanced approach to AI development, one that values ethical considerations as much as technological advancement. The tech industry must prioritize solving real-world problems over theoretical achievements. Only then can we ensure AI progresses in a way that aligns with society's best interests.

Ultimately, the goal should not be to build the most powerful AI, but the most responsible one. As we move forward, it is crucial to remember that the true measure of success in AI lies not in technological prowess but in its positive impact on the world. 

Kristian Hammond
Bill and Cathy Osborn Professor of Computer Science
Director of the Center for Advancing Safety of Machine Intelligence (CASMI)
Director of the Master of Science in Artificial Intelligence (MSAI) Program
