The Open Letter: A Disappointing Focus on Legalities Over Critical Issues
Former employees of OpenAI have penned an open letter warning the public about the serious risks posed by artificial intelligence. Before I read the letter, I expected it to detail the pressing concerns about AI that we should be worried about: bias and inequality, the spread of misinformation, and the risk of losing control over autonomous systems. These are the real dangers we should be working to mitigate.
But instead, the letter was about the regulations and non-disclosure agreements (NDAs) the authors were required to sign while working at OpenAI. That's a problem. These former employees are worried about a piece of paper they signed instead of warning people about the things that could truly damage society. This is baffling.
Remember the 1960 movie “Spartacus”? There's a powerful moment when the Romans capture Spartacus and his army and offer to spare the men if they give up their leader. One by one, they stand and say, “I'm Spartacus.” They stand up for what they believe. They do something.
I'm touched that the former OpenAI employees want protections for people who stand up for themselves and their values, who call out what they see as wrongs. But I would be more moved if they stood up themselves, right now.
What are they so worried about? That OpenAI will take all their money? They can write books. These are smart people; they will always land on their feet. They may think they're going up against the richest people in the world, and they are. But what have they got to lose? Everything? Sure. But what do they have to gain? They would be rewarded for taking the steps they seem to believe are needed to make the world safer in the face of unchecked AI development.
While the letter touches on important points about protecting whistleblowers, it falls short of addressing the substantive issues with AI that need urgent attention. Its focus on legal protections, while necessary, misses the larger, more critical conversation about the ethical and societal impacts of AI. It's time for these brilliant minds to channel their courage toward the pressing concerns that have far-reaching consequences for humanity.
Kristian Hammond
Bill and Cathy Osborn Professor of Computer Science
Director of the Center for Advancing Safety of Machine Intelligence (CASMI)
Director of the Master of Science in Artificial Intelligence (MSAI) Program