Funding AI: Balancing Speed and Safety

[Image: US Capitol]

Senate Majority Leader Chuck Schumer (D-NY) is championing a bipartisan bill that would provide massive funding for artificial intelligence. The core driver behind the bill is making sure that the US maintains its competitive lead in this space.

There is some focus on issues of safety and harm, particularly around deepfakes and consent. That is, the bill would regulate the use of deepfake technology and make sure it's not used to create pornographic content without consent. There's also a broad-brushstroke notion of privacy and a requirement that government agencies using AI verify that the tools do not endanger people's rights and safety.

The bill also directs some funding toward examining how AI will impact the future of work.

But there is a tension at the heart of the bill.

The notion behind the funding is to help maintain our national competitive advantage in AI. This includes not only our commercial advantage, but also homeland security and the military. 

At some point, however, we have to recognize the tension between doing the work that will maintain our competitive advantage and doing the work that will keep things safe.

If we move as fast as we possibly can to make sure we stay ahead, then we can't spend the time to do the deeper analysis needed to assess impact: finding out where and why problems arise and how to mitigate and eventually regulate them. That's what is slightly concerning about the bill.

In his comments, Sen. Schumer kept reiterating, "This is moving so fast." Yes, it's moving fast, but that doesn't mean we should respond with knee-jerk reactions. We can be more thoughtful.

The Senate had engaged a task force focused on AI, but the task force consisted primarily of technologists explaining the technology.

That was necessary, but there were not enough researchers involved who study issues of safety and harm in general, and even fewer who look at sociotechnical metrics that attend to the human impact of these systems.

And, because it's forward-looking, the bill doesn't consider harms that already exist or how we can do a better job of regulating the systems that cause them.

In a side reference, Schumer remarked, "We don't want to have things happen with AI like what happened with social media," meaning we don't want to ignore possible harms until it is too late. But the needed follow-up idea was missing: we should go back to what we allowed to happen in the past and try to fix the harms that already exist.

In general, the bill pushes for more funding for AI. The big question is how that funding is going to be apportioned.

Is it going to flow to the academic world? The commercial world?  

Is it going to be used to move the technology forward? Make the technology more accessible?  

Or is it just going to accelerate things and give us more technology without considering the risk?

Kristian Hammond
Bill and Cathy Osborn Professor of Computer Science
Director of the Center for Advancing Safety of Machine Intelligence (CASMI)
Director of the Master of Science in Artificial Intelligence (MSAI) Program