The AI Summarization Dilemma: When Good Enough Isn’t Enough

A recent article in The Intelligencer highlighted a growing concern with AI summarization: these models aren't perfect. When summarizing your email, for instance, a model might get a detail like a date or location wrong. These seemingly minor errors can have significant consequences, especially when we rely on AI to handle information we would otherwise read ourselves.

The primary purpose of AI summarization is to help us manage the overwhelming amount of information we're inundated with daily. It's for those documents, emails, and notifications that you don't want to read but still need to keep track of. The problem is that if you're using AI because you don't want to do the reading, you're also unlikely to catch its mistakes. This isn't just a minor flaw; it's a question of task fit. If you need something to be 100% correct, AI summarization isn't the tool for the job. However, in cases where being 90% right is acceptable, these tools can be incredibly effective and can significantly streamline workflows.

Consider the situation in Wyoming, where mayoral candidate Victor Miller proposed letting an AI serve as the real mayor. His idea was that the AI could read and summarize legislation for him. The catch was that he didn't want to read the documents himself, which meant there would be no check on the system or its accuracy. In the end, voters rejected the notion of an AI mayor, preferring human candidates in the primary race.

This brings us to a crucial question: What level of accuracy are we comfortable with? In some contexts, being mostly right is good enough—brainstorming ideas, for instance. But in others, the consequences of errors can be dire. Imagine AI summarizing a conversation between two doctors discussing a patient. If it gets the core semantics right, that’s fantastic. But what if it mixes up drug names or diseases? The implications could be catastrophic. 

Now consider a car analogy. If your car's windows malfunctioned 1% of the time, it might be annoying, but you could live with it. If your brakes failed 1% of the time, however, that would be unacceptable and potentially fatal. The same applies to AI summarization: it all depends on the nature of the task and the potential impact of failure within that context.

While AI summarization can be a powerful tool for managing information overload, we need to carefully consider when and where it's appropriate to use it. The stakes are too high to leave everything to chance, or to a machine.

Kristian Hammond
Bill and Cathy Osborn Professor of Computer Science
Director of the Center for Advancing Safety of Machine Intelligence (CASMI)
Director of the Master of Science in Artificial Intelligence (MSAI) Program
