OpenAI’s Quest for Artificial General Intelligence: Vision or Mirage?

OpenAI has announced a plan to achieve artificial general intelligence (AGI) within five years, an ambitious goal for a company working to design systems that outperform humans. As we examine OpenAI’s five-level roadmap, however, it’s worth asking: Is this plan a genuine pathway to intelligent machines, or just another iteration of the Turing test, a benchmark that is more game than goal?

OpenAI’s strategy is structured around five levels:  

Chatbots: AI with conversational language capabilities. While chatbots can mimic human conversation, they often lack genuine understanding or reasoning abilities, making this level more about creating the illusion of intelligence than achieving it. 

Reasoners: Systems that exhibit human-level problem-solving skills. True reasoning involves understanding context, drawing inferences, and solving problems—all crucial aspects of intelligence that go beyond holding a conversation.  

Agents: AI systems that can assist in complex tasks. These agents perform actions based on reasoning and interaction, making them valuable in applications ranging from virtual assistants to automated customer service. 

Innovators: Machines that can contribute to creative processes. AI innovators could revolutionize fields like drug discovery and materials science by providing insights and solutions beyond human capability. 

Organizations: AI that can perform the functions of entire organizations. This level raises ethical and practical concerns about control, accountability, and the potential for unforeseen consequences. 

One of the perennial problems in AI is the difficulty of defining intelligence. Alan Turing sidestepped this issue by proposing the Turing test, which measures a machine’s ability to exhibit human-like conversation. However, as history has shown, the Turing test can be “gamed.” Systems have been designed specifically to pass this test without truly understanding or generating intelligent responses.
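
To see how little machinery it takes to “game” a conversation, consider a minimal, purely illustrative Python sketch of an ELIZA-style chatbot. The rules and names below are hypothetical, not taken from any historical system; the point is that surface pattern matching and pronoun reflection alone can produce plausible replies with nothing resembling understanding behind them.

    import random
    import re

    # Reflect first-person words back as second-person so echoes sound responsive.
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

    # Each rule pairs a surface pattern with canned reply templates.
    # (Hypothetical rules, for illustration only.)
    RULES = [
        (r"i need (.*)", ["Why do you need {0}?", "Would {0} really help you?"]),
        (r"i am (.*)", ["How long have you been {0}?", "Why do you say you are {0}?"]),
        (r"(.*)", ["Please tell me more.", "I see. Go on."]),
    ]

    def reflect(fragment):
        # Swap pronouns word by word; there is no parsing and no semantics.
        return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

    def respond(statement):
        # Choose a reply by pattern matching alone; nothing is "understood."
        text = statement.lower()
        for pattern, replies in RULES:
            match = re.match(pattern, text)
            if match:
                groups = [reflect(g) for g in match.groups()]
                return random.choice(replies).format(*groups)

    print(respond("I need a vacation"))          # e.g., "Why do you need a vacation?"
    print(respond("I am worried about my job"))  # e.g., "Why do you say you are worried about your job?"

Elaborations of exactly these tricks have carried chatbots through Turing-test-style competitions. Conversational fluency, on its own, tells us little about intelligence.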

The OpenAI plan suffers somewhat from this same problem. It is an attempt to provide benchmarks for the development of AI, levels of increasing sophistication, without defining what we mean when we say “intelligence.”

Another striking aspect of OpenAI’s roadmap is the lack of consistency among the levels. “Reasoning” and “running an organization” feel like different kinds of classification. One defines a skill (“develop high-level reasoning skills”) while the other reads like an employment category (“get a job in middle management”). These feel less like scientific or engineering categories and more like marketing.

OpenAI’s five-year timetable reminds me of Marvin Minsky, one of the organizers of the 1956 Dartmouth Conference that defined AI. A decade later, when MIT (Massachusetts Institute of Technology) bought his lab a camera connected to a computer, he thought his students would solve machine vision over the summer. It was a prediction that flowed from a lack of engagement with the problem, and the endeavor has taken decades to reach maturity.

While OpenAI’s plan is ambitious, the practical implications of these levels warrant consideration. What do these levels mean for society? How will they impact various sectors? An AI that can assist in invention could transform entire fields, as noted above, while an AI that runs an organization forces us to ask who is in control, and who is accountable, when something goes wrong.

Ultimately, the quest for AGI should not just be about reaching predefined levels but about fostering collaboration between humans and machines. Human intelligence thrives on collaboration—each of us brings unique experiences and expertise that, when combined, lead to innovation and progress. Similarly, the most successful AI systems will be those that enhance human capabilities rather than replace them. 

OpenAI’s five-year plan to achieve AGI is undoubtedly a bold vision. Yet, it is essential to recognize the complexities and challenges inherent in defining and measuring intelligence. Rather than focusing on ticking off levels on a roadmap, we should aim to understand and develop the underlying functionalities that make AI genuinely intelligent. By doing so, we can create AI systems that not only mimic human abilities but also enhance our collective potential. 

Kristian Hammond
Bill and Cathy Osborn Professor of Computer Science
Director of the Center for Advancing Safety of Machine Intelligence (CASMI)
Director of the Master of Science in Artificial Intelligence (MSAI) Program
