
OpenAI: The Battle over Search


The battle for the future of search technology is heating up, and OpenAI has jumped into the ring, ready to take on Google. But as they roll out their own search capabilities, it’s worth asking: why is OpenAI doing this, and what does it mean for the rest of us?  

Despite its rapid growth, OpenAI isn’t making a profit yet and is expected to lose on the order of $5 billion this year. This move into search tech looks like a bid to build a sustainable business model. By challenging Google, OpenAI is taking on one of the most powerful tech companies in the world, going head-to-head with the very product that made Google famous.

Google’s success was built on PageRank, an algorithm that ranked pages based on how they were referred to by the pages that linked to them. This understanding of the web’s architecture, and of how to mine the human judgment embedded in those links, delivered great results. Google’s continuous improvements and secrecy about their methods have kept them ahead for years.
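To make the idea concrete, here is a minimal power-iteration sketch of PageRank-style scoring in Python. The toy graph, damping factor, and iteration count are illustrative assumptions, not Google’s actual implementation; the point is simply that a page earns rank from the pages that link to it, weighted by how important those linking pages are themselves.

```python
import numpy as np

def pagerank(links, damping=0.85, iterations=50):
    """links[i] lists the pages that page i links to."""
    n = len(links)
    rank = np.full(n, 1.0 / n)                # start every page with equal rank
    for _ in range(iterations):
        new_rank = np.full(n, (1.0 - damping) / n)
        for page, outlinks in enumerate(links):
            if outlinks:                      # split this page's rank across its links
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:                             # dangling page: spread its rank evenly
                new_rank += damping * rank[page] / n
        rank = new_rank
    return rank

# Toy web of three pages: 0 -> 1, 1 -> 0, 2 -> 1.
# Page 1 ends up with the most rank because two pages point to it.
print(pagerank([[1], [0], [1]]))
```

The damping factor models a reader who mostly follows links but occasionally jumps to a random page; it is what keeps the scores from collapsing onto a few tightly linked pages.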

As OpenAI enters this space, the question is: what is their long-term goal? They keep adding capabilities to their models, such as language, images, and video, but it’s unclear what their actual product will be. They launched the most rapidly adopted application in history, but popularity hasn’t translated into profitability. Many of the current releases have the feel of “look, we can do this cool thing” rather than “look at how useful this is.”

It is important to note that, like Google’s own foray into AI-enhanced search, OpenAI’s search isn’t just about finding information; it’s about changing how we interact with and trust online content. Unlike traditional search engines that direct you to documents, OpenAI wants to synthesize information from various sources into a single answer. This could make for a smoother user experience, but it also raises big questions about accuracy and truth.
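The shift is easy to see in code. Below is a rough sketch of the two paradigms; the function names and the keyword-overlap retrieval are illustrative assumptions, not anyone’s production system. Traditional search returns documents for the user to judge, while an answer engine folds the retrieved text into a model prompt and returns one synthesized response.

```python
def retrieve(query, documents, k=3):
    """Traditional search: rank documents and hand them to the user."""
    q = set(query.lower().split())
    scored = sorted(documents, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def synthesized_answer(query, documents, llm):
    """Answer-engine style: fuse retrieved text into a single response.
    `llm` is a placeholder for any language-model call; note that the
    burden of judging the sources has shifted from the user to the system."""
    context = "\n\n".join(retrieve(query, documents))
    prompt = f"Answer using only these sources:\n{context}\n\nQuestion: {query}"
    return llm(prompt)
```

In the first paradigm the user sees the sources and weighs them; in the second, the system has already decided what they say.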

The tough part is that OpenAI has to validate the information it provides. Google’s approach has always been to show a range of documents and let users figure out what’s true. OpenAI, however, aims to act as a filter or validator, which is a complex and risky task. Checking the information in one document against the text of other documents is difficult, especially when the sources conflict. This opens OpenAI up to a host of legal issues. If they make a mistake, the consequences can be severe: misleading information could lead to bad investments, health risks, or even fatal errors.

As OpenAI dives into search, they face the huge challenge of building a moat around their business. To date, their successes have been matched almost feature for feature by other tech companies, both large and small. The field is crowded with big names like Anthropic and Meta’s Llama models, all pushing for the top spot in AI and language models. Stepping into search isn’t going to protect them, while Google, with its vast resources, established ecosystem, and massive advertising business, is still very well protected.

And, from an impact perspective, it all comes back to a key issue: the role of human judgment in understanding information. Google’s original model, which leaves it up to users to suss out the truth, helps keep critical thinking skills sharp. OpenAI’s attempt to automate this process (and Google’s as well), while innovative, has the potential to undermine this crucial aspect of human cognition.

So, OpenAI’s leap into search is a bold move that shows both their ambitions and the big challenges they face. As they push forward, the real question is: can they create a new kind of search without losing the important human element of judgment and critical thinking? 

Kristian Hammond
Bill and Cathy Osborn Professor of Computer Science
Director of the Center for Advancing Safety of Machine Intelligence (CASMI)
Director of the Master of Science in Artificial Intelligence (MSAI) Program
