The Risk of Google’s Shift from Search Engine to Answer Machine

In an attempt to keep pace with Microsoft and OpenAI, Google has introduced AI Overviews, a feature that uses language models to communicate search results. The technology retrieves top search results and uses them to generate answers to queries. This approach, however, fundamentally transforms Google's role from providing search results to offering what it deems to be answers, raising significant concerns about the integrity of the information we receive.
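
To make the mechanics concrete, here is a minimal sketch, in Python, of a retrieve-then-generate pipeline of the kind described above. It is an illustration under simplifying assumptions, not Google's actual system: the retrieve and generate_answer functions are hypothetical stand-ins for a web-scale ranking engine and a large language model.

```python
# A minimal, illustrative sketch of a retrieve-then-generate pipeline.
# This is NOT Google's implementation: retrieve() and generate_answer()
# are hypothetical stand-ins for a web-scale ranker and a large language
# model. The point is structural: the generation step consumes whatever
# the retriever returns, with no step that verifies truth.

def retrieve(query: str, index: dict[str, str], k: int = 3) -> list[str]:
    """Rank documents by lexical overlap with the query (a crude proxy
    for relevance). Note: this measures relevance, not correctness."""
    query_words = set(query.lower().split())
    scored = [
        (len(query_words & set(text.lower().split())), text)
        for text in index.values()
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [text for score, text in scored[:k] if score > 0]

def generate_answer(query: str, documents: list[str]) -> str:
    """Stand-in for a language model that synthesizes retrieved text
    into a single answer. Satire or misinformation in the documents
    flows straight through, because nothing checks whether it is true."""
    context = " ".join(documents)
    return f"Answer to '{query}': {context}"

if __name__ == "__main__":
    # A toy index containing one joke post and one ordinary recipe.
    index = {
        "forum_post": "Add glue to the sauce to keep cheese on pizza.",
        "recipe": "Pizza dough needs flour, water, yeast, and salt.",
    }
    query = "how to keep cheese on pizza"
    docs = retrieve(query, index)        # relevant documents, true or not
    print(generate_answer(query, docs))  # the joke becomes "the answer"
```

Run on this toy index, the joke post ranks as the most relevant document and is synthesized verbatim into the "answer," which is exactly the failure mode discussed below.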

Traditionally, Google's search engine has excelled at providing documents that align with users' queries, leaving it to the user to discern the truth. This model has empowered users to evaluate information critically. AI Overviews, however, shifts that dynamic. Instead of presenting a range of documents for users to assess, it synthesizes information and delivers the result as an answer. This pivot from facilitating search to providing answers undermines users' ability to apply critical evaluation skills and places undue trust in the algorithm's ability to determine the truth.

Google asserts that it will continue to show links to source documents, but the likelihood of users clicking on these links diminishes once a synthesized answer is presented. This change nudges users towards passive acceptance of information, eroding their ability to engage in critical thinking and independent verification. 

Moreover, Google has always been a search powerhouse, not an arbiter of truth. The company's mission has been to help users find information, not to determine its veracity; its strength lies in providing access to documents relevant to a user's query. It has never claimed to be a purveyor of truth.

This distinction is crucial. Search results often include content from a variety of sources, including satire, humor, or outright misinformation. For instance, AI Overviews has generated bizarre suggestions, such as recommending the consumption of small rocks or the use of glue on pizza, and has incorrectly stated that Barack Obama was the first Muslim president. These results come from documents that are, in fact, relevant to the queries that gave rise to them. They simply are not true.

Google has always tried to optimize for accuracy, ensuring that the documents match the query's intent. However, accuracy in search does not equate to truth. Accurate search results simply mean that the retrieved documents are relevant to the query, not that they are factually correct. Truth is a more elusive and complex metric, often requiring careful consideration of context, source credibility, and corroboration. 

The challenge lies in the inherent difficulty of discerning truth, a task traditionally left to users. Transitioning from presenting search results to providing definitive answers demands a level of precision and responsibility that Google has never had to assume. The complexity of this shift cannot be overstated: it is not a matter of fine-tuning algorithms but a fundamental change in Google's role and responsibilities.

Google's endeavor to provide answers rather than search results risks conflating popularity and accuracy with truth. Just because information is frequently cited or aligns well with a query does not make it true. Evaluating truth requires a nuanced approach, often involving multiple sources and critical analysis—a process that cannot be fully automated. 

This situation places Google in a precarious position: the company must balance providing useful information against inadvertently becoming a curator of truth. Its shift from facilitating search to providing answers represents a profound change with far-reaching implications for our ability to discern truth in an increasingly complex information landscape.

As we rely more on AI-driven answers, we risk losing our ability to engage with information critically. 

Kristian Hammond
Bill and Cathy Osborn Professor of Computer Science
Director of the Center for Advancing Safety of Machine Intelligence (CASMI)
Director of the Master of Science in Artificial Intelligence (MSAI) Program
