Parachutes vs. Backpacks
In its push to stay in the GenAI race, Google has rolled out AI Overviews (AIOs): AI-generated summaries and answers that now appear in Google Search results and are, unfortunately, sometimes wrong.
Like other AI-generated content, AIOs are prone to hallucinations (fabrications), which are an inherent characteristic of the large language models behind GenAI.
Whether it's citing satirical sources like The Onion or serving up laughably wrong information like the example in the image above, this feature was clearly released before it was fully tested and reliable.
And as with many new GenAI features released by the frontier model makers, there was no clear disclaimer about the accuracy of the summaries.
While it's easy to spot the egregious hallucinations, there have also been more subtle and potentially harmful examples, such as answers about medical information, or answers that lend weight to false conspiracy theories, like the one Gizmodo found reinforcing the debunked story about Barack Obama's religious background.
While Google says the vast majority of AIOs provide high-quality information and that it is addressing problematic answers, we don't believe this is enough.
LLM makers need to do a better job of testing their products before releasing them to the public. At a minimum, these orgs should provide UX/UI updates that clearly identify what has changed, prominent disclaimers about the accuracy and potential negative consequences of the release, and a way for users to report bad, wrong, or harmful content.
AIOs are a perfect example of why these steps are needed: this feature is being layered onto a search product that handles 8.55 billion searches a day globally.
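To make those minimums concrete, here's a rough sketch of what a label-plus-disclaimer-plus-reporting flow could look like in a web UI. It's purely illustrative: the AioReport type, the disclaimer text, and the /feedback/ai-overview endpoint are hypothetical and not part of any real Google API.

```typescript
// Hypothetical sketch only, not Google's actual UI code or API.
// Illustrates the three minimums argued for above: labeling the
// AI-generated content, disclosing its limits, and letting users
// flag bad answers.

interface AioReport {
  query: string;                 // the search that produced the overview
  overviewText: string;          // the AI-generated summary being reported
  reason: "inaccurate" | "harmful" | "misleading-source" | "other";
  comment?: string;              // optional free-text detail from the user
}

const AIO_DISCLAIMER =
  "This summary was generated by AI and may contain errors. " +
  "Verify important information with the linked sources.";

// Renders the overview with a visible label, the disclaimer, and a report link.
function renderOverview(query: string, overviewText: string): string {
  return [
    `AI-generated overview for "${query}" (experimental)`,
    overviewText,
    AIO_DISCLAIMER,
    "[Report a problem with this answer]",
  ].join("\n\n");
}

// Sends a user report to a hypothetical feedback endpoint.
async function reportOverview(report: AioReport): Promise<void> {
  await fetch("/feedback/ai-overview", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(report),
  });
}
```

None of this is hard to build; the point is that labeling, disclosure, and a feedback loop should ship with the feature, not after the backlash.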
For sources & further reading, check out: