Google’s AI Search Blunders: What Happened and What’s Next
Introduction
When Google’s new AI search feature produced bizarre and misleading answers, the company initially downplayed the issues. However, Liz Reid, Google’s head of search, later acknowledged the problems and outlined steps for improvement.
Viral AI Mistakes
Two of the most widely shared incorrect AI-generated answers were:
- Eating Rocks: Google’s AI suggested that eating rocks could be beneficial, based on a satirical article from The Onion.
- Glue on Pizza: The AI recommended using non-toxic glue to thicken pizza sauce, misinterpreting sarcastic content from discussion forums.
Understanding the Errors
Reid explained that the rock-eating suggestion stemmed from a lack of reliable sources on the topic, leading the AI to misinterpret a satirical piece as factual. The glue-on-pizza error was attributed to the AI’s failure to recognize sarcasm in forum posts.
The Importance of Context
Reid emphasized that judging Google’s new search feature based on viral screenshots is unfair. She noted that extensive testing was conducted before the launch, and data shows users value AI Overviews.
The Role of User Behavior
Reid pointed out that some errors were the result of users intentionally trying to produce erroneous results. She stated, “There’s nothing quite like having millions of people using the feature with many novel searches.”
Fake Screenshots and Misleading Information
Google claims some viral screenshots were fake. For instance, a widely viewed post on X suggested a cockroach could live in a human body, but its format didn’t match actual AI Overviews. The New York Times also corrected its reporting that AI Overviews had suggested dangerous actions; those results were dark memes circulating online, not real outputs.
Technical Improvements
Reid mentioned that Google made over a dozen technical improvements to the AI Overviews, including:
- Detecting nonsensical queries more reliably
- Limiting reliance on user-generated content from sites like Reddit
- Showing AI Overviews less often when they are unlikely to help
- Strengthening guardrails on important topics like health
Conclusion
Google will continue to monitor user feedback and adjust the AI search features as needed, without significantly rolling back the AI summaries.