Google Finally Explains What Went Wrong With AI Overviews
Google is finally explaining what the hell happened with AI Overviews. For those unaware, Google Search introduced AI Overviews on May 14, taking the formerly beta Search Generative Experience and rolling it out to everyone in the US. The feature was supposed to put an AI-powered answer at the top of almost every search, but it soon started telling people to put glue on their pizza and to follow potentially deadly health advice. While they're technically still live, AI Overviews seem to have become less prominent on the site, with fewer and fewer searches from the Lifehacker team returning an AI-generated answer.
In a blog post published yesterday, Google Search VP Liz Reid explained that although the feature was tested extensively before launch, "there's nothing quite like having millions of people using the feature with many novel searches." The company acknowledged that AI Overviews doesn't have the most stellar reputation (the blog post is titled "About Last Week"), but it also said it has discovered where the breakdowns occurred and is working to fix them.
"AI Overviews work very differently than chatbots and other LLM products," Reid said. They don't simply "generate an output based on training data," but instead perform "traditional search tasks" and deliver information from "the top web results." For that reason, she attributes the errors not to hallucinations, but to the model misreading what's already on the web. "We saw AI Overviews that featured sarcastic or troll-y content from discussion forums," she continued. "Forums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice." In other words, because the bot can't distinguish sarcasm from genuine help, it can sometimes serve up the former when you're looking for the latter.
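That framing, a model summarizing retrieved pages rather than answering from its training data alone, is essentially what's known as retrieval-augmented generation. Here's a minimal, self-contained Python sketch of the idea; the corpus, the `retrieve` heuristic, and the `overview` function are illustrative stand-ins, not anything Google has published:

```python
from dataclasses import dataclass

@dataclass
class Doc:
    url: str
    text: str
    is_forum: bool  # hypothetical flag: forum posts can be sincere or sarcastic

# Tiny stand-in corpus; the second entry mirrors the infamous glue-on-pizza joke.
CORPUS = [
    Doc("https://example.com/recipes/pizza",
        "Bake at 475F until the cheese melts and browns.", False),
    Doc("https://example.com/forum/pizza",
        "Add 1/8 cup of glue to the sauce so the cheese sticks.", True),
]

def retrieve(query: str, corpus: list[Doc]) -> list[Doc]:
    """Naive keyword overlap standing in for the 'traditional search tasks' Reid describes."""
    words = set(query.lower().split())
    scored = [(sum(w in doc.text.lower().split() for w in words), doc)
              for doc in corpus]
    return [doc for score, doc in sorted(scored, key=lambda pair: -pair[0]) if score > 0]

def overview(query: str) -> str:
    docs = retrieve(query, CORPUS)
    if not docs:
        return "No AI overview for this query."
    best = docs[0]
    # The failure mode from the blog post: a joke forum post can outrank a
    # sincere source, and the grounded summary faithfully repeats it.
    return f"According to {best.url}: {best.text}"

if __name__ == "__main__":
    print(overview("how do i get cheese to stick to pizza"))
```

Run it and the forum post wins the keyword match, so the "overview" dutifully cites the glue advice: the answer is grounded in a real retrieved source, the source just happens to be a joke.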
Likewise, when certain topics produce a "data void," meaning there's not much serious writing about them online, Reid said Overviews were accidentally pulling from satirical sources rather than legitimate ones. To combat these errors, the company says it has now made improvements to AI Overviews, stating:
We built better detection mechanisms for nonsensical queries that shouldn't show an AI Overview, and limited the inclusion of satire and humor content.
We updated our systems to limit the use of user-generated content in responses that could offer misleading advice.
We added triggering restrictions for queries where AI Overviews were not proving to be as helpful.
For topics like news and health, we already have strong guardrails in place. For example, we aim to not show AI Overviews for hard news topics, where freshness and factuality are important. In the case of health, we launched additional triggering refinements to enhance our quality protections.
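None of these mechanisms are public, but you can get a rough sense of how triggering restrictions and guardrails might compose from the sketch below. Every category name, threshold, and heuristic here is an invented assumption for illustration, not Google's implementation:

```python
# Hypothetical guardrail sketch: decide whether a query should trigger an
# AI Overview at all. All names and thresholds below are invented.

SENSITIVE_TOPICS = {"hard_news", "health"}   # assumed "strong guardrails" categories
SATIRE_DOMAINS = {"theonion.com"}            # assumed satire/humor blocklist

def is_nonsensical(query: str) -> bool:
    """Toy stand-in for a real nonsense-query classifier."""
    return "how many rocks should i eat" in query.lower()

def should_show_overview(query: str, topic: str,
                         source_domains: list[str], helpfulness: float) -> bool:
    if is_nonsensical(query):
        return False                         # improved nonsense detection
    if topic in SENSITIVE_TOPICS:
        return False                         # don't trigger on news or health
    if any(domain in SATIRE_DOMAINS for domain in source_domains):
        return False                         # limit satire and humor content
    return helpfulness >= 0.8                # only trigger where Overviews help

# Example: the rock-eating query gets filtered out before any model runs.
print(should_show_overview("How many rocks should I eat?",
                           "nutrition", ["example.com"], 0.95))  # False
```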
All of these changes mean AI Overviews probably aren't going anywhere anytime soon, even as people continue to find new ways to scrub Google's AI out of their searches. Despite the social media uproar, the company said that "user feedback shows that with AI Overviews, people have higher satisfaction with their search results," going on to talk about how committed Google is to "strengthening [its] protections, including for edge cases."
Still, there seems to be some disconnect between Google and its users. Elsewhere in its post, Google called out users for making "nonsensical new searches, seemingly aimed at producing erroneous results."
Specifically, the company questioned why someone would search "How many rocks should I eat?" The idea is that these searches would expose where data voids might pop up, and while Google said the questions "highlighted some specific areas that we needed to improve," the framing makes it seem like the problems mostly arise when people go looking for them.
Similarly, Google passed the buck on several viral AI Overview answers, saying that "dangerous results for topics like leaving dogs in cars, smoking while pregnant, and depression" were faked.
There's definitely an air of defensiveness to this post, even as Google spends billions of dollars on AI engineers who, presumably, are paid to catch errors like these before they go live. Google says AI Overviews only "misinterpret language" in a "small number of cases," but we do feel bad for anyone sincerely trying to fix their workout routine who might have followed its advice to "squat."