“AI Overviews” Are a Mess, and Google Seems to Know It
In its Google I/O keynote earlier this month, Google made big promises about artificial intelligence in Search, saying that users would soon be able to “let Google do the Googling for you.” The feature, called AI Overviews, launched shortly afterward. The result? The search giant spent Memorial Day weekend scrubbing its AI answers from the internet.
Since Google’s AI search became available to everyone in the US on May 14, AI Overviews have suggested that users add glue to pizza sauce, eat rocks, and use a “squat plug” while working out (you can probably guess what that one is about).
While some examples circulating on social media were clearly photoshopped for laughs, others were confirmed by the Lifehacker team: Google specifically suggested I use Elmer’s glue on my pizza. If you try to search for these answers now, though, you’ll likely see the disclaimer “An AI Overview is not available for this search” instead.
Why are Google’s AI Overviews like this?
This isn’t the first time Google’s AI-powered search results have misled users. When the beta version of AI Overviews, then known as the Search Generative Experience, rolled out in March, users reported that the AI was sending them to sites known to distribute malware and spam.
What causes these problems? Well, judging by some of the answers, Google’s AI doesn’t understand jokes. Specifically, it can’t distinguish a sarcastic post from a genuine one, and it seems to love crawling Reddit for answers. If you’ve ever spent any time on Reddit, you know what a bad combination that makes.
After some digging, users discovered that the source of the AI’s “glue in the pizza” tip was an 11-year-old post from Reddit user “fucksmith.” Likewise, “squat plugs” are a long-running joke on Reddit’s exercise forums (Lifehacker senior health editor Beth Skwarecki debunks this particular piece of unintentional misinformation here).
These are just a few examples of the problems with AI Overviews, and one more, the AI’s tendency to quote satirical articles from The Onion as gospel (no, geologists don’t actually recommend eating one small rock a day), illustrates the issue particularly well: the internet is full of jokes that, repeated with a straight face, amount to extremely bad advice, and repeating things with a straight face is exactly what AI Overviews does.
To its credit, Google’s AI search results at least cite sources for most of their claims (though it took some research to trace the origins of the glue-on-pizza advice). But unless you click through to read the full article, you have to take the AI’s word on their accuracy, which is a problem when those claims are the first thing you see in Search, at the top of the results page and in big, bold text. As you may have noticed in Beth’s examples, the words “some say” do a lot of heavy lifting in these responses.
Is Google abandoning AI Overviews?
When AI Overviews get something wrong, the results are mostly laughable and nothing more. But when it comes to recipes or medical advice, things can get dangerous. Take this outdated advice on how to survive a rattlesnake bite, or these potentially deadly mushroom-identification tips, which the search engine also served up to Beth.
Google has attempted to avoid liability for any inaccuracies by appending a notice to its AI Overviews that says “Generative AI is experimental” (in noticeably smaller text), although it’s unclear whether that would hold up in court if someone were harmed by an AI Overview’s suggestion.
There are many more examples of AI Overviews getting it wrong floating around the internet, from mistaking Air Bud for a true story to calling Barack Obama a Muslim, but suffice it to say that the first thing you see in a Google search is now even less reliable than it was when all you had to worry about was sponsored advertising.
That’s assuming you see them at all: Oddly enough, and perhaps in response to the backlash, AI Overviews currently seem far less prominent in search results than they were last week. While writing this article, I searched for general tips and facts like “how to make banana pudding” and “name the last three US Presidents,” queries that AI Overviews had previously answered confidently and without errors. Across roughly two dozen searches, I saw no Overviews at all, which seems conspicuous given an email Google spokesperson Megan Farnsworth sent to The Verge indicating that the company was “taking swift action” to remove certain offending AI responses.
Google’s AI Overviews don’t even work in Search Labs
Perhaps Google is simply being overly cautious, or perhaps the company is paying attention to how popular anti-AI workarounds have become, such as switching to the new “Web” search filter or adding udm=14 to the end of a search URL.
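If you want to try the udm=14 trick yourself, it just means adding one extra query parameter to a standard Google search URL. Here is a minimal sketch in Python; the udm=14 parameter comes from the article, while the function name and structure are purely illustrative:

```python
# Sketch: build a Google search URL that shows classic "Web" results,
# skipping AI Overviews, by appending the udm=14 parameter.
from urllib.parse import urlencode


def web_only_search_url(query: str) -> str:
    """Return a Google search URL restricted to classic web results."""
    params = {"q": query, "udm": "14"}  # udm=14 selects the "Web" filter
    return "https://www.google.com/search?" + urlencode(params)


print(web_only_search_url("how to make banana pudding"))
```

Pasting the resulting URL into a browser should land you on the plain ten-blue-links results page, with no AI Overview at the top.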
Either way, it looks like something has changed. At the top left (on mobile) or top right (on desktop) of your search page, you should now see an icon that looks like a beaker. Click it and you’ll be taken to the Search Labs page, where you’ll see a prominent card advertising “AI Overviews” (if you don’t see the beaker, sign up for Search Labs at the link above). Clicking that card reveals a toggle you can turn off, but since the toggle doesn’t actually affect Search as a whole, what’s underneath it is more interesting.
Here you’ll find an AI Overview demo, with a big, colorful “Try Example” button that displays some simple answers showing the feature in its best light. Below that button are three more “Try” buttons, except two of them no longer trigger an AI Overview. When I clicked them, I just got a normal search results page with the example queries filled into my search bar, and no response from Gemini.
If even Google itself isn’t confident in its hand-picked AI Overview examples, that’s probably a sign they shouldn’t be the first thing users see when asking Google a question.
Defenders might say that AI Overviews are simply a logical next step from the knowledge panels the company already uses, in which Search directly quotes media outlets without redirecting users to the original web page, but the knowledge panels themselves are not without controversy.
Does the AI feel lucky?
On May 14, the same day AI Overviews launched, Google Search liaison Danny Sullivan proudly touted the Web filter, another new feature that debuted alongside AI Overviews, but with far less fanfare. The Web filter hides both AI Overviews and knowledge panels, and it’s at the heart of the popular udm=14 hack. It turns out some users just want to see the classic ten blue links.
This is all reminiscent of a debate from just over a decade ago, when Google dramatically scaled back its “I’m Feeling Lucky” button. That quirky feature worked as a sort of prototype for AI Overviews and knowledge panels: Google trusted that its algorithm’s first result was correct so much that, rather than letting users check the results themselves, it simply sent them straight to it.
Searches were just as likely to surface malware or misinformation back then, but the real reason I’m Feeling Lucky died is that almost no one used it. At just 1% of searches, the button simply wasn’t worth the millions of dollars in ad revenue Google was losing by redirecting users away from the results page before they had a chance to see any ads. (You can still use I’m Feeling Lucky, but only on desktop, and only if you scroll down past the autocompleted search suggestions.)
It’s unlikely that AI Overviews will go the way of I’m Feeling Lucky anytime soon; the company has spent too much money on AI, and even I’m Feeling Lucky didn’t fully disappear until 2010. But at least for now, AI Overviews seem to be getting about as much prominence on the site as Google’s most forgotten feature. Maybe, deep down, you don’t actually want Google to do the Googling for you after all.