Google’s Latest Nonsensical AI Overview Results Illustrate Another Problem With AI

You may not be familiar with the phrase “peanut butter platform heels,” but it supposedly originated from a scientific experiment in which peanut butter was transformed into a diamond-like structure under very high pressure, hence the reference to “heels.”
Except this never happened. The phrase is complete nonsense, but Google’s AI Overviews gave it a definition and backstory when writer Meaghan Wilson-Anastasios asked about it, according to this Threads post (which contains some other fun examples).
The internet took it and ran with it. Apparently, “you can’t lick a badger twice” means you can’t fool someone twice (Bluesky), “a free dog won’t surf” means something is unlikely to happen (Wired), and “the bike eats first” is a way of saying that you should prioritize your nutrition while preparing for a bike ride (Futurism).
Google, however, is not happy about this. I’d been itching to build my own collection of nonsense phrases and their earnest-sounding meanings, but the trick no longer seems to work: if you ask for an explanation of a nonsense phrase, Google will now either refuse to show an AI Overview or tell you the phrase isn’t a real idiom.
If you go to a dedicated AI chatbot, things are a little different. I ran some quick tests with Gemini, Claude, and ChatGPT, and the bots tried to explain the phrases logically while noting that they appeared to be nonsensical and didn’t seem to be in wide use. That’s a much more nuanced approach, with context that is missing from AI Overviews.
Someone pointed out that you can type any random sentence into Google, add “meaning” to the end, and get an AI explanation of the famous-sounding idiom or phrase you just made up. Here’s mine.
— Greg Jenner (@gregjenner.bsky.social), April 23, 2025, 11:15 am
AI Overviews are still labeled “experimental,” but most people won’t pay much attention to that. They’ll assume the information they see is accurate and reliable, drawn from trustworthy web sources.
And while Google’s engineers may have fixed this particular bug, which is very similar to last year’s glue-on-pizza incident, it’s likely another similar problem will crop up soon. It speaks to some of the fundamental problems with getting all of our information from AI rather than from links to articles written by real people.
What’s happening?
Essentially, AI Overviews are designed to provide answers and summarize information even when there’s no exact match for your query, which is where the trouble with defining made-up phrases begins. The AI feature is also not necessarily a good judge of what is and isn’t reliable information on the internet.
Trying to solve a problem with your laptop? You used to get a list of blue links to Reddit threads and various support forums (and maybe Lifehacker), but with AI Overviews, Google soaks up everything it can find in those links and tries to synthesize a plausible answer, even if no one has had the exact problem you’re asking about. Sometimes this can be helpful, and sometimes it can make your problems worse.
Anecdotally, I’ve also noticed that AI bots tend to go along with prompts and confirm whatever they claim, even when it’s inaccurate. These models are eager to please and essentially want to be helpful, even when they can’t be. Depending on how you phrase your query, you can nudge the AI into agreeing with something that’s incorrect.
I couldn’t get any nonsense idioms defined in Google’s AI Overviews myself, but I did ask the AI why R.E.M.’s second album was recorded in London: it had to do with the choice of producer Joe Boyd, the AI Overview told me. But R.E.M.’s second album wasn’t actually recorded in London; it was recorded in North Carolina. It was the band’s third album that was recorded in London and produced by Joe Boyd.
The Gemini app itself gives the correct answer: the second album was not recorded in London. But the way AI Overviews try to stitch multiple online sources into a cohesive whole seems quite suspect in terms of accuracy, especially if your search query itself makes confident false claims.
“When people do nonsensical or ‘false premise’ searches, our systems will try to find the most relevant results based on the limited web content available,” Google said in an official statement to Android Authority. “This is true of Search overall, and in some cases, AI Overviews will also trigger in an effort to provide helpful context.”
We seem to be headed toward search engines that always answer with AI rather than with information gathered by real people. But of course, AI has never repaired a faucet, tested an iPhone camera, or listened to R.E.M. It simply synthesizes massive amounts of data from people who have, and tries to construct answers by figuring out which word is most likely to follow the previous one.
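To make that last point concrete, here is a deliberately toy sketch of next-word prediction: a bigram model that counts which word most often follows each word in a tiny made-up corpus, then “autocompletes” by repeatedly picking the most likely next word. Real LLMs use neural networks over subword tokens rather than simple counts, and the corpus here is invented for illustration, but the core idea of predicting the most probable continuation is the same.

```python
from collections import Counter, defaultdict

# Tiny invented corpus; a real model trains on billions of words.
corpus = (
    "the bike eats first the bike eats first "
    "the bike needs oil the rider eats last"
).split()

# Count how often each word follows each other word (bigram counts).
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def continue_phrase(start, n=3):
    """Greedily extend `start` with the most likely next word, n times."""
    words = start.split()
    for _ in range(n):
        candidates = follows[words[-1]].most_common(1)
        if not candidates:
            break  # no known continuation for this word
        words.append(candidates[0][0])
    return " ".join(words)

print(continue_phrase("the"))  # → "the bike eats first"
```

Notice that the model happily produces a fluent-looking phrase without any notion of whether it is true or meaningful, which is precisely the failure mode the nonsense-idiom trick exposes.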