Google AI Still Recommends Adding Glue to Pizza, and This Article Is Part of the Problem

Even after Google explained what went wrong with AI Overviews and promised to improve them, the feature is still telling people to put glue on their pizza. Worse, articles like this one only compound the problem.

When they rolled out to everyone in the US shortly after Google I/O, AI Overviews immediately became the laughing stock of search, suggesting people eat rocks, use butt plugs while squatting, and, perhaps most famously, put glue on homemade pizza.

Most of these offending responses were quickly scrubbed from the web, and Google issued a somewhat defensive apology. Unfortunately, with the right phrasing, you can still get these clearly incorrect “answers.”

In a June 11 post, Bluesky user Colin McMillen said he was still able to get AI Overviews to suggest adding “1/8 cup or 2 tablespoons of white non-toxic glue to pizza sauce” when asking how much glue to add to pizza.

Granted, this question seems tailor-made to trip up AI Overviews, though given the recent discourse, a well-meaning person who isn’t terminally online might legitimately wonder what all the fuss is about. Either way, Google has promised to account for even leading questions like this (presumably because it doesn’t want its AI endorsing anything that could make people sick), but it clearly hasn’t managed that yet.

Perhaps more disappointing is that Google’s AI sourced the latest pizza-glue claim from Business Insider’s Katie Notopoulos, who most certainly did not advise people to put glue on their pizza. Rather, Notopoulos was reporting on AI Overviews’ original mistake; in covering it, she ended up being credited by Google’s AI as the source of the error.

“Google’s AI is already eating itself,” McMillen said in response to the situation.

I wasn’t able to reproduce the answer myself, but The Verge did, albeit with different wording: the AI Overview it saw still cited Business Insider, but correctly attributed the original advice to Google’s own AI. Which means the source of Google AI’s ongoing hallucination is… itself.

What’s likely happening here is that Google has blocked its AI from using sarcastic Reddit posts as sources, so it’s now turning to news articles that report on its mistakes to fill in the gaps. In other words, when Google makes a mistake and people write about it, Google then uses that coverage to back up its original claim. The Verge compared it to Google bombing, an old tactic in which people linked the words “miserable failure” to photos of George W. Bush so often that Google Images would return a picture of the president when you searched for the phrase.

Google will probably fix this latest glitch soon, but the whole thing is a bit of a “laying down the tracks as the train barrels ahead” situation, and it certainly won’t do much for the reputation of AI search. Anyway, just in case Google ever attaches my name to a future AI Overview as a source, let me be clear: don’t put glue on pizza (and skip the pineapple while you’re at it).

