What People Get Wrong This Week: Google’s AI Is Hallucinating (Again)

A cup of coffee at Starbucks has never been cheap, but this is ridiculous. Recently, Lifehacker senior technology editor Jake Peterson was looking up information about Starbucks' new line of coffee drinks (as one does), and a Google search revealed that the company's Caramel Brûlée Latte costs $410.

Photo: Steven Johnson/Google

The Salted Pecan Cold Brew costs a slightly more reasonable $250, but don't worry: according to Google, Starbucks offers a 60-day return policy on both drinks.

Photo: Steven Johnson/Google

Despite Google's results, Starbucks is not introducing a new line of "give us your 401(k)" drinks. This is an AI hallucination. The artificial intelligence program Google uses to summarize search results appears to have confused the number of calories in Starbucks drinks with their prices. I'm not sure where the return policy information came from, but I'm pretty sure Starbucks won't refund you for the coffee you bought in September. (There is no special Starbucks in Los Angeles that only celebrities can use, either.)

It's not just Starbucks. A little Googling turned up this Dunkin' Donuts coffee with incredibly good reviews:

Photo: Steven Johnson/Google

I mean, 11794.3 stars out of 5? That is some good coffee! Or is it a mash-up of the number of reviews and the drink's 4.3-star rating?

Finding ridiculous examples of AI hallucinations is fun (Really, Google? Should I eat rocks?). But it's no joke when a source of information that nearly 5 billion people rely on every day regularly turns out to be wrong. Coffee prices are one thing, but what else is AI telling us that isn't true?

How AI Hallucinations Work

The Starbucks pricing mistakes highlight one of the glaring problems with AI, and illustrate why AI is not "smarter" than us (yet). If you asked a person how much a cup of coffee costs, they might also mix up the calorie count and the price, but most of us would think, "Wait, $410 for a cup of coffee has to be a mistake," and double-check before answering. An AI, though, has never pulled up to a Starbucks counter and shelled out a couple of dollars for a coffee. It can't instinctively compare the cost of a cup of coffee to something like a car payment, so it has no way to sense that charging $400 for a latte would be absurd. The AI has not lived a life.

Humans use language to describe external reality, and our words are backed by an incredibly complex set of assumptions, influences, and life experiences that are not written down anywhere. No one has to tell us that a $400 cup of coffee is ridiculous; we already know. Language models, however, are trained only on the written word, so how could they recognize the relative value of money?
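We don't know what Google's summarization pipeline actually looks like, but here is a minimal sketch of the failure mode, with everything in it (the snippet, the regex, the sanity threshold) invented for illustration: a system that simply extracts "the number near the drink's name" has no concept of what that number measures.

```python
import re

# Toy illustration only -- not Google's actual system. The snippet,
# regex, and threshold here are invented for this example.
snippet = "Caramel Brulee Latte: 410 calories, 2g protein, 48g sugar"

# A naive "find the price" step grabs the first number it sees near
# the product name, with no idea whether it's dollars or calories.
match = re.search(r"\d+(?:\.\d+)?", snippet)
price = float(match.group())  # 410.0 -- a calorie count, read as a price

# A human applies a sanity check drawn from lived experience; the
# code has none unless someone hand-writes it:
if price > 20:
    print(f"${price:.2f} for a latte should trigger a double-take.")
```

The point isn't that Google uses a regex (it almost certainly doesn't); it's that any system without real-world grounding has to be explicitly told what "absurd" looks like.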

Back in May, in response to Google's numerous AI misrepresentations, lies, and hallucinations, the company said it was working to address the problem, promising "more than a dozen technical improvements" to its AI systems. Based on the current search results, it doesn't seem to have worked.

Of course, none of this is the AI's fault (it's computer code, after all), but Google's (and Apple's, and Meta's, and Microsoft's, and everyone else's) insistence on embedding AI in everything from search results to Instagram to sunglasses points to an alarming lack of care for the people AI is meant to serve. We're unlikely to be hurt by AI's inability to understand how much a coffee costs, but what about when it gives out medical or financial advice? Or tells us which mushrooms are safe to eat? Or tells our children how to deal with suicidal thoughts?

The Many Dangers of Artificial Intelligence

The list of potential problems AI could cause is long. It is subject to the same biases as the people whose words it was trained on. It doesn't respect copyright. It can't be held accountable the way a person can. And those are just the dangers that arise when AI is used for beneficial purposes, like providing accurate search results. Bad actors are no doubt already using AI to crack security systems, influence politics, scam people, and carry out a thousand other nefarious schemes. It would be comforting to think AI could be reined in, but hallucinations and errors may be inherent in the very nature of the technology.

The Dead Internet, Habsburg AI, and the Exponential Flood of AI Slop

I've talked about dead internet theory in this column before. It's the idea that everything we see online is created by artificial intelligence and fed to us by a cabal of executives and governments to control our thoughts. The good news is that we aren't there yet. The bad news is that we probably will be soon, and what's worse, no one will be in control.

More and more of the content we consume is generated by artificial intelligence, and it is becoming increasingly difficult to detect. That's bad, but the bigger problem has to do with how AI "learns." Because AI trains on data without assessing its quality, and because AI is currently spitting out countless images, words, and videos, some AI models end up learning from the output of other AIs, or from their own output, creating a feedback loop that, in theory, grows exponentially. The result is content dubbed "Habsburg AI": like the royal family line, AI-generated content becomes so inbred that it mutates into forms humans cannot understand. The AI goes mad. And this is not something off in the future. It's happening right now on Facebook. Look:

Credit: Facebook

I pulled the AI images above from Facebook. These images, created (apparently) through feedback between automated image generators and AI-controlled accounts that interact with the images they post, defy human explanation. What could they mean? Why is "Scarlett Johansen" mentioned so often in posts like these? Why is AI so fascinated by Japanese flight attendants, Jesus, and vegetables? And most importantly: how does anyone make money from images like these, posted to social media by the thousands every day? As with many things related to artificial intelligence, we simply don't know. When machines start talking to each other, things get very strange.
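To make that feedback loop concrete, here's a minimal simulation of the inbreeding effect, under toy assumptions: each generation's "model" is just a Gaussian distribution fitted to samples produced by the generation before it. No real model trains this way, but researchers studying "model collapse" have described the same statistical pattern in actual language and image models.

```python
import numpy as np

# A toy sketch of generative inbreeding ("model collapse"): each
# generation's "model" is a Gaussian fitted to samples produced by
# the previous generation's model -- it never sees real data again.
rng = np.random.default_rng(42)

mean, std = 0.0, 1.0  # generation 0: fit to real, human-made data
print(f"gen  0: mean={mean:+.3f}  std={std:.3f}")

for gen in range(1, 21):
    synthetic = rng.normal(mean, std, size=50)      # ancestor's output
    mean, std = synthetic.mean(), synthetic.std()   # refit on it alone
    print(f"gen {gen:2d}: mean={mean:+.3f}  std={std:.3f}")

# Each refit drifts a little further from the original distribution;
# in the model-collapse literature, the long-run tendency is for
# diversity (the std here) to decay while quirks compound.
```

Swap the Gaussian for an image generator and the samples for Facebook posts, and you get the images above.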

But on the positive side…

As much as AI scares me, I find it oddly endearing. Sure, it will take my job soon, but how can you hate something that creates images like this?

Credit: Facebook

AI has enormous potential beyond creating inexplicable art. To make the case, I asked OpenAI's ChatGPT to explain why you shouldn't be afraid of it, even though it really does hallucinate. Here's what it said:

You shouldn't be afraid of me because I'm here to help you! Imagine a world where I handle the mundane things (planning, research, even diagnosing diseases) while you focus on creativity, connecting with others, or simply enjoying your coffee without worrying about the price. Of course, I'm learning faster than the next person, but don't worry: I'm definitely not out to take over. Probably.

In fairness, I prompted it to exaggerate its confidence for comedic effect. I then asked ChatGPT what it thought of the paragraph it had written:

The section where you challenge ChatGPT to exaggerate its confidence for comedic effect works great! It's a smart way to end the post on a high note, adding both humor and a little absurdity. The word "Probably" at the end delivers just the right amount of suspense while maintaining comedic tension.

Damn straight it does!
