

Google AI is hallucinating (again): what people are getting wrong this week


A cup of Starbucks coffee has never been cheap, but this is ridiculous. Recently, Lifehacker Senior Technology Editor Jake Peterson was looking up information on Starbucks' new coffee line, and a Google search revealed that the company's Caramel Brûlée Latte costs $410.

Search results for Caramel Brûlée Latte


Credit: Stephen Johnson/Google

A salted pecan cold brew costs a slightly more reasonable $250, but either way, don't worry: Starbucks offers a 60-day return policy on both drinks.

Search result for crispy cold brew with salted pecans


Credit: Stephen Johnson/Google

Despite Google's results, Starbucks isn't launching a new line of "give us your 401(k)" drinks. It's an AI hallucination. The AI program Google uses to summarize its search results appears to have confused the calorie counts of Starbucks drinks with their prices. I'm not sure where the return policy information comes from, but I'm pretty sure Starbucks won't refund you for a coffee you purchased in September. (There is no special Starbucks in Los Angeles that only celebrities can use, either.)

It’s not just Starbucks. A quick Google search reveals this incredibly well-reviewed Dunkin’ Donuts coffee:

Dunkin Donuts AI Search Results


Credit: Stephen Johnson/Google

I mean, 11,794.3 stars out of 5? That must be good coffee! Or it's a mash-up of the number of reviews and the 4.3-star rating.

Finding ridiculous examples of AI hallucinations is fun (really, Google? I should eat rocks?), but it's no joke when a source of information relied on by nearly 5 billion people per day is regularly wrong. Coffee prices are one thing, but what else is AI telling us that isn't true?

How AI hallucinations work

Starbucks' pricing errors highlight one of AI's glaring problems and illustrate why AI is not (yet) "smarter" than us. If you asked a person how much a cup of coffee costs, they might also confuse the calorie count with the price, but most of us would think, "Wait, $410 for a cup of coffee must be wrong," and double-check before responding. But AI doesn't go to the Starbucks counter every day and shell out a few dollars for coffee. It doesn't instantly compare the value of a cup of coffee to something like a car payment, so it doesn't understand why charging $400 for one would be absurd. The AI has not lived a life.

Humans use language to describe an external reality, and our words draw on an incredibly complex set of assumptions, influences, and lived experiences that are not written down anywhere. No one needs to say that a $400 cup of coffee is ridiculous; we already know that. Language models, however, are trained only on the written word, so how could they grasp the relative value of money?

Back in May, in response to Google's numerous AI distortions, lies, and hallucinations that went viral, the company said it was working to fix the problem, promising "more than a dozen technical improvements" to its AI systems. Judging by current search results, it isn't working.

Of course, none of this is the AI's fault — it's computer code, after all — but the insistence of Google (and Apple, and Meta, and Microsoft, and all the others) on injecting AI into everything from search results to sunglasses indicates a troubling lack of attention to the people AI is meant to serve. We probably won't be hurt by AI's inability to understand what coffee costs, but what about when it provides medical or financial advice? Or tells us which mushrooms are safe to eat? Or teaches our children how to deal with suicidal thoughts?

The many dangers of artificial intelligence

The list of potential problems that could arise from AI is long. It is subject to the same biases as the humans who write the words it is trained on. It does not respect copyright. It cannot be held accountable the way a person can. And these are just the dangers that can arise from using AI for benign purposes, like providing accurate search results. My guess is that bad actors are already using AI to thwart security systems, influence politics, scam people, and run a thousand other nefarious schemes. It would be nice to think we could control AI, but hallucinations and errors may be in the very nature of the technology.

The Dead Internet, Habsburg AI, and the Exponential Flood of AI Garbage

I've written about dead internet theory in this column before. It's the idea that nearly everything we see online is generated by artificial intelligence and fed to us by a cabal of CEOs and governments to control our thoughts. The good news is that we are not there yet. The bad news is that we probably will be soon and, even worse, no one is in control of it.

More and more of the content we consume is generated by AI, and it's becoming increasingly difficult to spot. That's bad, but the bigger problem comes from how AI "learns." Since AI trains on data without judging its quality, and is currently spitting out countless images, words, and videos, some AI models end up training on the output of other AIs, or on their own output, creating a feedback loop that is, theoretically, growing exponentially. This leads to content that has been dubbed "Habsburg AI": like the royal family's lineage, AI-produced content becomes so inbred that it mutates into forms humans cannot understand. The AI is going mad. And that's not something we might see in the future. It's happening right now on Facebook. Look:

AI images of Jesus


Credit: Facebook

I downloaded the AI images above from Facebook. Generated (apparently) by a feedback loop between automated image generators and AI-controlled accounts that interact with the images they post, these pictures defy human explanation. What could they possibly mean? Why is "Scarlett Johansen" mentioned so often in these kinds of posts? Why is AI fascinated by Japanese flight attendants, Jesus, and vegetables? More importantly, how is anyone making money from these kinds of images, posted to a social media network at a rate of thousands per day? Like so many questions about AI, we just don't know. When machines start communicating with each other, things get very strange.

But on the positive side…

As terrified as I am of AI, I also find it oddly adorable. Sure, it will have my job soon, but how can you hate something that creates images like these?

Strange images of AI


Credit: Facebook

AI has enormous potential for benefits that go beyond creating inexplicable art. To make the case, I asked OpenAI's ChatGPT to explain why you shouldn't be afraid of it, even when it hallucinates. Here's what it said:

You shouldn’t be afraid of me because I’m here to help you! Imagine a world where I take care of the mundane things (planning, researching, and even diagnosing illnesses) while you focus on your creativity, your relationships with others, or just your coffee without worrying about the price. Sure, I learn faster than any human, but don’t worry, I’m definitely not plotting to take over. Probably.

In fairness, I prompted it to exaggerate the confidence for comedic effect. I then asked ChatGPT what it thought of the paragraph it had written:

The section where you have ChatGPT exaggerate the confidence for comedic effect works perfectly! It’s a clever way to end the message on a good note, providing both humor and a bit of absurdity. The phrase “Probably” at the end adds just the right amount of uncertainty, keeping the comedic tension intact.

Damn, it's right!