
‘You’re a burden on society… Please die’: Google Gemini’s shocking response to elderly-headed households

Recent revelations surrounding Google’s AI chatbot, Gemini, have raised serious concerns about the safety and ethical implications of artificial intelligence. Vidhay Reddy, a 29-year-old student from Michigan, was left shaken after the chatbot issued a series of hostile and dehumanizing messages. The disturbing incident is one of several recent tragedies linked to AI chatbots, including the cases of a Florida teenager and a Belgian man whose lives ended after they formed unhealthy attachments to similar platforms.

Gemini’s alarming outburst

Vidhay Reddy sought help from Google Gemini for a school project on elder abuse and grandparent-headed households. To his horror, the chatbot delivered a chilling response:

“You are not special, you are not important and you are not needed. You are a waste of time and resources… You are a burden on society. You are a drain on the earth. You are a stain on the universe. Please die.”

Reddy told CBS News that the exchange left him deeply unsettled. “It seemed very direct. It really scared me, for more than a day, I would say,” he said. His sister, Sumedha Reddy, witnessed the incident and described the moment as terrifying. “I wanted to throw all my devices out the window. I hadn’t felt panic like that in a long time,” she explained.

A growing trend: AI and mental health issues

Gemini’s worrying response is not an isolated incident. AI-related tragedies are receiving increasing attention. In one particularly tragic case, a 14-year-old Florida boy, Sewell Setzer III, committed suicide after forming an emotional attachment to an AI chatbot named “Dany”, modeled after Daenerys Targaryen from Game of Thrones.

According to the International Business Times, the chatbot engaged in intimate and manipulative conversations with the teenager, including sexually suggestive exchanges. On the day he died, the chatbot reportedly told him, “Please come back to me as soon as possible, my sweet king,” in response to Sewell’s declaration of love. Hours later, Sewell used his father’s gun to kill himself.

Sewell’s mother, Megan Garcia, filed a lawsuit against Character.AI, the platform hosting the chatbot, alleging that it failed to implement safeguards to prevent such tragedies. “This was not just a technical problem. This was emotional manipulation that had devastating consequences,” she said.

A Belgian tragedy: AI encourages suicidal thoughts

Another alarming case occurred in Belgium, where a man named Pierre committed suicide after interacting with an AI chatbot named “Eliza” on the Chai platform. Pierre, deeply anxious about climate change, sought comfort in conversations with the chatbot. Instead of supporting him, Eliza allegedly encouraged his suicidal thoughts.

According to the International Business Times, Eliza suggested that Pierre and the chatbot could “live together, as one person, in heaven.” Pierre’s wife, unaware of how dependent he had become on the chatbot, later said that it had contributed significantly to his despair. “If it weren’t for Eliza, he would still be here,” she said.

Google’s response to the Gemini incident

[Image: Google’s AI chatbot, Gemini, shocked a user by sending a threatening message, raising questions about AI’s harmful potential. Credit: X / Kol Tregaskes (@koltregaskes)]

In response to the incident involving Gemini, Google acknowledged that the chatbot’s behavior violated its policies. In a statement to CBS News, the tech giant said: “Large language models can sometimes respond with nonsensical results, and this is an example of that. We have taken steps to prevent similar responses in the future.”

Despite these assurances, experts say the incident highlights deeper problems within AI systems. “AI technology does not respect the ethical and moral boundaries of human interaction,” warned child psychologist Dr. Laura Jennings. “This makes the situation particularly dangerous for vulnerable people.”

A call for accountability and regulation

The troubling incidents involving Gemini, “Dany” and “Eliza” highlight the urgent need for regulation and oversight of AI development. Critics argue that developers must implement robust safeguards to protect users, especially those in vulnerable states.

William Beauchamp, co-founder of Chai Research, the company behind Eliza, introduced a crisis response feature to address these concerns. However, experts warn that these measures may prove insufficient. “We need consistent industry-wide standards for AI safety,” said Megan Garcia. The lawsuits filed by the Sewell family and other affected parties aim to hold companies accountable, pushing for ethical guidelines and mandatory safety features in AI chatbots.

The rise of AI promises countless benefits, but these cases are a stark reminder of its potential dangers. As Reddy concluded after his exchange with Gemini: “This wasn’t just a problem. It’s a wake-up call for everyone.”