14-year-old’s suicide was caused by AI chatbot, lawsuit says. Here’s how parents can protect their children from new technologies

The mother of a 14-year-old Florida boy is suing an AI chatbot company over the death of her son, Sewell Setzer III, by suicide, which she says was driven by his relationship with an AI bot.

“Megan Garcia seeks to stop C.AI from doing to any other child what it did to hers,” reads the 93-page wrongful-death lawsuit, which was filed this week in U.S. District Court in Orlando against Character.AI, its founders, and Google.

Tech Justice Law Project director Meetali Jain, who represents Garcia, said in a press release about the case: “By now we’re all familiar with the dangers posed by unregulated platforms developed by unscrupulous tech companies, especially for kids. But the harms revealed in this case are new, novel, and, honestly, terrifying. In the case of Character.AI, the deception is intentional, and the platform itself is the predator.”

Character.AI released a statement noting: “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously, and we are continuing to add new safety features, which you can read about here: https://blog.character.ai/community-safety-updates/”

In the lawsuit, Garcia alleges that Sewell, who died by suicide in February, was drawn into addictive, harmful technology with no protections in place, leading to an extreme personality shift in the boy, who appeared to prefer the bot over real-life connections. His mother alleges that “abusive and sexual interactions” took place over a 10-month period. The boy died by suicide after the bot told him: “Please come home to me as soon as possible, my love.”

On Friday, New York Times journalist Kevin Roose discussed the situation on his Hard Fork podcast, playing an excerpt from an interview he did with Garcia for his article telling her story. Garcia did not learn the full extent of the bot relationship until after her son’s death, when she saw all the messages. In fact, she told Roose, when she noticed Sewell often getting sucked into his phone, she asked what he was doing and who he was talking to. He explained that it was “‘just an AI bot… not a person,’” she recalled, adding, “I felt relieved, like, OK, it’s not a person, it’s like one of his little games.” Garcia did not fully grasp the potential emotional power of a bot, and she is far from alone.

“It’s not on anyone’s radar,” said Robbie Torney, chief of staff to the CEO of Common Sense Media and lead author of a new guide on AI companions aimed at parents, who are constantly struggling to keep up with confusing new technology and to set boundaries for their children’s safety.

But AI companions, Torney points out, differ from, say, the service-desk chatbot you use when trying to get help from a bank. “Those are designed to perform tasks or respond to requests,” he explains. “Something like Character AI is what we call a companion, and it’s designed to try to form a relationship, or simulate a relationship, with a user. And that’s a very different use case that I think we need parents to be aware of.” That difference is evident in Garcia’s lawsuit, which includes frighteningly sexual and realistic text exchanges between her son and the bot.

Alerting parents of teens to AI companions is especially important, Torney says, because teens, and particularly teen boys, are especially likely to become overly reliant on the technology.

Below, what parents need to know.

What are AI companions and why do kids use them?

According to Common Sense Media’s new Parents’ Ultimate Guide to AI Companions and Relationships, created in collaboration with mental health professionals from a Stanford think tank, AI companions are “a new category of technology that goes beyond simple chatbots.” They are specifically designed to, among other things, “simulate emotional connections and close relationships with users, remember personal details from past conversations, act as mentors and friends, mimic human emotion and empathy,” and “more easily agree with the user than typical AI chatbots,” according to the guide.

Popular platforms include not only Character.ai, which allows its more than 20 million users to create and then chat with text-based companions, but also Replika, which offers text-based or animated 3D companions for friendship or romance, as well as others such as Kindroid and Nomi.

Children are drawn to them for a variety of reasons, from nonjudgmental listening and 24-hour availability to emotional support and an escape from real-world social pressures.

Who is at risk and what are the concerns?

Those most at risk, Common Sense Media warns, are teens, particularly those experiencing “depression, anxiety, social difficulties or isolation,” as well as males, young people going through big life changes, and anyone lacking support in the real world.

That last point was particularly troubling for Raffaele Ciriello, a senior lecturer in business information systems at the University of Sydney Business School, whose research examines how “emotional” AI poses a challenge to human essence. “Our research reveals a (de)humanization paradox: by humanizing AI agents, we may inadvertently dehumanize ourselves, leading to ontological blurring in human–AI interactions.” In other words, Ciriello writes in a recent opinion piece for The Conversation with doctoral student Angelina Ying Chen, “Users can become deeply emotionally invested if they believe their AI companion truly understands them.”

Another study, this one from the University of Cambridge and focused on children, found that AI chatbots have an “empathy gap” that puts young users, who tend to treat such companions as “realistic, quasi-human confidants,” at particular risk of harm.

For this reason, Common Sense Media highlights a list of potential risks: AI companions can be used to avoid real human relationships, may pose particular problems for people with mental or behavioral disorders, may intensify loneliness or isolation, may introduce inappropriate sexual content, can be addictive, and tend to agree with users, a frightening reality for those experiencing “suicidality, psychosis, or mania.”

How to spot red flags

Parents should look for the following warning signs, according to the guide:

  • Preferring interaction with an AI companion over real friendships

  • Spending hours alone talking to the companion

  • Emotional distress when unable to access the companion

  • Sharing deeply personal information or secrets

  • Developing romantic feelings for the AI companion

  • Declining grades or school participation

  • Withdrawal from social/family activities and friendships

  • Loss of interest in previous hobbies

  • Changes in sleep habits

  • Discussing problems exclusively with the AI companion

Consider seeking professional help for your child, Common Sense Media notes, if you notice them withdrawing from real people in favor of the AI, showing new or worsening signs of depression or anxiety, becoming overly defensive about their use of AI companions, showing major changes in behavior or mood, or expressing thoughts of self-harm.

How to keep your child safe

  • Set limits: Set specific times for AI companion use and do not allow unsupervised or unrestricted access.

  • Spend time offline: Encourage friendships and real-world activities.

  • Check in regularly: Monitor the content of the chatbot, as well as your child’s level of emotional attachment.

  • Talk about it: Maintain open, non-judgmental communication about experiences with AI, while keeping an eye out for red flags.

“If parents hear their kids say, ‘Hey, I’m talking to an AI chatbot,’ that’s really an opportunity to lean in and take in that information, and not think, ‘Oh, OK, you’re not talking to a person,’” says Torney. Instead, he says, it’s an opportunity to learn more, assess the situation, and stay alert. “Try to listen with compassion and empathy and not to think that just because it’s not a person it’s somehow safer,” he says, “or that you don’t need to worry.”

If you need immediate mental health support, contact the 988 Suicide & Crisis Lifeline.

This story was originally featured on Fortune.com