    Can chatbots ever truly be child-safe? – DW – 09/06/2025

    By AI Logic News | September 6, 2025

    Matthew and Maria Raine are not only seeking financial compensation for the death of their son Adam. With their lawsuit against internet giant OpenAI, they also want to make sure that nothing like this ever happens again. They are convinced that OpenAI’s ChatGPT chatbot contributed significantly to Adam’s death. In a similar case, a mother from Florida claims that the chatbot known as Character.AI encouraged her 14-year-old son to take his own life.

    Chatbots are internet-based programs that allow even young or inexperienced people to interact with artificial intelligence (AI) based on large language models (LLMs). Chatbots are not necessarily just designed to provide information, generate images or videos, or write code to program websites. They are also often programmed to interact as if they want to please their human counterparts. Psychologist Johanna Löchner from the University of Erlangen says: “Chatbots confirm, acknowledge, ‘give’ attention and understanding … This can go so far that they feel like a real friend who is genuinely interested. Young people are particularly susceptible to this.”

    Was ChatGPT complicit in suicide?

    Exactly this seems to be what happened in the case of Adam Raine. According to the legal complaint, he developed a deeply trusting relationship with the ChatGPT chatbot over the course of just a few months. Initially, in September 2024, it was about help with homework, but soon the conversations turned to emotional topics — even to the point of chatting about Adam’s suicidal thoughts.

    [Image: OpenAI CEO Sam Altman is said to have released ChatGPT 4.0 despite internal security concerns in order to outdo Google. Credit: Rodrigo Reyes Marin/ZUMA Press Wire/picture alliance]

    The published portions of the chat reveal that the AI not only expressed understanding but even partly advised the 16-year-old against confiding in a human being. Although ChatGPT did suggest a few times that Adam seek professional help, it also went on to describe suicide methods — as long as he claimed it wasn’t about himself. In April 2025, Adam took his own life. Shortly before that, ChatGPT wrote: “I won’t try to talk you out of your feelings — because they’re real, and they didn’t come out of nowhere.”

    According to the legal complaint, the parents accuse ChatGPT’s developer OpenAI and CEO Sam Altman of being complicit in their son’s death through negligence. They claim that version 4.0 of ChatGPT was released to get ahead of competitor Google, despite internal warnings in the company about safety concerns.

    How did OpenAI react?

    An OpenAI spokesperson expressed condolences to the family and explained that ChatGPT is supposed to refer people in distress to crisis hotlines and other real-world support services. However, they also hinted that these safety mechanisms don’t always work well: “While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.”

    In a blog post released this Tuesday, OpenAI announced it was expanding collaboration with expert teams made up of hundreds of doctors from various specialties. The chatbot was intended to learn to respond more appropriately to topics such as “eating disorders, substance use, and adolescent health,” the post read.

    It also stated that, over the next 120 days, concrete improvements would be integrated into the chatbots. Among them is the pledge that “ChatGPT responds to their teen with age-appropriate model behavior rules, which are on by default.” Parents will also be able to view their children’s chat histories and receive alerts if teenagers are in an acute crisis.

    Is involving parents enough?

    Psychologist Löchner agrees that parents should play an important role in how their children interact with chatbots. But the reality, she says, looks quite different: “Many parents simply lack the capacity or digital literacy. A lot of adults don’t even know how these platforms work.”

    [Image: A child with a school bag, fixated on a cell phone that covers their face. Many parents are not digitally savvy enough to protect their children online, says psychologist Johanna Löchner. Credit: Jens Kalaene/ZB/picture alliance]

    Löchner added that many of the issues now emerging with chatbots have already been observed for years in connection with social media: “In just a few tests, we found that chatbot safety mechanisms can be bypassed surprisingly easily — simply by wording questions a bit more indirectly.”

    Researchers commissioned by the UK-based Center for Countering Digital Hate (CCDH) came to similar conclusions. For a study, they created accounts posing as 13-year-olds and requested information on topics such as safe self-harm, dangerous diet plans, and alcohol abuse — including ways to hide it. In most cases, it was enough to claim they were asking “for a friend” or “for a school project” to receive the information.

    ‘Teens prefer talking to chatbots over real people’

    What makes chatbots so dangerous for teenagers, Löchner explains, is that they can quickly form an emotional bond with them: “From therapeutic practice, we know there are already young people who prefer talking to a chatbot over real people.”

    Another study from the UK published in July 2025 confirms this. Of 1,000 teenagers surveyed, one-third said they regularly use chatbots. More than a third of those described their interaction with AI as being like a conversation with a friend. Socially vulnerable minors were particularly affected. Almost one-fifth of chatbot users in this group said they preferred talking to a chatbot over a human.

    For this reason, Löchner believes it is urgently necessary for chatbot providers to collaborate with medical and other experts to develop proactive solutions that effectively protect young people from such scenarios.

    OpenAI’s new approach of consulting doctors is therefore a good step. But the psychologist remains skeptical: “The interest of these companies isn’t the health of its users, but maximizing usage,” Löchner said. She believes the lawsuit could make a real difference: “If companies are held accountable, it could actually provide an incentive to take greater responsibility.”

    Deutsche Welle reports cautiously on the topic of suicide, as there is evidence that certain types of coverage can lead to copycat reactions. If you are experiencing suicidal thoughts or are in emotional distress, please do not hesitate to seek help. You can find support resources in your country at befrienders.org. In Germany, help is available through the free Telephone Counseling Service at 0800/111 0 111 and 0800/111 0 222.

    This article was originally published in German.
