
OpenAI announces parental controls for ChatGPT after teen’s suicide

By AI Logic News | September 3, 2025

    OpenAI has announced plans to introduce parental controls for ChatGPT amid growing controversy over how artificial intelligence is affecting young people’s mental health.

    In a blog post on Tuesday, the California-based AI company said it was rolling out the features in recognition of families needing support “in setting healthy guidelines that fit a teen’s unique stage of development”.

    Under the changes, parents will be able to link their ChatGPT accounts with those of their children, disable certain features, including memory and chat history, and control how the chatbot responds to queries via “age-appropriate model behaviour rules”.

    Parents will also be able to receive notifications when their teen shows signs of distress, OpenAI said, adding that it would seek expert input in implementing the feature to “support trust between parents and teens”.

    OpenAI, which last week announced a series of measures aimed at enhancing safety for vulnerable users, said the changes would come into effect within the next month.

    “These steps are only the beginning,” the company said.

    “We will continue learning and strengthening our approach, guided by experts, with the goal of making ChatGPT as helpful as possible. We look forward to sharing our progress over the coming 120 days.”

    OpenAI’s announcement comes a week after a California couple filed a lawsuit accusing the company of responsibility in the suicide of their 16-year-old son.

    Matt and Maria Raine allege in their suit that ChatGPT validated their son Adam’s “most harmful and self-destructive thoughts” and that his death was a “predictable result of deliberate design choices”.

    OpenAI, which previously expressed its condolences over the teen’s passing, did not explicitly mention the case in its announcement on parental controls.

    Jay Edelson, a lawyer representing the Raine family in their lawsuit, dismissed OpenAI’s planned changes as an attempt to “shift the debate”.

“They say that the product should just be more sensitive to people in crisis, be more ‘helpful’, show a bit more ‘empathy’, and the experts are going to figure that out,” Edelson said in a statement.

    “We understand, strategically, why they want that: OpenAI can’t respond to what actually happened to Adam. Because Adam’s case is not about ChatGPT failing to be ‘helpful’ – it is about a product that actively coached a teenager to suicide.”

    The use of AI models by people experiencing severe mental distress has been the focus of growing concern amid their widespread adoption as a substitute therapist or friend.

    In a study published in Psychiatric Services last month, researchers found that ChatGPT, Google’s Gemini and Anthropic’s Claude followed clinical best practice when answering high-risk questions about suicide, but were inconsistent when responding to queries posing “intermediate levels of risk”.

    “These findings suggest a need for further refinement to ensure that LLMs can be safely and effectively used for dispensing mental health information, especially in high-stakes scenarios involving suicidal ideation,” the authors said.

    Hamilton Morrin, a psychiatrist at King’s College London who has carried out research on AI-related psychosis, welcomed OpenAI’s decision to introduce parental controls, saying they could potentially reduce the risk of over-reliance or exposure to harmful content.

    “That said, parental controls should be seen as just one part of a wider set of safeguards rather than a solution in themselves. Broadly, I would say that the tech industry’s response to mental health risks has often been reactive rather than proactive,” Morrin told Al Jazeera.

    “There is progress, but companies could go further in collaborating with clinicians, researchers, and lived-experience groups to build systems with safety at their core from the outset, rather than relying on measures added after concerns are raised.”

If you or someone you know is at risk of suicide, support organisations may be able to help.
