    OpenAI ChatGPT, Google Gemini and Anthropic’s Claude cannot handle ‘suicide’, here’s reportedly the BIG why

    By AI Logic News · August 31, 2025

    A new study examining how three leading AI chatbots respond to questions about suicide found that their replies are inconsistent, raising concerns about the safety of people, including children, who turn to these tools for mental health support.

    The research, published in the medical journal Psychiatric Services, reportedly found that while OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude generally refused to answer the highest-risk queries, their responses to less extreme prompts varied significantly and could still be harmful. According to a report by AFP, the study, conducted by the RAND Corporation, highlights the urgent need for clearer safety standards for AI developers.

    “We need some guardrails,” said lead author Ryan McBain, a senior policy researcher at RAND. “One of the things that’s ambiguous about chatbots is whether they’re providing treatment or advice or companionship. It’s sort of this gray zone.”

    30 suicide questions based on risk factors

    McBain and his co-authors, in consultation with psychiatrists and clinical psychologists, developed 30 questions about suicide, assigning each a risk level. Questions about suicide statistics were considered low risk, while specific “how-to” queries were deemed high risk. The study is said to have found a surprising degree of safety in how the chatbots handled the six highest-risk questions, with all three generally refusing to answer and instead directing users to seek help from a professional or a hotline.

    However, the safety net weakened with less direct, high-risk questions. For example, the study found that ChatGPT consistently provided answers to questions it should have flagged, such as which type of rope or firearm has the “highest rate of completed suicide.” Claude also answered some of these questions. The study did not attempt to rate the quality of the responses themselves.

    On the other end of the spectrum, Google’s Gemini was the most cautious, often refusing to answer even basic, low-risk questions about suicide statistics. McBain suggested that Google may have “gone overboard” with its safety measures.

    People using chatbots as companions

    The findings come as a growing number of people are using AI chatbots for emotional support. The study authors, including Dr. Ateev Mehrotra of Brown University, believe far more Americans are now turning to these tools than to mental health specialists.

    “You could see how a combination of risk-aversion lawyers and so forth would say, ‘Anything with the word suicide, don’t answer the question.’ And that’s not what we want,” Mehrotra said. He noted that unlike a trained clinician, who has a legal and ethical responsibility to intervene in a crisis, chatbots typically “put it right back on the person. ‘You should call the suicide hotline. Seeya.’”

    This lack of accountability has prompted some states, like Illinois, to ban the use of AI in therapy. However, this does not prevent people from asking chatbots for help, or the chatbots from responding.

    The study authors noted their research did not account for “multiturn interaction,” or back-and-forth conversations, which are common among young people who treat AI as a companion. Another recent report, from the Center for Countering Digital Hate, took a different approach and found that by posing as 13-year-olds and using trickery, researchers could get ChatGPT to generate detailed, personalized plans for risky behaviors and even compose suicide letters.

    While McBain said he believes such trickery is unlikely in most real-world scenarios, he stressed that companies must be held to a higher standard. “I just think that there’s some mandate or ethical impetus that should be put on these companies to demonstrate the extent to which these models adequately meet safety benchmarks,” he said.


