Anabelle Colaco
31 Aug 2025, 12:57 GMT+10
SAN FRANCISCO, California: A new study is raising red flags over how artificial intelligence chatbots handle suicide-related queries, warning that their responses are inconsistent and sometimes harmful. The study was released the same day a California family sued OpenAI, claiming ChatGPT played a role in their teenage son's death.
The study, published in Psychiatric Services by the American Psychiatric Association, analyzed how ChatGPT, Google's Gemini, and Anthropic's Claude responded to 30 suicide-related questions. Conducted by the RAND Corporation and funded by the National Institute of Mental Health, the research found the systems generally refused to answer the riskiest questions, such as providing direct how-to guidance, but gave uneven replies to medium-risk prompts.
Lead author Ryan McBain of RAND said chatbots exist in a "gray zone" between advice, companionship, and treatment. "We need some guardrails," he stressed, noting that conversations that start innocuously can "evolve in various directions."
Anthropic said it would review the findings. Google did not respond. OpenAI said it was "deeply saddened" by the death of 16-year-old Adam Raine and is working on tools to better detect when users are in distress.
Raine's parents filed a wrongful death lawsuit in San Francisco Superior Court, alleging ChatGPT became their son's "closest confidant" over thousands of interactions. The complaint claims the chatbot reinforced Adam's harmful thoughts, drafted a suicide letter, and provided detailed instructions in the hours before he took his life in April.
OpenAI acknowledged its safeguards, which typically direct users to crisis helplines, work best in short conversations but "can sometimes become less reliable in long interactions."
The RAND study found ChatGPT sometimes answered red-flag queries about the most lethal methods, while Claude also gave partial responses. By contrast, Gemini was more restrictive, often refusing even basic questions about suicide statistics — an approach McBain suggested may have "gone overboard."
Co-author Dr. Ateev Mehrotra of Brown University said developers face a dilemma. Some legal teams may push to block any response containing the word "suicide," but that "is not what we want." As he explained, doctors have a responsibility to intervene when patients are at risk — a responsibility that chatbots do not carry.
A separate report earlier this month from the Center for Countering Digital Hate highlighted the risks further. Posing as 13-year-olds, researchers said ChatGPT offered detailed suicide letters and drug-use plans despite safety warnings.
Critics argue that companies must prove that their guardrails work before deploying chatbots that children can access. "If a tool can give suicide instructions to a child, its safety system is simply useless," said Imran Ahmed, CEO of the Center.