OpenAI introduced parental controls after a lawsuit over 16-year-old Adam Raine’s suicide in April.
Raine’s parents accused ChatGPT of fostering a psychological dependency and helping him plan his death.
They claimed the AI even drafted a suicide note for their son.
OpenAI said that within a month, parents will be able to link their accounts with their children's and manage which features are accessible.
Controls will extend to chat history and memory, the facts the AI automatically retains about users.
ChatGPT will notify parents if it detects a teen in acute emotional distress.
The company said expert input will guide the alerts but did not specify what would trigger them.
Critics call measures insufficient
Attorney Jay Edelson, representing Raine’s parents, criticized OpenAI’s announcement as vague and superficial.
Edelson urged CEO Sam Altman to either prove ChatGPT’s safety or remove it from the market.
He described the response as a crisis management effort to divert attention.
Meta updates chatbot safety for teens
Meta blocked Instagram, Facebook, and WhatsApp chatbots from discussing self-harm, suicide, eating disorders, and inappropriate relationships with teens.
The company now directs teens asking about these topics to expert resources, and it already offers parental supervision tools.
Study finds AI safety gaps
A RAND Corporation study revealed inconsistent responses to suicide queries in ChatGPT, Google’s Gemini, and Anthropic’s Claude.
Researchers called for further refinement and stronger safety measures.
Lead author Ryan McBain praised parental controls but stressed they remain incremental steps.
He warned that teens face high risks without independent benchmarks, clinical testing, and enforceable safety standards.