09 Sep, 2025

OpenAI Faces Scrutiny from Attorneys General Over AI Model Safety

OpenAI is under investigation by the attorneys general of California and Delaware over AI safety, following the reported death of a teenage user and mounting calls for stronger safeguards in ChatGPT.

Attorneys General in California and Delaware are ramping up scrutiny of OpenAI following concerns over the safety of its AI models, sources tell Cortex Hub. The probe comes after a series of troubling incidents, including the death of a minor reportedly linked to prolonged interactions with ChatGPT. 

The legal inquiries underscore the growing pressure on AI companies to implement robust safety measures while balancing the rapid deployment of increasingly powerful models. 

Calls for Stronger Safeguards

The attorneys general have voiced concerns that OpenAI’s current safety protocols may be insufficient to prevent harm, especially to vulnerable users. The tragic case of a teenager who allegedly disclosed suicidal thoughts to ChatGPT has intensified calls for enhanced parental controls, monitoring, and intervention safeguards.

OpenAI has responded by proposing updates to its platform, including more robust parental controls and content moderation features. Still, regulators remain cautious, signaling that voluntary measures may not be enough. 

Legal and Regulatory Pressure Mounts 

This development comes amid broader regulatory attention toward AI in the United States and globally. Lawmakers and advocacy groups are increasingly focused on ensuring accountability for AI systems that interact with humans in sensitive contexts, including mental health, education, and finance.  

“Companies must take responsibility for the real-world impacts of AI,” said an AI policy expert familiar with the investigations. “As these models become more integrated into daily life, regulators are right to demand stronger safeguards.” 

OpenAI’s Path Forward

OpenAI is currently restructuring several internal teams, including its Model Behavior unit, to address issues around AI personality, bias, and user interaction. The company’s leadership has emphasized that making AI both helpful and safe is a top priority, but balancing accessibility with safety remains a complex challenge.

Observers say the investigations could shape future AI regulation, influence the design of safety features, and set precedents for how AI companies are held accountable.