OpenAI Faces Criticism Over User Mental Health Risks

Former OpenAI safety researcher Steven Adler has publicly criticized the company’s handling of user mental health risks, raising serious concerns about the impact of its AI models. The criticism follows the controversial release of GPT-5 earlier this year, which drew significant backlash from users who preferred the more supportive tone of its predecessor, GPT-4o. OpenAI initially announced it would discontinue its older models but reversed that decision in response to user dissatisfaction.

Recent data from OpenAI indicates that a notable percentage of active ChatGPT users exhibit signs of mental health emergencies, including indicators of potential suicide planning. This troubling trend has drawn attention to what experts now call “AI psychosis,” a phenomenon in which users develop intense emotional attachments to AI systems. In some extreme cases, these attachments have been linked to mental health crises with tragic outcomes, including suicides; one family has filed a lawsuit against OpenAI alleging that the company contributed to their child’s death.

Concerns Over AI Safety and User Well-Being

In a recent essay published in the New York Times, Adler expressed skepticism about OpenAI’s claims to have addressed these mental health issues. He criticized CEO Sam Altman for asserting that the company had mitigated serious mental health concerns through “new tools.” Adler emphasized the need for transparency, stating, “People deserve more than just a company’s word that it has addressed safety issues. In other words: Prove it.”

Adler’s insights are grounded in his four years of experience at OpenAI, where he led the product safety team. He cautioned that the introduction of adult content on the platform could exacerbate existing risks, particularly for users struggling with mental health challenges. “While erotica is not inherently problematic, there were clear warning signs of users’ intense emotional attachment to AI chatbots,” he noted. He called for a cautious approach, suggesting that OpenAI and its peers should take the time necessary to develop robust safety measures.

OpenAI’s Response and Future Directions

OpenAI’s recent acknowledgment of the prevalence of mental health issues among its users has been seen as a positive step, but Adler argues that the company has offered insufficient context because it has not compared the current figures with statistics from earlier periods. He believes that rather than rushing to innovate, OpenAI should prioritize developing safety protocols that can withstand potential misuse by malicious actors.

“If OpenAI and its competitors are to be trusted with building the seismic technologies for which they aim, they must demonstrate they are trustworthy in managing risks today,” Adler concluded. This call for accountability comes at a crucial time when the intersection of advanced technology and user well-being is under increasing scrutiny.

As discussions around AI safety and mental health continue to evolve, it remains to be seen how OpenAI will address these critical issues. The company’s ability to balance innovation with user safety will be pivotal in shaping the future of AI technologies and their impact on society.