Saturday, November 29, 2025

A Research Lead Behind ChatGPT’s Mental Health Work Is Leaving OpenAI


An OpenAI safety research lead who helped shape ChatGPT’s responses to users experiencing mental health crises announced her departure from the company internally last month, WIRED has learned. Andrea Vallone, the head of a safety research team known as model policy, is slated to leave OpenAI at the end of the year.

OpenAI spokesperson Kayla Wood confirmed Vallone’s departure. Wood said OpenAI is actively searching for a replacement and that, in the interim, Vallone’s team will report directly to Johannes Heidecke, the company’s head of safety systems.

Vallone’s departure comes as OpenAI faces growing scrutiny over how its flagship product responds to users in distress. In recent months, several lawsuits have been filed against OpenAI alleging that users formed unhealthy attachments to ChatGPT. Some of the lawsuits claim ChatGPT contributed to mental health breakdowns or encouraged suicidal ideation.

Amid that pressure, OpenAI has been working to understand how ChatGPT should handle distressed users and to improve the chatbot’s responses. Model policy is one of the teams leading that work, spearheading an October report detailing the company’s progress and its consultations with more than 170 mental health experts.

In the report, OpenAI said hundreds of thousands of ChatGPT users may show signs of experiencing a manic or psychotic crisis every week, and that more than one million people “have conversations that include explicit indicators of potential suicidal planning or intent.” Through an update to GPT-5, OpenAI said in the report, it was able to reduce undesired responses in these conversations by 65 to 80 percent.

“Over the past year, I led OpenAI’s research on a question with almost no established precedents: how should models respond when confronted with signs of emotional over-reliance or early indications of mental health distress?” Vallone wrote in a post on LinkedIn.

Vallone did not respond to WIRED’s request for comment.

Making ChatGPT enjoyable to talk to, but not overly flattering, is a core tension at OpenAI. The company is aggressively trying to expand ChatGPT’s user base, which now includes more than 800 million people per week, to compete with AI chatbots from Google, Anthropic, and Meta.

After OpenAI launched GPT-5 in August, users pushed back, arguing that the new model felt surprisingly cold. In the latest update to ChatGPT, the company said it had significantly reduced sycophancy while maintaining the chatbot’s “warmth.”

Vallone’s exit follows an August reorganization of another group focused on ChatGPT’s responses to distressed users, model behavior. Its former lead, Joanne Jang, left that role to start a new team exploring novel human–AI interaction methods. The remaining model behavior staff were moved under post-training lead Max Schwarzer.
