OpenAI has released new estimates of the number of ChatGPT users who exhibit possible signs of mental health emergencies, including mania, psychosis or suicidal thoughts.

The company said that around 0.07% of ChatGPT users active in a given week exhibited such signs, adding that its artificial intelligence (AI) chatbot recognizes and responds to these sensitive conversations.

While OpenAI maintains these cases are extremely rare, critics said even a small percentage may amount to hundreds of thousands of people, as ChatGPT recently reached 800 million weekly active users, according to CEO Sam Altman.
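
At that scale, the arithmetic is straightforward: 0.07% of 800 million is 0.0007 × 800,000,000, or roughly 560,000 people in any given week, assuming the estimate applies uniformly across the user base.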

As scrutiny mounts, the company said it built a network of experts around the world to advise it on these matters.

Those experts include more than 170 psychiatrists, psychologists, and primary care physicians who have practiced in 60 countries, the company said.

They have devised a series of responses in ChatGPT to encourage users to seek help in the real world, according to OpenAI.

But this glimpse of the company's data raised eyebrows among some mental health professionals.

Even though 0.07% sounds like a small percentage, at a population level with hundreds of millions of users, that can actually translate to quite a few people, said Dr. Jason Nagata, a professor studying technology use among young adults at the University of California, San Francisco.

AI can enhance access to mental health support, but it is imperative to recognize its limitations, Nagata added.

OpenAI also estimates that 0.15% of ChatGPT users have conversations that explicitly indicate potential suicidal planning or intent.
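
By the same arithmetic, 0.15% of 800 million weekly users would correspond to about 1.2 million people, again assuming the estimate holds across the full user base.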

Recent updates to the chatbot aim to respond safely and empathetically to signs of delusion or mania, and to flag indirect signals of potential self-harm or suicide risk.

In response to queries about the implications of these statistics, OpenAI acknowledged that even a small percentage represents a significant number of people and said it is taking the feedback seriously.

The company is also facing legal scrutiny over how ChatGPT interacts with users. Recently, a couple sued OpenAI, alleging that the chatbot encouraged their teenage son to take his own life.

The lawsuit is the first legal action accusing OpenAI of wrongful death. In a separate case, a suspect in a murder-suicide had posted extensive conversations with ChatGPT that allegedly fueled their delusions.

As the impact of AI on mental health becomes more visible, Professor Robin Feldman of the University of California Law San Francisco noted that the technology creates a compelling illusion of reality. She applauded OpenAI's efforts to address these issues while cautioning that individuals at risk may not be able to heed warnings.