Google Is Changing the Way Gemini Handles Mental Health Crises for Its Users

When companies like OpenAI and Google began commercializing generative AI models, I doubt they foresaw how deeply attached people would become to this technology, or the impact it would have on their collective mental health. Some ChatGPT users were genuinely upset when OpenAI shut down its GPT-4o model, as they had come to treat that particular model like a companion. Others have taken darker paths with their chatbots, leading to lawsuits against AI companies whose technology is alleged to have encouraged and reinforced suicidal thoughts. This situation is putting enormous pressure on these companies, and rightly so: generative AI now has a huge impact, and the developers of this technology bear a great responsibility.
It’s against this backdrop that Google’s latest Gemini updates arrive. In a press release published Tuesday morning, the company sidestepped flashy new features or capabilities for its flagship AI; instead, its latest updates focus on mental health and how Gemini affects the emotions and moods of the people using it. Specifically, Google outlined three key changes intended to improve how Gemini performs in sensitive situations.
How Gemini will support users in crisis situations
Google says Gemini has been updated to “make it easier for those who need support to access it.” When the AI detects that a user may need mental health information during a chat, Gemini will display a new “Help Available” module that can direct them to information and services. Google says it worked with clinical experts to develop this module.
If, however, Gemini determines a user is at risk of self-harm or suicide, the system will offer a one-touch interface that immediately connects them to a crisis hotline. Users can call or message the hotline, or visit its website, directly from the Gemini chat. Even if the conversation drifts to other topics, Gemini will keep these resources available should users need them.
Google announced it will commit $30 million over the next three years to fund global crisis hotlines. The company is also expanding its partnership with ReflexAI, including a $4 million funding commitment.
Gemini is changing how it responds to “mental health emergencies”
Google states that its clinical, engineering, and safety teams are currently focused on improving Gemini’s ability to respond to these complex situations. Specifically, the work centers on three areas:
- Safety and human connection: In times of crisis, Google aims to connect users with real people, not AI-powered chatbots.
- Improved responses: AI responses should encourage users to seek help rather than validate harmful behavior or self-harm.
- Avoiding false belief confirmation: Google claims to have trained Gemini to avoid reinforcing false beliefs and to “gently” distinguish between subjective and objective reality. This is particularly important, as earlier generative AI models (GPT-4o in particular) were notoriously prone to confirming users’ delusional thinking.
What Google says it’s doing with Gemini to protect young users
The most important discussion here concerns minors and how they interact with AI. For its part, Google touts the measures it has built into Gemini to protect young users, including:
- “Personality protections” that reportedly prevent Gemini from acting as a companion when interacting with minors.
- Safeguards to keep younger users from engaging so deeply with Gemini that they become emotionally dependent on it.
- Measures to ensure Gemini avoids encouraging bullying or harassment.
While user safety matters for everyone, it’s especially crucial for young people, who are quite literally growing up with this technology. These statements from Google are encouraging, but I still have plenty of concerns, not to mention skepticism. Meta’s internal policies on how its models interact with minors have been appalling, so I’m not necessarily ready to believe that large tech companies care about the well-being of young people. But I certainly welcome any work that helps keep young users from becoming addicted to AI or having their dangerous or harmful thoughts reinforced by it.