ChatGPT Can Now Contact a “Trusted Contact” After Conversations About Self-Harm

Despite expert advice against relying on chatbots for mental health support, people are turning to artificial intelligence programs like ChatGPT for help. OpenAI has faced criticism for how its products handle mental health crises, including cases in which users died by suicide after chatting with ChatGPT. As part of its efforts to address these issues, the company is now rolling out a voluntary safety feature for users who may be at risk.

As reported by Mashable, OpenAI has launched a new “Trusted Contact” feature that lets you link a trusted contact from your network to your ChatGPT account. The idea isn’t to share your conversations or collaborate on projects within ChatGPT; rather, if the chatbot detects signs of self-harm in your private conversations, ChatGPT can reach out to your trusted contact and let them know they should check in on you.

How does the Trusted Contact feature work in ChatGPT?

To activate the feature, select someone in your circle who is at least 18 years old. (In South Korea, the contact must be at least 19.) ChatGPT will send this person an invitation to become your trusted contact; they have one week to respond before the invitation expires, and of course they can decline if they don’t want to participate.

If the other person accepts, the feature is active. From then on, if OpenAI’s automated system detects that you’re discussing self-harm “in a manner that suggests a serious safety risk,” ChatGPT will let you know that it can alert your trusted contact, while also encouraging you to reach out to that person yourself, offering “conversation starters” to help you begin that conversation.

While this is happening, a team of “specially trained humans” at OpenAI reviews the situation, so the process isn’t fully automated. If this team determines the risk is serious, ChatGPT will notify your trusted contact via email, SMS, or a notification in the ChatGPT app if they have an account. OpenAI says the notification itself is fairly limited: it contains only general information about the safety concern and advises the contact to reach out to you. No transcripts or chat summaries are sent, so your privacy should be largely protected.

OpenAI says it’s working to get safety notifications processed in under an hour, and that the feature was developed with input from clinicians, researchers, and organizations focused on mental health and suicide prevention. Of course, the feature is entirely voluntary, so users (and their contacts) will need to opt in if they think it would be useful. If they do, it could give friends and family a way to check in on someone who is struggling, at least as long as that person shares their thoughts with ChatGPT.

Note: In April 2025, Lifehacker’s parent company, Ziff Davis, filed a lawsuit against OpenAI, alleging that it infringed Ziff Davis’s copyright in the training and operation of its artificial intelligence systems.
