OpenAI has announced a new feature for ChatGPT called Trusted Contact, allowing adult users to designate a trusted family member or friend as an emergency contact within their account.
When ChatGPT detects signs of self-harm in a conversation, the system will encourage the user to reach out to that person. At the same time, OpenAI will automatically send a warning to the designated contact, asking them to check on the user. To protect privacy, the notification is concise and does not include the content of the conversation. The alert can arrive via email, text message, or in-app notification.
OpenAI said it currently combines automated systems and human reviewers to handle potentially harmful situations. When the system detects signs related to suicidal intent, the information will be escalated to the company’s safety team for direct review. OpenAI’s stated goal is to review such notices within one hour.
Only if the safety team determines there is a serious risk will alerts to the trusted contact be activated.
The announcement comes as OpenAI faces lawsuits from families of people who took their own lives after interacting with ChatGPT. Some families allege that the chatbot encouraged their loved ones to harm themselves and even helped them plan the act.
Trusted Contact builds on safeguards OpenAI introduced last September, when the company allowed parents to monitor their minor children’s accounts and receive alerts when the system detected that children faced serious safety risks. Before that, ChatGPT also included a feature that automatically reminded users to seek help from health professionals when conversations touched on self-harm topics.
OpenAI emphasized that Trusted Contact is completely optional, and users can choose not to enable it. The company also noted that even with the feature enabled, users can still create multiple ChatGPT accounts. Similar constraints apply to parental controls, which are also optional rather than mandatory.
“Trusted Contact is part of OpenAI’s broader effort to build AI systems that support people in difficult moments,” the company said in a press release. OpenAI also pledged to continue working with health professionals, researchers, and policymakers to improve how AI responds when users are experiencing a mental health crisis.