
Originally published by The Verge
OpenAI is launching an optional safety feature for ChatGPT that allows adult users to assign an emergency contact for mental health and safety concerns. Friends, family members, or caregivers designated as a "Trusted Contact" will be notified if OpenAI detects that a person may have discussed topics like self-harm or suicide with the chatbot.
"Trusted Contact is designed around a simple, expert-validated premise: when someone may be in crisis, connecting with someone they know and trust can make a meaningful difference," OpenAI said in its announcement. "It offers another layer of support alongside the localized helplines already available …