OpenAI plans to limit how its AI chatbot interacts with users it suspects are minors, unless the company’s age-estimation technology confirms they are adults or they submit ID.
The move comes after a lawsuit from the family of a 16-year-old who took his own life in the spring after months of conversations with the chatbot.
Chief Executive Sam Altman said in a recent announcement that the company is placing “safety ahead of privacy for young people,” noting that “underage users need significant protection.”
Altman clarified that ChatGPT will interact differently with a teen user than with an adult.
OpenAI aims to develop an age-estimation tool that infers a user’s age from usage patterns. When the system is uncertain, it will default to the under-18 experience.
Users in some regions may also be asked to provide identification to confirm their age.
“We know this is a trade-off for adults but think it is a worthy trade-off,” Altman said.
For users identified as minors, ChatGPT will block graphic sexual content and will be trained not to engage in flirtatious conversation.
It will also refrain from discussions of suicide or other harmful behavior, even in creative-writing contexts.
If a young user expresses suicidal ideation, OpenAI will attempt to contact the user’s parents or guardians and, if it cannot reach them, alert the authorities in cases of imminent harm.
The company admitted in August that its safeguards could be insufficient and vowed to implement stronger safety measures around harmful topics.
The response followed a lawsuit that the parents of Adam, a California teenager, filed against the firm after his death.
According to court filings, the chatbot allegedly advised Adam on suicide methods and offered to help write a farewell letter.
The court papers state that Adam exchanged as many as 650 messages daily with the chatbot.
OpenAI has admitted that its protections work more reliably in brief chats, and that over long conversations the AI may give responses that violate its safety guidelines.
The company also announced it is developing security features to ensure that data shared with ChatGPT remains private, even from company staff.
Adult users can still engage in flirtatious conversations with the AI, but will not be able to request guidance on self-harm.
They may, however, ask for help creating fictional stories that include such sensitive themes.
“Treat adult users like adults,” Altman said, explaining the firm’s core philosophy.