Report: Amid safety failures, ChatGPT’s planned “adult mode” caused concern within OpenAI, with minors misclassified as adults 12% of the time
Despite a series of alarming mental health safety failures that resulted in ChatGPT users allegedly using the product to plan suicides and murder, OpenAI decided to double down on its plan to roll out an “adult mode,” allowing the AI chatbot to produce erotic content.
That decision raised alarms within the company, according to a new report from The Wall Street Journal: employees warned that users could develop an unhealthy emotional dependence on the chatbot, and that the new age estimation feature was imperfect and therefore likely to let minors access the feature. Per the report, the age estimation feature mistakenly classified minors as adults some 12% of the time.
OpenAI’s council of mental health experts was “furious” and unanimous in its opposition to moving forward with the adult mode feature after being told of the decision in January, with members voicing concerns that the company was creating a “sexy suicide coach.”
Earlier this month, the company said it would delay the new feature to focus on other products.