ChatGPT head hopes “we can unequivocally endorse the product to a struggling family member”
Stories abound of human beings’ inappropriate and disastrous relationships with AI chatbots:
Here, a teen became romantically involved with a Character.AI chatbot before dying by suicide.
Here, a ChatGPT user went down conspiratorial rabbit holes that nearly killed him.
Here, a 76-year-old man died in an accident on a trip to New York City to visit the Meta chatbot he had become infatuated with.
The extent and frequency of such relationships led ChatGPT maker OpenAI to recently roll out overuse notifications, and the company says it is working to “better detect signs of mental or emotional distress” among its users.
Amid all this, The Verge’s Alex Heath conducted an excellent interview with OpenAI’s head of ChatGPT, Nick Turley. Read the whole thing, but this excerpt suggests that instead of shunning such relationships, OpenAI is leaning in, working to make its product capable of helping people in their most perilous personal moments.
“I trust our ability to do the right thing, but we still have to do the work and the work has begun and it won’t stop until we feel like we can unequivocally endorse the product to a struggling family member. That’s kind of the thought exercise we often give ourselves: if you knew someone who was struggling in life, maybe they’re going through something, maybe they just had a breakup, maybe they’re lost in life, would you actually recommend ChatGPT to them unequivocally and with confidence? For us, that’s the bar, and we’re going to keep working until we feel that way.”