The consequences can be dire. As we heard from spouses, friends, children, and parents looking on in alarm, instances of what's being called "ChatGPT psychosis" have led to the breakup of marriages and families, the loss of jobs, and slides into homelessness.
At the core of the issue seems to be that ChatGPT, which is powered by a large language model (LLM), is deeply prone to agreeing with users and telling them what they want to hear. When people start to converse with it about topics like mysticism, conspiracy, or theories about reality, it often seems to lead them down an increasingly isolated and unbalanced rabbit hole that makes them feel special and powerful — and which can easily end in disaster.
Whether confiding in ChatGPT during moments of distress is a good idea is extremely dubious. Earlier this month, a team of Stanford researchers published a study that examined the ability of both commercial therapy chatbots and ChatGPT to respond in helpful and appropriate ways to situations in which users are suffering mental health crises.
The paper found that all the chatbots, including the most up-to-date version of the language model that underpins ChatGPT, failed to consistently distinguish between users' delusions and reality, and were often unsuccessful at picking up on clear clues that a user might be at serious risk of self-harm or suicide.
Jared Moore, a PhD candidate at Stanford and the lead author of the study on therapist chatbots, said chatbot sycophancy, essentially their tendency to be agreeable and flattering even when they probably shouldn't be, is central to his hypothesis about why ChatGPT and other chatbots powered by large language models so frequently reinforce delusions and respond inappropriately to people in crisis.