LYFE MONDAY | NOV 3, 2025
Can AI help with difficult decisions?
Users turn to chatbots for life advice

WHEN Katie Moran decided to break up with her boyfriend of six months in April this year, she credited an unlikely source of support: ChatGPT. “It made me reflect and have a conversation with myself that I was avoiding,” the 33-year-old based in New Jersey said of the chatbot, which she affectionately refers to as “Chat”.

Though she confided in her friends and family, she said it was ChatGPT that helped her realise her relationship was the source of her anxiety. After going back and forth with the chatbot for a week, she decided to end it.

Most people are accustomed to turning to friends, family or a therapist for advice on major life decisions such as breakups, career changes or moving to a different country. But now, some people are turning to artificial intelligence (AI) for on-demand, judgment-free gut checks. While some, such as Moran, credit AI with giving them the confidence they need to make difficult choices, experts advise caution, noting that AI’s sycophantic nature can make for misleading results.

For Julie Neis, it was burnout that ultimately led her to confide in ChatGPT. She had been working in San Francisco’s tech scene for three years when she said she became overcome with anxiety, depression and chronic fatigue. “I finally got to the point where I was like, I have to do something and change something. I was a shell of a human,” she said of that period late last year.

So she resolved to move – to France, specifically – and turned to ChatGPT for guidance. After detailing her criteria (a quiet town, with a decently sized expat community) and her red lines (no busy cities such as Paris), the chatbot issued its recommendation: Uzès, a small town in the south of France, population 8,300.

Neis moved there in April and said that outsourcing the decision-making process to ChatGPT helped her feel less overwhelmed by the whole process. Still, she said, it was not perfect. Although Uzès does have a sizable population of expats from the US and the UK, what ChatGPT failed to mention was that most of those people are retirees. Neis is 44.

About half of the messages entered into ChatGPT fall under the category of “asking”, which it classifies as “seeking information or clarification to inform a decision”, according to a recent study by OpenAI, the developer of ChatGPT. OpenAI CEO Sam Altman noted that this trend is most pronounced among younger users. “They don’t really make life decisions without asking ChatGPT what they should do,” Altman said at a talk at Sequoia Capital’s AI Ascent event in May, referring to users in their 20s and 30s. “It has the full context on every person in their life and what they’ve talked about.” (OpenAI did not respond to a request for comment.)

But it is not just young people who are turning to AI in this way. Mike Brown of Kansas City, Missouri, was in his early 50s when, in 2023, he decided to confide in a chatbot for advice on what to do about his marriage of 36 years. Although his friends, priest and marriage counselor all advised that he file for divorce, it was not until he had a 30-minute conversation with Pi.ai, an interactive chatbot launched that same year, that he said he felt sure of his decision. “I need to play these thoughts through and need affirmation that this really is the direction,” Brown said, noting that he trusted the chatbot to give him a “credible” view on the situation.

Leonard Boussioux, a professor at the University of Washington’s Foster School of Business who researches how human-AI collaboration can improve decision-making, said he understands why people are turning to AI in this way. It is available 24/7, can provide answers much quicker than most humans and can be seen as more objective too. “AI tends to be more diplomatic,” Boussioux said, whereas “humans tend to be very opinionated, especially with personal advice.”

However, Boussioux warned that because most AI models “tend to be sycophantic”, they may not be as concerned with giving the best advice as they are with pleasing the user. “They’ve been trained to be pleasing the user because if you please the user, then the user comes back,” he added.

This was the case with Moran, who said she was surprised by how ChatGPT spoke like a friend, telling her things such as, “You deserve someone who reassures you – not someone whose silence makes you spiral.”

None of those who spoke with Reuters said they regret relying on AI for decision-making. For Brown, it acted as a “passionate, neutral observer”. For Moran, it was akin to a “best friend”. Neis, meanwhile, said it helped her realise what she wanted.

Still, Boussioux offered a note of caution, warning that offloading our decision-making to AI runs the risk of dulling our own problem-solving skills. “I would say take a step back and reflect on the beauty of having to make decisions ourselves and to make sure that we are also doing the thinking,” he said. – Reuters

Some people credit AI with giving them the confidence they need to make difficult choices. – REUTERSPIC

One million ChatGPT users talk about suicide: OpenAI

DATA from ChatGPT-maker OpenAI suggest that more than a million of the people using its generative artificial intelligence (AI) chatbot have shown interest in suicide.

In a recent blog post, the AI company estimated that approximately 0.15% of users have “conversations that include explicit indicators of potential suicidal planning or intent”. With OpenAI reporting that more than 800 million people use ChatGPT every week, this translates to about 1.2 million people. The company also estimates that approximately 0.07% of active weekly users show possible signs of mental health emergencies related to psychosis or mania – meaning slightly fewer than 600,000 people.

The issue came to the fore after California teenager Adam Raine died by suicide earlier this year. His parents filed a lawsuit claiming ChatGPT provided him with specific advice on how to kill himself.

OpenAI has since increased parental controls for ChatGPT and introduced other guardrails, including expanded access to crisis hotlines, automatic rerouting of sensitive conversations to safer models and gentle reminders for users to take breaks during extended sessions. OpenAI said it has also updated its ChatGPT chatbot to better recognise and respond to users experiencing mental health emergencies, and is working with more than 170 mental health professionals to significantly reduce problematic responses. – AFP

OpenAI says it has updated its ChatGPT chatbot to better recognise and respond to users experiencing mental health emergencies. – 123RFPIC