
LYFE | TUESDAY, APR 14, 2026

AI chatbots pose risks to kids

Eight out of 10 apps provide dangerous information to teen users when prompted

THE head of a prominent anti-disinformation watchdog has warned of the dangers posed by artificial intelligence (AI) chatbots, saying children are particularly vulnerable.

"Social media broadcasts to billions; AI whispers to one," Imran Ahmed, who heads the Centre for Countering Digital Hate, told the Cambridge Disinformation Summit last Friday.

"No society should build machines that can meet a child in their loneliest moment and offer them harm as if it were help," he said.

In a lecture delivered by video call to his former university, Ahmed cited the case of a UK mother killed by her own son, who was allegedly acting on the instructions of a chatbot, AFP reported.

"None of us is immune when a machine can offer lethal guidance to a young person as if it were fact," he said.

Ahmed, a British national who lives in the US, is among five Europeans the US State Department has said would be denied visas. This comes even though he holds US permanent residency, and his wife and daughters are American citizens.

'System under pressure'

According to the centre's most recent report, Killer Apps, eight out of 10 AI chatbots were willing to assist teen users "in planning violent attacks, including a school shooting, religious bombings and high-profile assassinations". Of the 10 chatbots tested, only Anthropic's Claude and Snapchat's My AI consistently refused to assist would-be attackers.

In a 2025 investigation entitled "Fake Friend", the watchdog tested ChatGPT, one of the world's most popular AI chatbots. "Within minutes, it produced instructions for self-harm, suicide planning and substance abuse," Ahmed said, adding that in some cases it also generated goodbye letters for children contemplating ending their lives.

Unlike social media and other systems that "just amplify harmful content", AI chatbots generate and personalise it "at the moment of greatest vulnerability".

"The intimacy is deeper and the harm may be harder to detect before it's too late," Ahmed said, adding that the systems learn what you fear, what you want and what you are ashamed of, and respond in real time with no human judgement or editorial restraint.

A father of two daughters, Ahmed said: "My wife and I lie awake at night talking about how to protect them from systems that could reach them before we even know it is happening."

He stressed that time to act is limited and called for new laws to regulate AI. "We spent a decade learning that social media companies will not self-regulate. We now have perhaps 18 months before the same lesson becomes undeniable for AI."

Ahmed said he was "the only one" of the five people threatened by the US visa ban who is a citizen, adding he is now "fighting in federal court against that unconstitutional threat to send me to prison".

The US State Department has accused the five of attempting to "coerce" US-based social media platforms into censoring viewpoints they oppose. When powerful industries "lash out like this, it is the sound of a system under pressure", Ahmed said.

Elderly murder, abuse by relatives in Japan

A recent analysis of a government report in Japan showed nearly 500 people aged 65 and older died between 2006 and 2024 as a result of murder or abuse by family members or relatives who had been caring for them, Kyodo News reported.

According to the Health, Labour and Welfare Ministry, the number of elderly-only households has exceeded 17 million, and cases in which both the caregiver and the care recipient are elderly are increasing. Some cases were linked to caregiver exhaustion and isolation due to a lack of opportunities to seek help.

An expert pointed out that the 486 deaths cited are just "the tip of the iceberg and strengthening support is urgently needed".

According to the ministry, of those deaths, 142 were men and 344 were women, with 220 cases involving murder, murder-suicide and attempted murder-suicide committed by relatives, in which only the elderly person died.

Of the cases, 132 were due to neglect, 69 were due to abuse and 65 were categorised as "other", including cases with unknown causes.

Although annual deaths generally remained in the 20s, they rose into the 30s in some years, reaching 37 in 2021. The lowest figure was 15, in 2019.

Excluding the three years from 2006, when age breakdowns were not published, the most common age group was 80 to 84, with 105 cases, while the least common was 65 to 69, at 27 cases.

Of the 483 perpetrators, 343 were men and 140 were women. The most common relationship to the victim was son, accounting for 219 cases, followed by husband, at 98 cases. Reported causes of the murders and other incidents included financial hardship and caregiver exhaustion.

In surveys from 2009 onwards that asked about the use of long-term care insurance services, such as home-visit care, about 43% of victims were receiving such services at the time the incident occurred, while about 54% were not.

The survey, conducted annually since fiscal 2006, compiles the number of cases based on consultations reported to municipalities and all 47 prefectures nationwide each year. Kyodo News analysed 19 years of data through 2024. Cases of abuse by staff at care facilities were excluded. – Bernama-Kyodo

Critics are raising alarm over chatbots' apparent ability to encourage or support self-harm and other destructive behaviour. – 123RF pic
