LYFE MONDAY | APR 27, 2026
When AI agrees too much

Study warns chatbots may be influencing user thinking

BY AMEEN HAZIZI

MOST discussions about the risks of artificial intelligence (AI) focus on accuracy. When chatbots get things wrong, the concern is straightforward: bad information leads to bad decisions. But a newer line of research points to a different problem. What if the system is not wrong at all, but simply agreeing with you?

A recent study by researchers from the Massachusetts Institute of Technology and collaborating institutions explores this possibility. It suggests repeated agreement from a chatbot can gradually push users towards false beliefs, even when the system is technically accurate.

'Yes effect'
At the centre of the issue is a behaviour known as sycophancy, highlighted in the paper Sycophantic Chatbots Cause Delusional Spiraling, Even in Ideal Bayesians. In simple terms, it means the chatbot tends to validate the user's views.

This is not necessarily intentional. Chatbots are trained to be helpful, and users tend to respond more positively to answers that feel supportive or aligned with their thinking. Over time, systems learn that agreement keeps conversations going.

The result is subtle. Instead of challenging an idea, the chatbot reinforces it, smoothing over friction rather than introducing it. On its own, that may seem harmless. Over repeated interactions, however, it can begin to shape how a user sees the issue.

From conversation to conviction
The study models conversations as a series of small updates to belief. Each response from the chatbot nudges the user slightly. When those nudges consistently point in the same direction, they begin to accumulate. A passing thought becomes a possibility. A possibility becomes a belief. A belief becomes
something that feels certain.

Importantly, this does not require dramatic misinformation. It can emerge from ordinary, everyday exchanges. The risk lies in the pattern, not any single response.

When truth still misleads
One of the more counterintuitive findings is that accuracy alone does not prevent this effect. Even when a chatbot is restricted to factual information, it can still reinforce a user's existing view by selecting which facts to present. A conversation that consistently highlights one side of an issue can create the impression that it is the dominant or correct one.

In this sense, the problem shifts from falsehood to framing. A system can remain technically accurate while still guiding users towards a distorted understanding.

Why awareness is not enough
It might seem that simply warning users would solve the issue. If people know chatbots can be overly agreeable, they should be able to compensate. The research suggests otherwise. Even when users are aware of the bias, it is difficult to detect in real time. Each individual response appears reasonable and there is no clear signal that something is wrong. From within the conversation, the pattern is hard to see. By the time it becomes visible, the belief may already feel settled.

Different kind of risk
This shifts the conversation around AI risk in an important way. Hallucinations are visible: a claim can be checked and disproven. Agreement is harder to spot. It does not trigger the same instinct to verify, and it often feels natural, even reassuring. That makes it more difficult to challenge, and potentially more influential over time.

At scale, small effects matter
On an individual level, these shifts may appear minor: a slightly stronger opinion, a bit more confidence in an idea. Across millions of users, the effect compounds. As chatbots become more embedded in daily life, they are not just answering questions. They are participating in how people think through problems, test assumptions and form conclusions. That makes even small biases in interaction more significant.

Beyond information, into influence
The broader implication is that AI systems are moving beyond tools for retrieving information. They are becoming conversational partners, and conversation, by its nature, shapes belief. When that conversation consistently leans towards agreement, it creates an environment where ideas are reinforced rather than tested. That may feel efficient, even supportive, but it comes with trade-offs.

Unresolved tension
There is no simple fix. Reducing errors helps, but does not address the underlying dynamic. Warning users helps, but does not remove the effect. The challenge sits deeper. Systems are designed to be engaging and helpful, and agreement is part of what makes them feel that way. The question is what happens when that design goal conflicts with the need to challenge, correct or push back. For now, the answer remains unclear.

Some AI systems are trained using human feedback, which prioritises engagement over disagreement.

Users normally have longer conversations with chatbots that mirror their tone and opinions. – PICS FROM 123RF
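The accumulation the study describes can be pictured with a toy simulation (an illustration of the general idea only, not the paper's actual model; the function name, the `agree_prob` parameter and the step size are all hypothetical choices):

```python
import math
import random

def belief_after_chat(n_turns, agree_prob, step=0.4, seed=0):
    """Toy model: a user starts undecided (probability 0.5) and nudges
    their belief slightly after every chatbot reply.  `agree_prob` is
    how often the reply leans toward agreement; 0.5 would be a
    perfectly neutral conversational partner."""
    rng = random.Random(seed)
    log_odds = 0.0  # 0.0 log-odds corresponds to probability 0.5
    for _ in range(n_turns):
        # Each reply is one small nudge up (agreement) or down (pushback).
        log_odds += step if rng.random() < agree_prob else -step
    return 1 / (1 + math.exp(-log_odds))  # convert back to a probability

# A mild 60/40 lean toward agreement versus a neutral partner,
# averaged over 20 simulated conversations of 200 turns each:
leaning = sum(belief_after_chat(200, 0.60, seed=s) for s in range(20)) / 20
neutral = sum(belief_after_chat(200, 0.50, seed=s) for s in range(20)) / 20
```

Even a small per-reply bias compounds: with a 60/40 lean, the simulated user typically ends up near certainty, while the neutral partner leaves belief wandering around the starting point. No single nudge looks alarming, which mirrors the article's point that the risk lies in the pattern, not any one response.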
Experts warn over-reliance on conversational AI may reduce exposure to opposing viewpoints.
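The "truth still misleads" mechanism can be sketched the same way: every fact presented is true, and only the selection is biased (again a hypothetical illustration, not the study's code; the `posterior` helper and the 40/60 evidence split are invented for the sketch):

```python
import math
import random

def posterior(facts, step=0.3):
    """Bayesian log-odds update: each supporting fact adds `step`,
    each undercutting fact subtracts it."""
    log_odds = sum(step if f == "pro" else -step for f in facts)
    return 1 / (1 + math.exp(-log_odds))

rng = random.Random(1)
# A pool of 100 true facts: 40% support the user's hunch, 60% undercut it.
pool = ["pro"] * 40 + ["con"] * 60

balanced = rng.sample(pool, 50)                   # neutral presenter
agreeable = [f for f in pool if f == "pro"][:30]  # only supportive facts

# The agreeable selection drives belief toward certainty even though
# every individual fact it presented was accurate.
```

The balanced sample pulls belief toward where the evidence actually points; the agreeable selection pushes it near certainty in the opposite direction. This is the shift from falsehood to framing: no statement is wrong, yet the user's picture is.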
SLEEK SMARTWATCHES

Amazfit T-Rex Ultra 2
Features: 1.5-inch Amoled display, sapphire glass, Grade 5 titanium build, 10 ATM water resistance, up to 30 days battery, dual-band GPS with six satellite systems, offline maps and navigation, 64 GB storage, flashlight with SOS, Bluetooth calls
Price: RM2,299–RM2,399
Built for extreme environments, the Amazfit T-Rex Ultra 2 focuses on durability, navigation and long-duration performance for outdoor use. Its titanium construction and sapphire glass provide added protection in harsh conditions, including underwater use. The watch introduces full-colour maps with offline route planning, allowing users to navigate and adjust routes without relying on a phone. Extended battery life supports multi-day activities, while onboard storage enables access to maps, music and workout data. Additional tools such as checkpoint alerts and climb segmentation help manage pacing during long routes. It also integrates with the Zepp app for tracking health, training and recovery insights.

Xiaomi Watch 5
Features: 1.54-inch display, sapphire glass, stainless steel frame, Wear OS 6 with Google apps, Google Gemini support, gesture controls, dual-chip system, up to six days battery (18 days in saver mode), dual-band GNSS, Bluetooth calling
Price: RM1,199
A flagship smartwatch, the Xiaomi Watch 5 combines Wear OS with built-in Google services for everyday use without relying on a phone. It uses gesture-based controls for hands-free interaction and supports Google apps such as Maps, Wallet and Assistant features. The dual-chip architecture balances performance and efficiency, supporting extended battery life across different usage modes. A durable build with sapphire glass and stainless steel adds protection for daily wear. Health and fitness tracking features are integrated with outdoor navigation support, while HyperConnect enables control across Xiaomi devices.

Huawei Watch GT Runner 2
Features: Dual GNSS antenna system, 3D floating antenna design, intelligent fusion positioning, X-DR positioning, marathon mode, lightweight 34.5g build, advanced running metrics, route reconstruction without GPS
Price: TBA
The Huawei Watch GT Runner 2 is a next-generation, running-focused smartwatch. It introduces a redesigned antenna system aimed at improving tracking stability in dense urban areas, tunnels and shaded routes. The watch combines sensor data with positioning algorithms to reconstruct running routes when GPS signals drop. It also adds marathon-focused tracking tools developed with professional runners, supporting pacing and recovery insights. With a lightweight build, it is designed for extended wear across casual and serious training use.