
LYFE MONDAY | MAR 2, 2026

Being certain of hallucinations
Why AI often sounds confident while packing answers with false information
BY AMEEN HAZIZI

ASK a chatbot a straightforward question and it will usually answer with confidence. The tone is assured. The structure is clean. The citations look convincing. Then you discover it is wrong.

In artificial intelligence (AI), this is known as a hallucination. An AI hallucination refers to a response that contains false or misleading information presented as fact. The term borrows loosely from psychology, but the similarity ends there. Humans hallucinate sensory experiences. AI systems generate constructed answers that appear factual but are not grounded in reality. It is not a rare bug. It is a limitation built into how large language models (LLMs) function.

Prediction, not understanding
LLMs such as ChatGPT are trained to predict the next word in a sequence. They do not understand meaning the way humans do. They analyse patterns across vast datasets and calculate probabilities. When information is missing, they still produce an answer because their training rewards completion and fluency.

ChatGPT developer OpenAI has described hallucinations as a tendency to invent facts in moments of uncertainty. Some researchers prefer terms such as fabrication or factual error, arguing that “hallucination” humanises software. Machines are not perceiving false realities. They are generating statistically likely sequences of words. Still, the term has entered the public vocabulary. In 2023, the Cambridge Dictionary updated its definition of hallucination to include its AI meaning, reflecting how widely the concept had spread after ChatGPT’s release in late 2022.

From glitch to consequence
As generative AI moved into professional settings, the implications became clearer.

In 2023, a New York lawyer was fined US$5,000 (RM19,450) after submitting fake legal precedents generated by ChatGPT in a case. The chatbot had fabricated the citations. The lawyer did not verify them before filing. In Canada, a tribunal ordered Air Canada to honour a bereavement fare policy incorrectly described by its customer service chatbot. The airline argued the bot was a separate legal entity. The tribunal rejected that claim and held the company responsible.

Academic research has also felt the effects. Studies have found that language models frequently generate fabricated or inaccurate references. Lecturers and librarians report spending more time checking citations submitted by students. In medical and scientific contexts, incorrect references are more than an inconvenience. They can mislead readers and erode trust. In journalism and public policy, the risk is similar. A persuasive but incorrect answer can circulate widely before it is challenged.

Why hallucinations happen
There is no single cause. Some errors stem from training data. If datasets are incomplete, inconsistent or contain inaccuracies, models may learn patterns that are not fully grounded in reliable information. In tasks such as summarisation, differences between the source material and the target output can encourage creative gap-filling.

Other issues arise from model design. LLMs are trained to predict the most probable next word. If they lack sufficient information, they still produce a best guess. As responses become longer, small inaccuracies can compound, creating a cascade of misleading claims.
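To make the prediction idea concrete, here is a deliberately tiny Python sketch. It is not how any real chatbot works: the word counts are invented and the “model” is just a lookup table. What it shows is the behaviour described above, a system that always returns the statistically likeliest next word, with no step at which it checks whether the result is true or admits it does not know.

```python
# Toy illustration only: a hand-made "next word" predictor.
# The counts below are invented for this example. Real LLMs learn billions of
# parameters, but the core step is the same: pick a statistically likely next
# word and keep going, whether or not the underlying evidence is solid.
from collections import Counter

# Invented counts of which word followed which in some imaginary training text.
NEXT_WORD_COUNTS = {
    "the": Counter({"court": 5, "lawyer": 3, "chatbot": 2}),
    "court": Counter({"ruled": 4, "said": 2}),
    "ruled": Counter({"that": 6}),
    "that": Counter({"the": 3, "citations": 1}),
}

def next_word(word: str) -> str:
    """Return the most likely next word; guess even when evidence is thin."""
    options = NEXT_WORD_COUNTS.get(word)
    if not options:
        return "<end>"
    # No notion of "I am not sure" here: the top-ranked option is always returned.
    return options.most_common(1)[0][0]

words = ["the"]
while words[-1] != "<end>" and len(words) < 10:
    words.append(next_word(words[-1]))

print(" ".join(words))
# Prints: the court ruled that the court ruled that the court
```

Even this toy version produces fluent-looking output. Nothing in the loop distinguishes a well-supported continuation from a coin-flip guess, which is the gap the rest of this article is about.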

AI-generated summaries can introduce details that were never present in the original source text.

Research into model interpretability suggests that some systems contain mechanisms that allow them to decline to answer when unsure. When those safeguards fail, the model may respond despite lacking adequate information.

Hallucinations are not confined to text. Image recognition systems can misidentify objects. Subtle alterations to images can cause AI to misclassify stop signs or everyday scenes. Text-to-image tools have produced historically inaccurate depictions, drawing criticism and forcing companies to adjust features. In high-stakes areas such as medical diagnostics, chip design and supply chain logistics, such errors carry serious risks.

Can they be reduced?
Most researchers agree hallucinations cannot be removed completely. They are part of how these systems are built. Because AI predicts likely words rather than checking facts the way a human would, mistakes are bound to happen.

That said, companies are trying to reduce them. One method is simple in concept. Instead of letting the AI rely only on what it learned during training, it can be linked to trusted sources. Before answering, the system searches databases or the web and checks whether the information is supported. This helps ground responses in real, verifiable material.

Another method focuses on training. Developers can teach models to admit uncertainty rather than guess. They can reward answers that say “I do not know” when information is unclear. Some systems are also designed to generate multiple possible answers and compare them before deciding on the most reliable one.
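Those approaches can be pictured with a short Python sketch. Everything in it is a made-up stand-in for illustration: the two-line knowledge base, the word-overlap retrieval and support checks, and the fixed candidate answers would all be real search and model components in practice. What it shows is the overall shape described above, fetch supporting material first, compare candidate answers against it, and fall back to “I do not know” rather than guessing.

```python
# Simplified sketch of "grounded" answering with an option to abstain.
# The knowledge base, the retrieval step and the candidate answers are all
# hard-coded stand-ins for a real search index and a real language model;
# only the shape of the checking logic is the point.

KNOWLEDGE_BASE = [
    "Air Canada was held responsible for its chatbot's incorrect advice.",
    "A New York lawyer was fined after filing fabricated ChatGPT citations.",
]

def retrieve(question: str) -> list[str]:
    """Stand-in retrieval: return stored passages sharing words with the question."""
    q_words = set(question.lower().split())
    return [p for p in KNOWLEDGE_BASE if q_words & set(p.lower().split())]

def supported(answer: str, passages: list[str]) -> bool:
    """Crude support check: most of the answer's words appear in one passage."""
    a_words = set(answer.lower().split())
    return any(len(a_words & set(p.lower().split())) >= len(a_words) // 2
               for p in passages)

def answer(question: str, candidates: list[str]) -> str:
    """Compare candidate answers and keep one that the retrieved sources support."""
    passages = retrieve(question)
    for candidate in candidates:
        if supported(candidate, passages):
            return candidate
    return "I do not know."  # abstain instead of guessing

print(answer(
    "Who was held responsible for the chatbot's advice?",
    ["Air Canada was held responsible for its chatbot's advice.",
     "The chatbot was a separate legal entity."],
))
# Prints: Air Canada was held responsible for its chatbot's advice.
```

Real systems replace the word-overlap comparison with semantic search and model-based verification, but the trade-off discussed next is visible even here: every answer requires extra retrieval and checking before anything is shown to the user.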

There are trade-offs. Extra fact-checking and cross-checking require more computing power and can slow down responses. In high-stakes fields such as healthcare or engineering, that extra time is worth it. In everyday chatbots, users often prefer fast replies, even if they are not perfect.

Question of trust
Interestingly, generative techniques described as hallucinations have also been used productively in scientific research, such as proposing new protein structures that are later tested in laboratories. In those contexts, outputs are rigorously validated against physical reality. The difference is verification.

As generative AI becomes embedded in education, media and governance, expectations must be recalibrated. These systems can assist with drafting, summarising and brainstorming. They are not independent authorities. The more fluent and persuasive they become, the easier it is to overlook their limitations. The challenge is not that machines produce imaginative outputs. It is that they do so with confidence, even when certainty is not warranted.

Different AI models trained on similar data can produce sharply different responses to the same question. – ALL PICS FROM 123RF

A single unchecked AI mistake can multiply as it is repeated and reshared.
