WEDNESDAY | JAN 14, 2026
COMMENT by Galvin Lee Kuan Sian
Complement AI Governance Bill

Malaysia's AI rulebook should ensure the country is producing AI-ready citizens who understand their rights and responsibilities in clear, practical terms. – AI GENERATED IMAGE BY AZURA ABAS

ARTIFICIAL intelligence (AI) has quietly moved from novelty to infrastructure. It is now embedded in the tools we use, the content we consume and the decisions that shape our everyday lives. Most people do not notice it until something feels off – a suspicious message that sounds almost real, a convincing video clip that spreads too quickly or a decision made about a person with no clear explanation. This is why Malaysia's next policy step matters to the public and not only to technology professionals.

Malaysia is preparing its first dedicated AI rulebook. Digital Minister Gobind Singh Deo has stated that the country's first AI Governance Bill is close to completion and that an AI legislative framework is expected to be presented to the Cabinet in June. For many, that may sound distant and administrative. In reality, it will influence how much Malaysians trust what they see online, how safe they feel using digital services and how fairly they believe institutions treat them.

A common misconception is that AI governance exists to slow innovation. A better way to understand it is this: innovation cannot scale without trust. When trust collapses, consumers hesitate, businesses become cautious, institutions lose credibility, and scammers and malicious actors fill the gap. The real economic risk is not that Malaysia is setting rules too early; the greater risk is that public confidence will erode faster than safeguards can catch up.

Malaysia has already taken an important first step through the National Guidelines on AI Governance and Ethics released in 2024. Voluntary guidelines are useful because they signal the values Malaysia wants AI to reflect, such as fairness, transparency, privacy and accountability. They help responsible organisations develop best practices. However, guidelines have limits; they work best when everyone is already motivated to do the right thing. Law is necessary when incentives pull in the opposite direction – when speed, profit or convenience tempts organisations to cut corners.

A practical AI rulebook should deliver three outcomes that ordinary Malaysians can immediately recognise. The first is visibility. When AI affects them in meaningful ways, most people will not require technical detail, but they will need clarity. If a decision about employment, education opportunities, financial access, insurance or essential services is driven mainly by automation, the public should not be left guessing. Basic disclosure will reduce anxiety and rumours. It will also encourage organisations to be more careful about how these systems are deployed.

The second is accountability, which cannot be outsourced. Malaysians should not accept a closed-door response when an AI-driven process causes harm. The organisation that deploys AI should remain responsible for outcomes, including a duty to investigate errors and prevent repeat incidents. The public will not care whether a harmful outcome came from a vendor, a model or an internal tool; they care about who will fix it and how quickly.

The third is a realistic path to correction. Trust does not require perfection; it requires recourse. People will tolerate mistakes when there is a fair process to challenge a decision and obtain a timely human review. What damages trust is a feeling of helplessness, where an automated decision cannot be explained and cannot be appealed. This is especially important in high-stakes contexts, where a single wrong outcome can affect someone's livelihood, education trajectory or personal well-being.

Malaysia should also avoid two unhelpful extremes as it develops this framework. One is a law that sounds strong but changes little in practice. If the rules remain vague, people will still be unsure of how to respond when AI affects them. That leads to a familiar cycle of public frustration, social media outrage and institutional damage control. The other extreme is a compliance burden so complex that only large organisations can manage it. This would widen inequality in a quiet but significant way. Large firms can hire compliance teams and consultants. Smaller firms often adopt AI with off-the-shelf tools and do not have specialised legal capacity. If safe AI becomes expensive and confusing, good actors may disengage and bad actors may ignore the rules entirely. Effective governance should make responsible behaviour easier, not harder.

A balanced approach is possible and it starts with a simple principle: not all AI uses carry the same level of risk. A system that recommends songs or summarises content is not the same as one that influences hiring, loans, scholarships or public services. The rules should be stricter when the stakes are higher and harm is more likely and harder to reverse. A risk-based approach will protect the public while leaving room for innovation in lower-stakes applications.

Communication will matter as much as legislation. If AI rules are only readable by lawyers and specialists, they will not build public confidence. Malaysia should pair any formal legislation with plain-language explanations that tell citizens what protections exist, how to report harm and what to expect when a complaint is raised. A modern rulebook is not only a legal text; it is also a public trust document.

Education must be part of this conversation. AI literacy cannot be reduced to learning how to use new tools; the deeper requirement is judgement. Students need to understand deepfakes, privacy risks, manipulation tactics and the basics of verifying information. They also need to know how to respond when automated systems make errors. Malaysia's AI rulebook should also ensure the country is producing AI-ready citizens who understand their rights and responsibilities in clear, practical terms.

The proposed AI Governance Bill and the wider legislative framework present a rare opportunity. The country can set rules before trust breaks completely. If Malaysia gets this right, the public will see tangible benefits. People will feel less exposed to scams and impersonation. They will know when AI is being used in high-impact decisions. They will have a realistic way to challenge harmful outcomes. Businesses will innovate with greater certainty because boundaries are clearer.

AI governance will only sound technical if we keep it in technical language. In truth, it is a national decision about how technology should treat people. Malaysia's first AI rulebook should be judged by one standard – whether ordinary Malaysians can understand it, trust it and use it where it matters.

Galvin Lee Kuan Sian is a lecturer and programme coordinator in business at a private college in Malaysia and a PhD candidate and researcher in marketing at Universiti Malaya. Comments: letters@thesundaily.com