theSun | Sunday Special | Sunday, June 29, 2025
AI scams: More dangerous than you thought

BY ASHRAF WAHAB

ARTIFICIAL Intelligence (AI) has revolutionised how we work, shop and communicate. But as AI tools become more sophisticated, so do the threats they pose, especially in the hands of cybercriminals. AI scams are evolving fast, and many Malaysians, especially urban professionals, may not realise how vulnerable they are.

The rise of deepfake deceptions

In early 2024, a multinational firm in Hong Kong was tricked into transferring USD 25 million after participating in what appeared to be a video call with its CFO and other executives. The shocking twist? Every participant was a deepfake, generated using AI to mimic faces, voices and even mannerisms in real time.

"This incident shows how attackers can exploit trust in visual and verbal cues to commit fraud," said Paul Ducklin, a principal research scientist at Sophos, in a statement following the case. "What was an outlandish concept is now a real threat."

This kind of scam is especially dangerous because it circumvents traditional verification methods. People are less likely to question instructions when someone sounds and looks like a trusted figure. The danger is amplified in high-pressure scenarios involving finances, contracts or confidential data.

Malaysia is not immune

Back home, a Malaysian HR executive narrowly avoided a data leak when she received a voice note on WhatsApp that sounded identical to her CEO. The message requested urgent employee records, and the voice was spot on, down to the cadence and accent. Fortunately, she verified the request by phone, only to discover it was a scam.

Cybersecurity professionals in Malaysia are sounding the alarm. According to a 2023 report by CyberSecurity Malaysia, AI-generated scams and phishing attacks have risen by 34% year-on-year. This includes voice cloning, deepfake videos and AI-generated phishing emails that are alarmingly convincing.

Scammers are also leveraging local cultural context to appear more authentic. They use Bahasa Malaysia, local slang and regional dialects to make their messages more believable, and sometimes mimic Malaysian government agencies or banks, using AI to generate official-sounding communication.

The scammer's toolkit

AI tools are not just powerful; they are increasingly easy to use. Some common tactics include:

• Voice cloning: Tools like ElevenLabs can generate near-perfect voice replicas using just a few seconds of audio.
• AI chat mimicry: Scammers use chatbots that mirror your texting style to impersonate you in real time.
• Deepfake video: Open-source tools allow real-time face-swapping, making fake video calls dangerously convincing.

Most of these tools are cheap or free online, putting them within reach of opportunistic fraudsters.

Who's most at risk?

Corporate professionals and decision-makers are top targets because of the value of their access and influence. Emails, voice notes or calls that appear to come from a superior often bypass suspicion.

Yeo Siang Tiong, general manager for Southeast Asia at Kaspersky, has been quoted in various media outlets highlighting that cybercriminals know executives and finance teams make fast decisions under pressure. Social engineering powered by AI therefore becomes a powerful weapon.

In one case, a finance manager in Singapore transferred nearly SGD 700,000 after receiving instructions in a group chat from what appeared to be her CEO. The tone, structure and even grammar matched his usual messages. Only later did she realise an AI-mimicked identity had duped her.

Startups and SMEs are particularly vulnerable because they often lack the resources to invest in robust cybersecurity infrastructure. With fewer layers of internal checks and balances, scammers find these organisations easier to exploit.

Why traditional security isn't enough

Traditional security measures like antivirus software, firewalls and even standard phishing filters are no longer sufficient to stop AI-driven attacks. These scams don't always rely on malware; instead, they exploit human psychology and social engineering in unprecedented ways.

AI-generated phishing emails, for example, no longer contain the obvious typos or strange formatting that used to be red flags. They are grammatically perfect, culturally aware and can even include references to recent events or meetings scraped from social media.

This is why experts call for a paradigm shift in our approach to cybersecurity. It is no longer just about protecting systems, but also about preparing people to question even the most familiar digital interactions.

How to stay ahead

In the face of such high-tech deception, human vigilance remains critical. Here are some steps professionals and companies can take:

• Verify through multiple channels: Always confirm sensitive requests using a secondary method, like a direct phone call or face-to-face check.
• Strengthen authentication: Use two-factor authentication for all work-related platforms.
• Educate your teams: Regular training on digital hygiene and scam recognition can help prevent lapses.
• Invest in AI detection tools: Enterprise-grade cybersecurity solutions now offer AI anomaly detection to flag suspicious communications.
• Limit public exposure: Be cautious about sharing audio or video clips on public platforms, as they can be harvested to train voice or face models.

Additionally, companies should establish clear internal protocols for handling payments, data sharing and account access requests. This helps create a culture of verification and accountability.

Awareness is no longer optional

AI scams are no longer a distant possibility; they are today's reality. With high-level impersonations and seamless forgeries, scammers exploit the very tools we use to drive innovation.

Malaysia's digital economy is growing, and with it the need for stronger digital resilience. Urban professionals and corporate leaders must stay informed and vigilant. The more we understand the threats, the better we can defend against them.

Awareness is no longer optional. In this new digital age, being too trusting can cost more than money; it can cost your credibility, data and business.

The next time you receive a message that seems too urgent or a call that feels slightly off, take a moment. Verify, question and, when in doubt, don't click, don't send and don't trust too easily.