LYFE MONDAY | JAN 26, 2026

Malaysian Paper @thesundaily

Real or fake?
Predictions on how AI will shape content, threats and cyberdefence in the coming year

ASIA Pacific (Apac) is no longer just participating in the global artificial intelligence (AI) race, it is setting the pace.

In Apac, 78% of surveyed professionals use AI at least weekly, compared with 72% globally, highlighting the region’s rapid and widespread adoption of AI in daily workflows.

But what truly distinguishes Apac is how AI is taking root: adoption is rising from the ground up, powered by hyper-connected consumers, massive device penetration and tech-savvy younger populations that integrate AI into their daily experiences long before enterprises formally roll it out. This bottom-up momentum, reinforced by robust investment, CEO-led strategies and fast-growing digital markets, is turning Apac into the world’s most dynamic AI proving ground, where “AI frontier” companies are born and where the future of enterprise transformation is emerging first.

As AI adoption accelerates across the region, the implications extend far beyond enterprise productivity and customer experience. For cybersecurity leaders, Apac’s position at the forefront of AI innovation makes it both a model and a warning: the same technologies driving business transformation are also redefining how threats are created, automated and deployed.

Kaspersky experts outline how the development of AI is reshaping the cybersecurity landscape in 2026, for individual users and for businesses. Large language models (LLMs) are influencing defensive capabilities while simultaneously expanding opportunities for threat actors.

It is increasingly difficult to identify content, visual or otherwise, generated by AI. – 123RF pic

Deepfakes are mainstream technology and awareness will continue to grow
Companies are increasingly discussing the risks of synthetic content and training employees to reduce the likelihood of falling victim to it. As the volume of deepfakes grows, so does the range of formats in which they appear. At the same time, awareness is rising not only within organisations but also among regular users – end consumers encounter fake content more often and better understand the nature of such threats. As a result, deepfakes are becoming a stable element of the security agenda, requiring a systematic approach to training and internal policies.

Deepfake quality will improve through better audio and a lowering barrier to entry
The visual quality of deepfakes is already high, while realistic audio remains the main area for future growth. At the same time, content generation tools are becoming easier to use – even non-experts can now create a mid-quality deepfake in just a few clicks. As a result, the average quality continues to rise, creation becomes accessible to a far broader audience, and these capabilities will inevitably continue to be leveraged by cybercriminals.

Efforts to develop a reliable system for labelling AI-generated content will continue
There are still no unified criteria for reliably identifying synthetic content, and current labels are easy to bypass or remove, especially when working with open-source models. For this reason, new technical and regulatory initiatives aimed at addressing the problem are likely to emerge.

Online deepfakes will continue to evolve but remain tools for advanced users
Real-time face and voice swapping technologies are improving, but their setup still requires more advanced technical skills. Wide adoption is unlikely, yet the risks in targeted scenarios will grow – increasing realism and the ability to manipulate video through virtual cameras make such attacks more convincing.

Open-weight models will approach top closed models in many cybersecurity-related tasks, creating more opportunities for misuse
Closed models still offer stricter control mechanisms and safeguards, limiting abuse. However, open-source systems are rapidly catching up in functionality and circulate without comparable restrictions. This blurs the difference between proprietary and open-source models, both of which can be used efficiently for undesired or malicious purposes.

The line between legitimate and fraudulent AI-generated content will become increasingly blurred
AI can already produce well-crafted scam emails, convincing visual identities and high-quality phishing pages. At the same time, major brands are adopting synthetic materials in advertising, making AI-generated content look familiar and visually “normal”. As a result, distinguishing real from fake will become even more challenging, for users and for automated detection systems.

AI will become a cross-chain tool in cyberattacks and be used across most stages of the kill chain
Threat actors already employ LLMs to write code, build infrastructure and automate operational tasks. Further advances will reinforce this trend: AI will increasingly support multiple stages of an attack, from preparation and communication to assembling malicious components, probing for vulnerabilities and deploying tools. Attackers will also work to hide signs of AI involvement, making such operations harder to analyse.

AI will become a more common tool in security analysis and influence how SOC teams work
Agent-based systems will be able to continuously scan infrastructure, identify vulnerabilities and gather contextual information for investigations, reducing the amount of manual routine work. As a result, specialists will shift from manually searching for data to making decisions based on already-prepared context. In parallel, security tools will transition to natural-language interfaces, enabling prompts instead of complex technical queries.
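The natural-language security interfaces described above can be illustrated with a toy sketch. Everything here is hypothetical: the LogQuery structure, the keyword rules and the parse_prompt function are simplified stand-ins for the LLM-backed translation a real SOC tool would perform, shown only to make the idea of "prompts instead of complex technical queries" concrete.

```python
# Illustrative sketch only: mapping an analyst's free-text prompt to a
# structured log query. A real tool would use a language model, not
# keyword rules; all names here are made up for the example.

from dataclasses import dataclass


@dataclass
class LogQuery:
    event_type: str = "any"
    last_hours: int = 24
    severity: str = "any"


# Minimal keyword rules standing in for the language model.
RULES = {
    "login": ("event_type", "authentication"),
    "malware": ("event_type", "malware_detection"),
    "critical": ("severity", "critical"),
}


def parse_prompt(prompt: str) -> LogQuery:
    """Translate a free-text analyst prompt into a structured query."""
    query = LogQuery()
    words = prompt.lower().split()
    for word in words:
        if word in RULES:
            field_name, value = RULES[word]
            setattr(query, field_name, value)
    # Very rough time-window extraction: "... last N hours".
    if "last" in words:
        i = words.index("last")
        if i + 1 < len(words) and words[i + 1].isdigit():
            query.last_hours = int(words[i + 1])
    return query


q = parse_prompt("show critical malware events from the last 6 hours")
print(q)  # LogQuery(event_type='malware_detection', last_hours=6, severity='critical')
```

The point of the sketch is the shape of the workflow, not the parsing: the analyst states intent in plain language, and the tool produces the structured query that would previously have been written by hand.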

