
LYFE MONDAY | JAN 12, 2026


AI-voidance common in workplace

• Many workers fear losing jobs, being judged by supervisors

BY MARK MATHEN VICTOR

ARTIFICIAL intelligence (AI) tools are now part of many workplaces, as they have a proven track record in speeding up tasks, aiding data analyses and automating routine work. Yet a surprising number of people avoid using them daily. Research from surveys and industry reports shows several common reasons behind this hesitation.

A 2024 employer study in Malaysia found that one in three workers have never used AI tools at work and another 10% have only tried them once. This suggests a large segment of the workforce still avoids these tools altogether. Though younger workers such as Gen Z and millennials adopt AI more often, the older generations lag behind, with 42% of Gen Xers and 73% of Baby Boomers reporting no AI use in their jobs.

A lack of clear guidelines or directives keeps AI adoption low and workers anxious. – PICS FROM FREEPIK

Fear of job loss, career impact
One of the strongest reasons people avoid AI tools is fear that the technology could replace their jobs. A 2025 global survey of adults in Malaysia, part of a larger Ipsos AI Monitor study, found that 63% of adults believe AI will replace their current job in the next five years, even as 54% think AI could improve their work. This mixed feeling increases anxiety and makes some workers wary of adopting the tools daily.

Similarly, in a separate Microsoft/LinkedIn survey across 31 countries, about 52% of workers were reluctant to admit using AI for complex tasks because they feared it might harm their career development. Additionally, there are worries that supervisors could view AI use as a sign they do not need as much skill or human contribution.

Lack of training or clear guidance
Even when organisations introduce AI tools, workers may avoid them because they were not trained properly. A survey of workplaces found that many employees use AI tools without any formal guidance or company policies, leading to confusion and risk. In one report, only about one third of workers said their organisations had clear policies for using generative AI tools, leaving the majority without direction. Without training or rules, employees fear making mistakes, exposing sensitive data or violating policies. Another global study showed that many employees have used AI in ways that go against workplace rules, as nearly half admitted to uploading sensitive information to public AI tools and a majority had used AI without knowing if it was allowed.

Trust, privacy, ethical concerns
Trust issues also hold people back. In Malaysia, a Workday report found that while many employees are comfortable using AI as a support tool, only about 23% are okay with an AI agent managing them. Workers cited ethical risks such as bias, discrimination and misuse of AI as key concerns. There is also fear that AI could reduce critical thinking or diminish quality interactions in the workplace.

Privacy and data security worries contribute too. Without clear corporate guidelines, employees fear that using AI with company data might expose confidential information, which can lead to sanctions, breaches or compliance issues.

From mundane to more complex problem-solving, AI has sped up work by leaps and bounds.

Beyond technical or economic reasons, social attitudes matter. Some workers avoid AI because they think peers will judge them for it or because they fear being seen as over-reliant on machines in place of human skill. A survey in the UK found that many workers deliberately avoid discussing their AI use with managers or colleagues out of concern it might reflect poorly on them.

As such, people still avoid AI tools in daily work for clear, verified reasons, from fear of job loss, lack of training, unclear policies, privacy and ethical concerns, to social stigma. Organisations that want higher adoption rates need to address these issues directly with training, transparent guidelines and open dialogue so workers feel comfortable using AI safely and confidently.

AI in the workplace sees reactions ranging from acceptance to scorn.

How criminals manage, use stolen data following phishing attacks

NOT everyone uses protective solutions on their devices and phishing remains one of the most prevalent cyber threats, with attackers luring users to fake websites where they unwittingly surrender their login credentials, personal information or bank card details.

Over 117 million phishing links were clicked in the Asia Pacific region from November 2024 to October 2025 – all of which were detected and blocked by Kaspersky solutions. Kaspersky experts traced the data stolen in phishing attacks, highlighting how cybercriminals use this data on underground markets. The analysis uncovers the tools and processes used to collect, verify and monetise stolen credentials, personal details and financial data, often priced on dark web forums at US$50 (RM204) or less for bulk sales. Higher-value accounts fetch premium prices: cryptocurrency platforms average US$105, banking accounts US$350, e-government portals US$82.50 and personal documents US$15.

According to Kaspersky’s findings, a staggering 88.5% of phishing attacks targeted online account credentials, 9.5% were focused on personal data such as names, addresses and dates of birth, and 2% were aimed at bank card information.

Once captured, these personal details are funnelled through specialised automated systems which help to manage large amounts of data. These systems are offered as a platform-as-a-service and are either created by the attackers themselves or based on legitimate frameworks for creating websites or apps.

An example of an administration panel through which stolen data is managed.

According to Kaspersky Digital Footprint Intelligence, attackers consolidate stolen data into “dumps” – large batches of verified information – emphasising the enduring risks to victims years after the initial breach. Data is meticulously verified using scripts to check its validity across services and is then combined into comprehensive “digital dossiers” that enhance its worth for targeted attacks, such as whaling schemes against high-profile individuals.

“Stolen data evolves into a persistent weapon for cybercriminals. By leveraging open-source intelligence and old breach data, attackers can craft highly personalised scams, turning one-time victims into long-term targets for identity theft, blackmail or financial fraud,” said Kaspersky security expert Olga Altukhova.

To mitigate these risks, users are recommended to:
• Block compromised bank cards by contacting your financial institution.
• Change passwords across accounts that are suspected of compromise using unique combinations and enable multi-factor authentication wherever possible.
• Review active sessions in messaging apps, online banking and other services.
• Utilise trusted security solutions to protect your devices and monitor for data leaks.

Phishing attacks mostly target online account credentials.
