While AI enhances user experiences in every aspect of life, it is also fuelling a new wave of cyber scams that target individuals and businesses alike. — Photo courtesy of Kaspersky
HCM CITY — The rapid proliferation of advanced artificial intelligence (AI) systems has forever changed every aspect of life.
While chatbots and algorithms enhance the user experience, they have also opened personal and corporate lives to new, and darker, social engineering attacks, according to experts.
The 2025 Identity Fraud Report pointed out that a deepfake attempt occurs every five minutes on average.
The World Economic Forum has speculated that 90 per cent of online content might be synthetically generated by 2026.
At first glance one might assume that the biggest deepfake and AI-phishing targets would be celebrities or high-profile figures, but experts warn that the primary targets and objectives remain consistent with those of traditional scams and fraud. Individuals are targeted for their personal, banking and payment information, and businesses as custodians of valuable data and funds.
They highlight three ways adversaries can already use AI to acquire data: AI-enhanced phishing, audio deepfakes and video deepfakes.
Phishing is a type of internet fraud designed to trick victims into inadvertently disclosing credentials or card details. It can target both individuals and businesses, and may be carried out en masse or tailored to a specific victim.
Traditional phishing messages and pages are often generic, poorly written and riddled with errors, but large language models (LLMs) now enable attackers to create personalised, convincing messages and pages with good grammar, flow and structure.
These attacks, often disguised as notifications from banks or trusted service providers, can now appear in multiple languages and even mimic the style of familiar contacts.
Deepfakes are synthetic media where AI convincingly replicates a person’s likeness or voice.
With just a few seconds of a voice recording, AI can generate audio clips in a person’s voice, allowing bad actors to create fake voice messages mimicking trusted sources such as friends or family.
They could then use these recordings to target victims.
It is alarming but entirely possible: attackers could exploit your voice to request urgent financial transfers or sensitive information, taking advantage of personal trust to commit fraud at both personal and corporate levels.
In the case of video deepfakes, threat actors can use AI tools to create fake footage from as little as a single image.
They can even swap faces in a video, refine AI-generated imperfections and add a realistic voice to the character.
Kaspersky said it has already observed cybercriminals using LLMs to generate content for large-scale phishing and scam attacks.
These attacks often leave distinctive AI-specific artifacts, such as phrases like “As an AI language model…” or “While I can’t do exactly what you want, I can try something similar.”
These and other markers can expose fraudulent content generated with LLMs. Such models enable perpetrators to automate the creation of dozens or even hundreds of phishing and scam web pages with convincing content, making these attacks more plausible.
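By way of illustration, a minimal sketch of how a defender might scan suspect page text for such tell-tale phrases; the phrase list, function name and sample text below are hypothetical examples, not drawn from Kaspersky’s tooling:

```python
# Hypothetical checker for LLM boilerplate left behind when attackers
# paste generated text into scam pages without proofreading it.
LLM_ARTIFACTS = [
    "as an ai language model",
    "while i can't do exactly what you want",
    "i cannot fulfill that request",
]

def find_llm_artifacts(text: str) -> list[str]:
    """Return any known LLM artifact phrases found in the text."""
    lowered = text.lower()
    return [phrase for phrase in LLM_ARTIFACTS if phrase in lowered]

page = "Dear customer, As an AI language model, I must inform you ..."
hits = find_llm_artifacts(page)
if hits:
    print(f"Possible machine-generated scam content: {hits}")
```

A fixed phrase list is, of course, only one weak signal: attackers can trivially strip such strings, so real detection pipelines combine many indicators rather than relying on any single marker.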
The company said while AI-driven scams and deepfakes present growing challenges, understanding these risks is an important first step toward addressing them.
There is no need for fear: fostering awareness and improving cyber literacy are key, it said.
Individuals can protect themselves by staying informed, vigilant and thoughtful in their online interactions, while organisations could take proactive steps to reduce AI-related risks through proven security solutions, enhanced security practices and initiatives, it added. — VNS