*A representative of the National Cybersecurity Association outlines the current state of Việt Nam’s cybersecurity workforce at the International Conference and Exhibition on Cyber Safety in May. — VNA/VNS Photos*
HÀ NỘI — Online fraud has expanded far beyond small, opportunistic tricks, evolving into an industrial-scale operation that exploits human psychology and rapidly advancing technologies.
Despite years of public warnings, users across Việt Nam continue to fall victim to these operations.
Psychologists and cybersecurity specialists say the core problem lies not in technical sophistication, but in instinctive human tendencies: curiosity, fear and the impulse to share information before verifying it.
As Việt Nam accelerates its digital transformation, cyberspace has become broader and significantly more complex.
Dr Sreenivas Tirumala, senior lecturer in Information Technology at RMIT University Việt Nam, said the digital environment is a “double-edged sword”, noting that rising capability has fuelled a surge in high-tech scams.
A recent Viettel Cyber Security report illustrates the scale of the threat.
In the third quarter of 2025, nearly 4,000 fraudulent domains and 877 impersonating websites were detected. Some 6.5 million user accounts were stolen – a 64 per cent rise from the previous quarter.
Yet the underlying strategies remain familiar. Most scams still begin by triggering curiosity with sensational posts, enticing investment pitches or manipulated deepfake videos. These stimuli provoke immediate reactions by activating the brain’s emotional alarm system, prompting action before rational judgement is applied.
Dr Tirumala said that users often click immediately out of fear of missing an opportunity or getting into trouble, allowing that impulse to override the instinct to verify.
Cybercriminals exploit this vulnerability by cloning websites and automating thousands of fake platforms that harvest logins, lure users into downloading apps or steal personal data. Advances in AI tools have made these operations fast, cheap and easily replicated.
Many users are also attracted to seemingly free apps, such as AI photo editors or discount code generators. Specialists warn that such tools often act as gateways to an underground data economy, where stolen credentials are traded. Once compromised, victims can lose control of their accounts, finances or contacts.
Another powerful psychological driver is negative bias – humanity’s natural tendency to pay more attention to alarming information.
Lương Vân Lam, a professional communications lecturer at RMIT, said this bias, once vital for survival, now draws young people towards shocking content that they then share quickly, becoming inadvertent spreaders of misinformation.
Fear, confusion and herd mentality
If curiosity opens the door, fear often seals the trap. Police agencies say scammers commonly use threatening messages, such as warnings of account locks, overdue bills or investigations, to push victims into panic. In such moments, the brain prioritises immediate action over critical thought.
RMIT psychology lecturer Dr Gordon Ingram noted that there is a growing impact from secondary trauma caused by exposure to violent or disturbing online content. Anxiety, confusion and sleep disruption weaken judgement, particularly among the young. Social media algorithms worsen the issue by repeatedly serving similar content, heightening fear and vulnerability.
Dr Ingram explained that young people are more vulnerable because they lack experience in processing shocking material.
*Pupils join the Vietnam News Agency’s 'Say no to fake news' programme in Đà Nẵng, where 100 fifth-grade students learn how to avoid social media scams.*
“Combined with real-life pressures, a single exposure to negative content can trigger layered anxiety,” RMIT psychology lecturer Vũ Bích Phượng told Tin Tức (News) newspaper.
In this unsettled state, users are more likely to accept suspicious calls, transfer money or download dubious apps.
Herd mentality further amplifies risks. The widespread urge to share dramatic news quickly often sidelines verification. Many believe they are helping others by sharing, yet they inadvertently amplify falsehoods. This relentless flow of unfiltered content leaves users fatigued, overwhelmed and easier to manipulate.
Old tricks, new tech threats
New technologies have strengthened longstanding scams.
According to the Ministry of Public Security’s Department of Cybersecurity and High-Tech Crime Prevention, Việt Nam recorded more than 1,500 online fraud cases in the first eight months of this year, with losses of around VNĐ1.66 trillion (US$63 million).
Authorities also detected 4,532 malicious domains, nearly 90 per cent more than the previous year.
Deepfakes, AI-generated voices and fabricated videos have become core tools for criminal networks. Clues that once exposed scams – spelling mistakes, awkward design or unnatural voices – are now far less common. Calls impersonating relatives seeking urgent loans, messages mimicking banks or videos echoing celebrities are increasingly convincing.
Fraud campaigns now operate like industrial systems. The National Cybersecurity Association (NCA) reports that many groups follow assembly-line processes: gathering data, scripting calls, producing deepfake clips and distributing them across platforms. As online shopping and digital payments surge, so do opportunities for impersonation, from fake e-commerce staff to shock-sale refund scams.
A polished interface or urgent notification is often enough to deceive. Modern technology has not reinvented scams; it has made them more believable at scale.
Beneath the visible scams lies a structured underground economy. A recent Kaspersky report on the dark web labour market shows recruitment for illicit cyber work rising sharply. Job postings and applications on underground forums doubled in early 2024, with activity remaining high into 2025. The average applicant was 24 years old.
Fifty-five per cent of applicants expressed willingness to do anything for money, including programming malware, building phishing pages or managing fraud operations. The most sought-after roles include developers of attack tools, penetration testers, money launderers and groups specialising in stealing payment data.
Reverse engineers can earn $5,000 per month, penetration testers around $4,000 and programmers around $2,000. Those handling victim interaction or money transfers earn a percentage of stolen funds. With high returns and relatively low risk, criminals continue to recycle familiar scam formats on a much larger scale.
According to Vũ Duy Hiền from the NCA, leaked datasets – emails, phone numbers and customer lists – sold openly on the dark web allow criminals to launch mass attacks without sophisticated techniques.
Kaspersky expert Alexandra Fedosimova warned that the influx of young workers could escalate future fraud, noting that the greater the number of personnel these groups attract, the more mass-produced scam content they are able to generate.
Despite changing technologies, cybersecurity experts say most scam attempts still revolve around three clear warning signs that require no technical skill to spot.
The first is impersonation. Scammers mimic the appearance of trusted organisations, copy logos or exploit compromised accounts. A single typo in a domain name or unusual phrasing can indicate a threat.
The second is manufactured urgency. Threats of locked accounts, overdue bills or legal summons are designed to elicit immediate action.
The third is convenience-based manipulation: fake QR-payment links, 'fast login' pages or prompts to install apps from unfamiliar sources.
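The first two warning signs above lend themselves to simple automated checks. The following is a minimal illustrative sketch, not a real anti-fraud tool: it flags a domain that is a near-miss of a trusted one (a typosquat, sign one) and pressure language in a message (sign two). The trusted-domain list and keyword list are invented for the example.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: minimum single-character edits to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# Illustrative placeholders, not a vetted allow-list or keyword set.
TRUSTED_DOMAINS = {"vietcombank.com.vn", "shopee.vn"}
URGENCY_WORDS = {"urgent", "locked", "overdue", "summons", "immediately"}

def warning_signs(domain: str, message: str) -> list[str]:
    """Return human-readable flags for the first two scam warning signs."""
    signs = []
    # Sign 1 (impersonation): domain is within two edits of a trusted one.
    for good in TRUSTED_DOMAINS:
        if domain != good and edit_distance(domain, good) <= 2:
            signs.append(f"possible impersonation of {good}")
    # Sign 2 (manufactured urgency): pressure language in the message.
    if any(w in message.lower() for w in URGENCY_WORDS):
        signs.append("urgency language detected")
    return signs

# A one-letter typo ("vietconbank") plus threat wording trips both checks.
print(warning_signs("vietconbank.com.vn",
                    "Your account will be locked immediately"))
```

Real link-checking tools add many more signals (certificate age, homoglyph characters, reputation databases), but the underlying idea is the same: a trusted name that is almost, but not exactly, right is itself the red flag.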
*Online scams now span fake websites and apps, spoofed messages, social media cons and fraudulent calls – ensnaring older people, university students and even children.*
Strengthening digital self-defence
Experts stress that recognising warning signs is only the first step. The NCA recommends the “three No’s – three fasts” approach: no absolute trust, no installation of unfamiliar apps and no money transfers without verification; fast checking, fast disconnection when manipulation is suspected and fast reporting to authorities.
Effective defence requires combined legal safeguards, technological measures and user awareness. Multi-factor authentication, transaction alerts and link-checking tools help, but cannot replace vigilance.
In workplaces, Dr Jeff Nijsse, senior lecturer in Software Engineering at RMIT, advocates a “zero trust” culture, treating all links and files as suspicious until verified. Cross-checking procedures and rapid IT support are essential, particularly as malware often hides in PDF and Word files.
In education, RMIT specialists urge stronger digital mental health training to help students understand harmful content and recognise secondary trauma. Experts agree that skill building is more effective than blanket restrictions.
Digital safety, however, must be a shared responsibility. Social media platforms need quicker removal of impersonation accounts; e-commerce platforms must strengthen warnings about off-platform transactions; banks must advance authentication systems; and regulators must intensify action against fraudulent websites and spam. — VNS