Experts warn of deepfake scams

November 10, 2023 - 11:53

Deepfake has become a nightmare for society as a whole. — Illustrative photo courtesy of Kaspersky

HCM CITY — The growth of deepfake crime has become a major concern as technology development continues to reshape the social media and traditional media landscapes, experts said.

According to Kaspersky research into the darknet forums frequented by cybercriminals, demand for deepfake content considerably outstrips supply.

Kaspersky’s experts predict that deepfake scams will increase and employ a range of more sophisticated, higher-quality techniques. These vary from offering premium impersonation videos with full production services to broadcasting fake live streams on social media platforms that use celebrity footage and promise to double any cryptocurrency payment sent to the scammers.

“Deepfake has become a nightmare for women and society as a whole. Cybercriminals now exploit AI to swap victims’ faces into pornographic photos and videos, as well as in misinformation campaigns. These techniques aim to manipulate public opinion by disseminating false information, or even to damage an organisation’s or an individual’s reputation. We urge the public to be vigilant against this type of threat,” said Võ Dương Tú Diễm, Territory Manager for Vietnam at Kaspersky.

According to the identity verification company Regula, up to 37 per cent of organisations worldwide have encountered deepfake voice fraud, and 29 per cent have fallen victim to deepfake videos. Deepfake is also a rising cybersecurity threat in Việt Nam, where cybercriminals often use deepfake video calls to impersonate individuals and ask their relatives and friends for large, urgent loans. Because such a deepfake video call can be produced in as little as one minute, victims find it difficult to distinguish genuine calls from fake ones.

Although cybercriminals have abused AI for malicious purposes, individuals and organisations can still use AI themselves to detect deepfakes and minimise the risks.

Kaspersky shares solutions for users to protect themselves from deepfake scams. These include AI content detection software, which uses advanced algorithms to analyse media and determine whether an image, video or audio file has been manipulated, and AI-powered watermarking.

The AI-powered watermarking technique acts as a copyright mark protecting the author's AI creations. It adds a unique signature to images and can be a powerful weapon against deepfake products, as it helps trace the tool that generated the content, the company said.
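To illustrate the general idea only (not Kaspersky's or any specific vendor's implementation), the short Python sketch below hides a made-up provenance signature in the least-significant bits of an image's pixel values and reads it back; the signature string and function names are invented for the example.

# A toy sketch of an image-watermarking idea: embed a short provenance
# signature in the least-significant bits of pixel values and read it back.
# This is a simplified, hypothetical example, not any vendor's real method.
import numpy as np

SIGNATURE = b"GEN-BY-MODEL-X"  # hypothetical provenance tag, invented for this example

def embed_watermark(pixels: np.ndarray, signature: bytes) -> np.ndarray:
    """Overwrite the least-significant bit of the first pixels with the signature bits."""
    bits = np.unpackbits(np.frombuffer(signature, dtype=np.uint8))
    flat = pixels.flatten().copy()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, length: int) -> bytes:
    """Read `length` bytes of signature back out of the least-significant bits."""
    bits = pixels.flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes()

# Demo on a random 8-bit greyscale "image"
image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed_watermark(image, SIGNATURE)
assert extract_watermark(marked, len(SIGNATURE)) == SIGNATURE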

In addition, users should adopt solutions for content provenance tracing and video authentication.

Video authentication is a process that verifies that a video's content is authentic and unchanged from when it was generated. Some emerging techniques use a cryptographic algorithm to insert hashes at set intervals throughout a video; if the video is later manipulated, the hashes will no longer match, it added.
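As a rough illustration of how such interval hashing can work in general, the Python sketch below splits footage into fixed-size chunks, stores a SHA-256 hash for each chunk when the video is created, and later recomputes them to detect tampering; the chunk size and function names are assumptions made for the example, not a specific product's scheme.

# A toy sketch of interval-based hashing for video authentication: hash each
# fixed-size interval of the footage when it is created, then recompute and
# compare later. A general illustration only, not any product's real scheme.
import hashlib

CHUNK_SIZE = 1 << 20  # hash the stream in 1 MiB intervals (arbitrary assumption)

def hash_intervals(video_bytes: bytes, chunk_size: int = CHUNK_SIZE) -> list:
    """Return a SHA-256 digest for each fixed-size interval of the video data."""
    return [
        hashlib.sha256(video_bytes[i : i + chunk_size]).hexdigest()
        for i in range(0, len(video_bytes), chunk_size)
    ]

def verify(video_bytes: bytes, recorded_hashes: list) -> bool:
    """Recompute the interval hashes and compare them with those stored at creation time."""
    return hash_intervals(video_bytes) == recorded_hashes

# Any edit to the footage changes at least one interval hash.
original = bytes(3 * CHUNK_SIZE)          # stand-in for raw video data
reference = hash_intervals(original)      # stored when the video is generated
tampered = bytearray(original)
tampered[CHUNK_SIZE + 5] ^= 0xFF          # flip one byte in the second interval
assert verify(original, reference)
assert not verify(bytes(tampered), reference)

— VNS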
