
Against the backdrop of a surge in Deepfake fraud, cybersecurity expert Ngo Minh Hieu (Hiếu PC) urged users not to rush to trust calls or videos featuring familiar faces and voices, and to verify proactively before transferring money or sharing sensitive information.
On May 12, 2026, the Digital Trust in Finance 2026 forum was held at The Ascott Hotel in Hanoi under the theme “Building Digital Trust in Finance in the AI Era.” The event was organized by the Digital Trust Alliance in collaboration with the Department of Cybersecurity and High-Tech Crime Prevention and Control, the National Information Security Association, and MoMo Online Mobile Services JSC, under the patronage of the Ministry of Public Security, the State Bank of Vietnam, and the Ministry of Finance.
At the forum, Ngo Minh Hieu warned that the rapid development of AI has led to a growing number of users falling victim to high-tech attacks. “What you see with your own eyes and what you hear with your ears is no longer an absolute truth,” he said.
Ngo Minh Hieu explained that AI can imitate faces, voices, expressions, and even recreate video contexts with high realism. He said the danger goes beyond direct loss of money or data, emphasizing that criminals often target victims’ trust and emotions.
“The most dangerous thing is not just the risk of losing money or data, but criminals attacking victims’ trust and confidence. Once that trust is gained, they can easily lure users into financial scam scenarios or crypto investments, or pressure them to surrender assets,” he emphasized.
He noted that creating Deepfake content is no longer complicated or expensive. With a portrait photo, criminals can use low-cost software or open-source tools to swap faces, imitate voices, and conduct fake video calls through platforms such as FaceTime, Zalo, WhatsApp, and others.
The expert also warned that users who share too many photos, videos, and personal details on social media may inadvertently provide data to attackers. Even users who actively protect their personal information can still be exposed through images or videos posted by friends or third parties.
He further pointed out that parents who frequently share images of their children online can also create usable data for criminals.
“Users should limit publicly sharing personal information and hide their friends lists to prevent criminals from exploiting real-life connections for attacks,” Ngo Minh Hieu recommended.
At the forum, Ngo Minh Hieu described several Deepfake scams that are currently common, including impersonating relatives to borrow money after taking over their social media accounts.
He said that after stealing or buying hacked Facebook accounts from data breach sources for only a few hundred thousand dong, criminals review Messenger messages to map victims’ relationships. They then use the account owner’s image and voice to conduct fake video calls to request money, send malicious links, or attempt to seize assets.
“Most victims are deceived because they believe they are speaking with a real relative. They see a familiar face, hear a familiar voice, and let their guard down,” he said.
Deepfake fraud also targets businesses through AI-powered online meetings. Ngo Minh Hieu cited a 2024 Hong Kong incident in which a financial employee was duped into a Zoom Deepfake meeting with fake leadership; after the meeting, the employee transferred $25 million to the scammers.
While Deepfake content can be difficult to detect, the expert said users can look for signs such as mismatched lip movements, unnatural eye contact, overly smooth skin, irregular lighting and shadows, or a distorted or inconsistent jaw.
However, he stressed that the most important step is not trying to determine whether the video is real or fake, but questioning whether the request is reasonable.
“Criminals often create urgency to make victims lose the ability to verify information. They repeatedly request transfers, OTP codes, or immediate actions,” he warned.
He proposed a “five-second rule”: pause for five seconds before responding to any unusual request, then call back using a saved number or verify through another trusted channel. “Just pausing for five seconds can help us avoid losing money, data, and trust,” he emphasized.
For businesses, the expert said transactions should not be approved solely via video calls. Large transfers require verification through multiple steps and multiple contact points to reduce the risk of AI-based deception.
He also recommended that companies regularly train staff on Deepfake, social engineering, and evolving cyber threats. In his view, “the biggest security weakness is always people.”
For KOLs, KOCs, or artists, Ngo Minh Hieu said they should be transparent when applying AI to their content and proactively warn the community if impersonation is detected online.