On May 12, cybersecurity expert Ngô Minh Hiếu (Hiếu PC) took part in a panel discussion titled Digital Trust in Finance 2026 at a forum in Hanoi focused on building digital financial trust in the AI era. He said AI is bringing major benefits to daily life, from entertainment and science to technology, but that this convenience comes with a risk to personal data.
Hiếu PC noted that many users upload highly sensitive information to AI chatbots, including personal and family photos and internal corporate documents. He said that more than 90% of users likely never read the terms of service of these platforms, adding that the documents are often too long, difficult to understand, and frequently written entirely in English.
The expert stressed that society is increasingly dependent on foreign technology platforms. He said the “unwritten rule” of the digital world is that when a service is completely free, user data is monetized. According to Hiếu PC, information provided by users can be collected and used to train AI models, making them smarter over time.
He also warned that the widespread habit of using free tools is being exploited by cybercriminals. He described how hackers have been releasing free AI-powered deepfake applications that transform real photos into styles such as cartoons or a "Hong Kong style" look to attract public curiosity. In these cases, users may grant access to their entire photo libraries and personal information without fully considering the consequences.
Hiếu PC said turning AI into a safer tool starts with slowing down and being more cautious. He advised users not to upload personal information or private photos to chatbots or unfamiliar applications, and to ask: who is behind the application, and why is it offered for free?
Beyond vigilance, he pointed to specific technical settings that can help protect privacy on AI platforms.
Hiếu PC highlighted what he called a small but dangerous vulnerability: very few people enable two-factor authentication for AI chatbot apps such as ChatGPT or Gemini. He said the consequences of weak security can be severe, because one of the fastest ways for hackers to compromise an individual is to breach an AI account that the person uses frequently.
After gaining access, he said attackers may issue a simple command such as: “tell me everything you know about me.” Because AI can retain the full history of interactions, it can summarize and extract a detailed profile of the account holder. He urged users to test this themselves to understand how much information AI can reveal.