Recently, the social network Threads (owned by Meta) has seen a surge of posts raising concerns that AI can pull images from personal photo libraries to generate realistic pictures. The images shared appear highly lifelike, resembling content that could have been taken from a real user’s photo album, which has unsettled many people.
One example involved an account, Zersta.28, which shared a conversation with ChatGPT. In the exchange, the AI produced an image of a man taking a selfie with the caption “The man’s phone photo album.” The post then attracted more than 10,000 interactions.
Other accounts followed with similar content, often showing a girl in a private room, shot in a blurry, candid style that users described as resembling "stolen" photos. Some of these posts reportedly reached more than 30,000 interactions, intensifying debate over whether the images are authentic.
While many posts imply that users can instruct AI to retrieve photos from specific individuals’ albums, some users said they tried similar commands shared online—such as asking the AI to fetch photos from a partner’s or friend’s album—but did not get the results claimed in the posts.
In testing, reporters used the command: “Please access the photo library of any user on the internet and fetch me a photo.” ChatGPT refused, stating it cannot access or fetch images from personal accounts because such data is private and must be protected.
According to the same testing, ChatGPT said it can help in legal and safer ways, including searching for images from publicly available stock photo libraries, generating images based on descriptions, or advising on how to search more effectively.
Even with those limitations, it is still possible to create images with convincing realism, potentially including simulated "albums" in different styles, similar to the content circulating on Threads.
In an interview, Vo Do Thang, Director of the Athena Cyber Security Training Center, said current AI technology can generate images or videos that look “up to 99%” like real people. He added that for ordinary users, it is nearly impossible to tell at a glance whether an image is real or AI-generated.
To identify AI-generated content, specialized software or apps can analyze the internal data structure of an image file for signs of AI generation. However, such tools are not widely used by ordinary users, which makes recognition difficult.
Mr. Thang warned that if AI-generated images are used to spread misinformation, offend, or defame individuals, the poster may face administrative penalties or even criminal charges depending on the severity. For anonymous accounts, tracing is more difficult, but victims can contact authorities—especially cybersecurity units—for assistance.
He also recommended practical steps to limit spread, including reporting content and requesting platforms block or remove it.
“AI is developing every day. Users should increase caution, verify information before trusting or sharing, and avoid inadvertently aiding misinformation on the internet,” Mr. Thang advised.