An AI model named Mythos, developed by Anthropic, has drawn attention after the company described it as capable of detecting a wide range of serious software security vulnerabilities. Anthropic says the system is risky enough that it will not be released publicly; instead, the company is keeping it internal while working with governments and major technology companies to manage potential harm.
In documentation released by Anthropic, Mythos is described as identifying “thousands of critical vulnerabilities” across multiple platforms, including operating systems, browsers, and open-source software. The presentation of these findings contributed to the impression that AI is approaching the ability to function as a broad “super hacker,” with wide-ranging security implications.
Observers who reviewed the underlying data argue that the “thousands” figure is not a direct count of confirmed critical flaws. Instead, it is an extrapolation from a sample of 198 reports that were assessed manually: about 90% of those manual reviews agreed with Mythos’ assessments, and Anthropic used that agreement rate to project larger totals.
In separate experiments involving more than 7,000 open-source projects, Mythos reportedly found about 600 cases that could cause faults. Of those, only around 10 vulnerabilities were categorized as severe. Critics say this suggests progress compared with earlier versions, but still falls well short of the depiction of thousands of critical threats.
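A quick back-of-the-envelope check puts those figures in proportion. The sketch below uses only the numbers quoted above (7,000 projects, 600 flagged cases, 10 severe, a 198-report sample with ~90% agreement); the variable names are mine, and the extrapolation to “thousands” cannot be reproduced here because the total count of automated reports it was applied to is not given:

```python
# Figures as reported in the article; variable names are illustrative.
flagged_projects = 7000   # open-source projects scanned
fault_findings = 600      # cases Mythos reportedly flagged as fault-causing
severe_findings = 10      # findings categorized as severe

manual_sample = 198       # reports assessed manually
agreement_rate = 0.90     # share of manual reviews matching Mythos' output

# Shares implied by the reported experiment.
finding_rate = fault_findings / flagged_projects  # flagged faults per project scanned
severe_rate = severe_findings / fault_findings    # severe share of flagged findings
validated = manual_sample * agreement_rate        # manually confirmed reports in the sample

print(f"Flagged-fault rate: {finding_rate:.1%}")      # ~8.6% of projects
print(f"Severe share of findings: {severe_rate:.1%}")  # ~1.7% of findings
print(f"Confirmed in sample: {validated:.0f} of {manual_sample}")
```

On these numbers, severe issues make up under 2% of what Mythos flagged, which is the gap critics highlight between the results and the “thousands of critical vulnerabilities” framing.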
Some specific cases cited in the documentation also appear to lower the practical risk. For example, a 16-year-old FFmpeg bug identified by Mythos is described by Anthropic as difficult to exploit in practice and not considered high risk. Similarly, potential Linux kernel flaws flagged by the AI are described as not exploitable due to existing operating system protections.
Beyond the technical claims, the way Mythos’ capabilities were disclosed has raised questions about commercial motives. Anthropic positions itself as an AI safety-focused company and targets large enterprises and governments. Emphasizing severe risk could help support efforts to secure large, multi-billion-dollar contracts.
Tech leaders have weighed in on how AI risk is communicated. Nvidia CEO Jensen Huang suggested that overstating AI risks can function as a positioning strategy for companies that claim exclusive ability to develop AI safely. Anthropic CEO Dario Amodei has repeatedly warned about security threats and job displacement, reinforcing the narrative of danger.
Experts quoted in the coverage emphasize that current AI systems, including Mythos, are not sentient and cannot act autonomously as hackers. They describe such models as data-processing tools that can help detect flaws, but that do not automatically exploit vulnerabilities or carry out complex real-world attacks.
From this perspective, vulnerability discovery could be beneficial if used appropriately. Tools like Mythos may help developers and security professionals patch bugs faster rather than creating an immediate threat.
Overall, many experts characterize Mythos as reflecting both genuine progress in AI security capabilities and a marketing or public-relations component. They argue that while the underlying work may have merit, the “monster AI” framing is amplified by how the results are presented in public materials.