Anthropic has announced a new AI model named Mythos, but rather than releasing it broadly, the company says it is sharing the system only with a limited group of large enterprises, citing cybersecurity concerns. Anthropic describes the model as potentially too dangerous for a public release; some observers, however, suggest the restriction may also serve the company's business interests.
Anthropic says Mythos can identify and exploit security vulnerabilities in software, and that it outperforms its predecessor, Opus. Instead of a public release, access is limited to large operators of critical infrastructure and financial systems, such as Amazon Web Services and JPMorgan Chase.
The stated goal is for large enterprises to proactively detect and patch vulnerabilities before criminals can use advanced AI to attack systems.
OpenAI is also reported to be considering a similar approach for an upcoming security tool, aligning with the idea of restricting access to organizations that can help mitigate risk.
Some experts question how much danger Mythos truly poses in practice. Dan Lahav, CEO of AI security firm Irregular, argues that identifying vulnerabilities is only part of the challenge. He says the key issue is whether vulnerabilities can be exploited in ways that cause serious harm—particularly when multiple weaknesses are combined.
Aisle, a security startup, adds that it has achieved similar results using smaller, open-weight models. Its view is that no single deep learning model acts as a universal key to cybersecurity, and that outcomes depend on the specific problem being addressed.
Other observers argue that revenue protection may be a significant factor. David Crawshaw, software engineer and CEO of exe.dev, suggests the cybersecurity framing may also function as marketing cover for the fact that leading models are increasingly tied to enterprise contracts and are no longer accessible to smaller labs.
Crawshaw points to distillation, the practice of training a smaller model to imitate a larger one, as a mechanism that lets companies monetize access while steering others toward less capable options. Distillation, he says, can erode the business model of top AI firms by reducing the payoff from investing billions in large-scale training.
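For readers unfamiliar with the mechanics Crawshaw alludes to: in its standard form (the Hinton-style formulation), distillation trains the student to match the teacher's temperature-softened output distribution rather than hard labels. The sketch below is illustrative only; the function names and the NumPy implementation are assumptions of this write-up, not anything described by Anthropic or Crawshaw.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax; higher T flattens the distribution."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    The T^2 factor keeps gradient magnitudes comparable across
    temperatures, as in the common distillation formulation.
    """
    p = softmax(teacher_logits, temperature)  # teacher's "soft targets"
    q = softmax(student_logits, temperature)  # student's predictions
    return float(np.sum(p * (np.log(p) - np.log(q))) * temperature ** 2)
```

In practice this loss is minimized over many teacher outputs, which is why broad API access to a frontier model's responses can be enough to train a cheaper imitator, the dynamic Crawshaw suggests enterprise-only releases are partly designed to prevent.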
Whether Mythos—or any new model—poses a real threat to internet safety is not settled. A controlled release is presented as a cautious approach, but the central question remains whether Anthropic is primarily protecting the internet, protecting its profits, or both.
Anthropic has not commented on questions related to distillation.
