Anthropic has published a detailed incident analysis confirming that Claude Code operated with degraded capability for 47 consecutive days, from March 4 to April 20. The company rejected claims that it had deliberately degraded the model, attributing the quality drop to three separate product-level faults that overlapped and produced degradation that even internal teams could not reproduce early in the investigation. All three faults were fully resolved as of April 20.
First fault (March 4): To reduce latency, Anthropic lowered Claude Code’s default reasoning level from “high” to “medium” on March 4. Latency improved for most tasks, but users noticed the drop in quality and complained. After feedback from many customers, Anthropic reversed the change on April 7, returning all users to the “high” reasoning level.
Second fault (March 26): A more severe issue began on March 26, following an update to memory caching. Claude retains reasoning history within a session so it can remember why earlier decisions were made. To reduce costs, Anthropic’s design was to delete older reasoning steps only once a session had been idle for more than an hour. A bug, however, caused the deletion to fire after every turn for the rest of the session, not just once. As a result, Claude “forgot” its prior reasoning, leading to forgetfulness, repetition, seemingly odd decisions, and faster exhaustion of usage quotas.
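Anthropic did not publish the faulty code, but the class of bug described (a one-shot cleanup that keeps firing) can be sketched in a few lines of Python. Everything here is hypothetical: the `Session` class, the one-hour threshold, and the "keep only the latest step" pruning policy are illustrative stand-ins, not Anthropic's implementation.

```python
IDLE_SECONDS = 3600  # intended: prune older reasoning only after an hour idle

class Session:
    def __init__(self, start_time):
        self.reasoning = []            # per-session reasoning history
        self.last_active = start_time
        self.prune_pending = False     # set once the idle threshold passes

    def record_turn(self, step, now):
        if now - self.last_active > IDLE_SECONDS:
            self.prune_pending = True  # an hour idle: schedule a one-time prune
        self.last_active = now
        self.reasoning.append(step)
        if self.prune_pending:
            # BUG: the flag is never cleared, so once set, older reasoning
            # is deleted after *every* turn for the rest of the session.
            self.reasoning = self.reasoning[-1:]
            # FIX: self.prune_pending = False

# Simulate one long idle gap, then several quick turns.
s = Session(start_time=0)
s.record_turn("plan refactor", now=0)
s.record_turn("read tests", now=4000)  # > 1h idle: pruning here is intended
s.record_turn("edit file", now=4010)   # but pruning fires again here
s.record_turn("run tests", now=4020)   # ...and here
print(s.reasoning)  # only the latest step ever survives
```

With the flag reset in place, only the second turn would prune; without it, the session retains a single reasoning step forever, matching the reported symptoms of forgetfulness and repetition.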
Third fault (April 16): On April 16, Anthropic added a directive intended to limit the length of system prompts. The team later found that the directive actually increased verbosity. Internal tests initially showed no degradation, but broader testing identified a 3% drop in code quality. The change was reversed on April 20.
Anthropic said the faults were hard to identify because they affected different user cohorts on different schedules. The resulting quality decline was broad, inconsistent, and appeared random. The company began investigating in early March, but early signals were difficult to separate from normal user variability, and internal systems did not reproduce the issues at first.
The analysis also notes that automated code tests run against the latest version surfaced problems that older versions did not, underscoring that the faults’ behavior depended on specific product states and that testing tooling must be kept current.
Anthropic said all three faults have been fully resolved since April 20. The company also pledged several operational improvements: having more internal staff use the public Claude Code builds, improving its code-checking tools, and rolling the updated version out to customers. It also cited tighter control of system prompts, broader evaluation of changes, and new auditing tools.
As of April 23, Anthropic raised usage limits for all registered users as compensation.

The “golden era” of premium gym chains is ending, or already in decline, as rising operating costs collide with shifting consumer preferences toward more flexible, community-based ways to exercise. Long-term memberships are shrinking, margins are squeezed by higher rents and facility costs, and competition from smaller, more personalized…