Stella Laurenzo, director of AMD’s AI group, said in a GitHub report that “Claude cannot be trusted to perform complex technical tasks,” citing months of observations from what she described as a stable, high-complexity environment. She added that “every senior engineer on my team reports similar experiences.”
Laurenzo and her team analyzed 6,852 Claude Code sessions, including 234,760 tool calls and 17,871 “thinking blocks.” The report highlights several behavioral changes that, in her view, indicate the system is not performing as reliably as before.
Stop-hook violations—described as signs that the AI avoids responsibility, stops thinking early, and repeatedly asks for permission rather than acting—rose from zero before March 8 to an average of 10 times per day by the end of the month.
Code reading before edits also declined sharply. Laurenzo’s team reported that the frequency with which Claude read through code before making changes fell from an average of 6.6 reads to just 2 by the end of March. She said this suggests the AI is editing entire files rather than making targeted changes.
In the same period, the report says Claude began rewriting whole files instead of patching only the necessary parts, which Laurenzo characterized as further evidence of “laziness.”
Laurenzo said the timing of the decline aligned with Anthropic’s introduction of “thinking content redaction,” a default setting in Claude Code 2.1.69 that hides the model’s thinking process from users. She argued that users therefore do not know what the model is actually doing as it contemplates a request, and she said the evidence points to a general decline in thinking since the feature was introduced.
For AMD, the consequences are described as immediate. Laurenzo said AMD’s AI compiler workflow is built around Claude Code as its single core tool, with more than 50 instances running simultaneously. She said a silent update disrupted that workflow.
“We have moved to another provider delivering higher-quality work, but Claude was once good for us, and we left this note hoping Anthropic would fix their product,” Laurenzo said.
She declined to disclose the new tool the team is using, citing confidentiality agreements.
Laurenzo said the issue is not unique to AMD, adding that other users on Reddit and GitHub have expressed similar concerns. She also referenced prior criticism of Anthropic, including a sudden jump in token usage that pushed users over limits, and the recent leak of Claude Code’s entire source code.
In her message to Anthropic, Laurenzo warned that AI coding work remains early in its development and said Anthropic risks losing its lead if the behavior described in the report continues.