
An IT developer from the Philippines described a situation that has resonated with software engineers: he said he was fired after four months of work building a project end to end, from the user interface to the database. According to his account, a private meeting ended with his dismissal after his manager discovered Claude Code, an AI programming tool, and used it to rewrite the entire project in a matter of days.
In his account, the developer said the manager questioned why he had “wasted so much time.” The developer responded that, in his experience, AI can produce misleading answers as requirements grow more complex. He said the manager did not accept this explanation and instead put his trust in the AI tool he had just discovered.
The post included images of the project and prompted widespread reactions from the technology community. Many commenters suggested the manager’s approach would eventually fail, arguing that AI-generated code can appear functional at first but break down when requirements grow more complex.
One user wrote that the developer should “just wait a few months,” adding that the manager’s system could collapse due to overreliance on AI-generated code. Another commenter advised the developer not to sever ties, saying they expected the manager to return when problems emerged. The commenter also said they had tested AI-generated code and found it worked until requirements became complex, after which issues “continuously” appeared.
Other programmers drew parallels to their experience with machine translation, where they said a high share of outputs must be redone from scratch. One industry participant claimed that in about 98% of cases the work has to be redone because the AI translates incorrectly, and noted that because such work is framed as “editing,” pay can be only a fraction of the original job, even though it still requires most of the original effort to fix.
The developer also said that additional details he shared afterward raised further concerns about how the manager handled AI-written code. He described the manager making edits directly on live systems without a test environment, without a code repository, and without deployment processes, a setup he likened to “a ticking time bomb.”
Veteran programmers responding to the post characterized the situation as a warning sign for software sustainability, pointing to the absence of version control, testing, and automated deployment—core elements they said are typically needed for reliable development and maintenance.
Commenters largely urged the developer to maintain contact rather than cut ties. Several suggested connecting with the manager on LinkedIn and waiting for the manager to encounter problems maintaining the AI-generated code.
One highly up-voted comment suggested that after four months—when the manager can no longer maintain the “messy code created by AI”—the manager may reach out, at which point the developer should request a higher salary. Another commenter said the salary request should be even higher if the developer ends up cleaning up AI-generated code, arguing that remediation is often less “exciting” than building correctly from the start.
The story was presented by commenters as part of a wider trend in the tech industry: some managers, they said, may lack technical depth and therefore place blind trust in AI tools while underestimating the limits of AI outputs and the fundamental requirements of software engineering.
In closing, the developer thanked the community and said it was reassuring to learn that the industry’s attitude toward AI-assisted coding was not as bleak as he had feared. He added that he would return to “hard work,” and noted that he had also purchased a Claude package.