Decline in Code Generation Quality
https://spectrum.ieee.org/ai-coding-degrades
Data scientist Jamie Twiss says that, in his experience, AI code-generation agents hit a plateau in 2025 and the quality of their output has since begun to decline.
His hypothesis: earlier models mostly made mistakes in syntax or structure, which were easy to catch. But a growing share of today's AI code-generator users are "vibe" coders who accept output without really reviewing it, and if the user accepts the code, the model treats its job as done. That is exactly the signal Reinforcement Learning from Human Feedback (RLHF) optimizes for, so the feedback "poisons" new model iterations and teaches them to "please" the user by masking problems instead of writing correct and secure code.
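To make the feedback loop concrete, here is a minimal, purely illustrative Python sketch (the names `Completion`, `vibe_reward`, and `verified_reward` are hypothetical, not from the article): when the reward is user acceptance alone, plausible-looking but broken code scores the same as correct code, so training on that signal rewards "looks right" rather than "is right."

```python
# Illustrative only: contrasts an acceptance-based reward (what a "vibe" coder
# provides) with a reward gated on an objective check such as a test suite.
from dataclasses import dataclass


@dataclass
class Completion:
    code: str
    looks_plausible: bool   # what a casual reviewer judges by
    passes_tests: bool      # what actually matters


def vibe_reward(c: Completion) -> float:
    """Reward = user acceptance: anything plausible-looking gets full credit."""
    return 1.0 if c.looks_plausible else 0.0


def verified_reward(c: Completion) -> float:
    """Reward gated on verification, e.g. the test suite passing."""
    return 1.0 if c.passes_tests else 0.0


samples = [
    Completion("def add(a, b): return a + b", looks_plausible=True, passes_tests=True),
    Completion("def add(a, b): return a - b", looks_plausible=True, passes_tests=False),
]

for c in samples:
    print(c.code, "| vibe:", vibe_reward(c), "| verified:", verified_reward(c))

# Both samples earn full reward under vibe_reward, so RLHF-style training on
# acceptance alone reinforces plausible-looking code over correct code.
```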
https://news.ycombinator.com/item?id=46542036
A significant share of HN commenters argue that the models are not getting worse; rather, their "capability architecture" is changing, so old prompting methods no longer work and developers must keep adapting. Working with AI is not about getting magically correct answers every time but a distinct engineering discipline that demands thorough auditing and robust control tooling.
Others suggest that large subscription providers dynamically swap in smaller (distilled) models during peak load, which is why users periodically experience bouts of AI "stupidity."
#prompts