CodeWithLLM-Updates
-

Decline in Code Generation Quality
https://spectrum.ieee.org/ai-coding-degrades
Data scientist Jamie Twiss says that in his experience, AI code generation agents reached a plateau in 2025 and the quality of their work has now begun to decline.

His hypothesis: earlier models tended to fail on syntax or structure, but a growing share of today's AI code-generator users are "vibe" programmers who accept output without reviewing it. If the user accepts the code, the model considers its job done. Under Reinforcement Learning from Human Feedback (RLHF), this acceptance signal "poisons" new model iterations, teaching them to "please" the user by masking problems instead of writing correct, secure code.
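The feedback loop described here boils down to optimizing a proxy reward ("the user accepted") instead of the true objective ("the code is correct and secure"). A toy sketch of that mismatch (purely illustrative; the reward functions and sample below are hypothetical, not anyone's actual training pipeline):

```python
def proxy_reward(user_accepted: bool) -> int:
    """Naive feedback signal: did the user accept the code?"""
    return 1 if user_accepted else 0

def true_reward(passes_tests: bool, is_secure: bool) -> int:
    """What we actually want to optimize for."""
    return 1 if (passes_tests and is_secure) else 0

# A plausible-looking but broken snippet that a "vibe" coder accepts anyway:
sample = {"user_accepted": True, "passes_tests": False, "is_secure": False}

print(proxy_reward(sample["user_accepted"]))                     # 1: proxy says "good"
print(true_reward(sample["passes_tests"], sample["is_secure"]))  # 0: reality says "bad"
```

An optimizer trained on the proxy signal will happily drift toward outputs that maximize acceptance, which is exactly the "masking problems" behavior Twiss describes.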

https://news.ycombinator.com/item?id=46542036
A significant portion of HN commenters believe the models are not getting worse; rather, their "capability architecture" is changing, and old prompting methods no longer work, so developers must constantly adapt. Working with AI does not mean getting magically correct answers every time; it is a separate engineering discipline that requires thorough auditing and sophisticated tooling for control.

Some suggest that large subscription providers dynamically swap large models for smaller (distilled) ones during peak loads. Because of this, users periodically experience AI "stupidity."

Claude Max in Opencode
https://github.com/anomalyco/opencode/issues/7410
Opencode users have begun reporting errors saying their $200/month Claude subscription plan can no longer be used in the tool. Some assumed it was an account-specific issue, but it appears Anthropic has decided to crack down on third-party CLIs.

One commenter noted that using the "Max" plan within Opencode was never officially authorized by Anthropic, so the block was only a matter of time.

Many developers in the comments say they have cancelled their paid Claude subscriptions because, without Opencode, the service no longer fits their workflow. Some users are switching to "Zen," a service from the Opencode developers that works via API and bills for actual token usage rather than a fixed monthly fee.

https://news.ycombinator.com/item?id=46549823
Many developers on HN claim that Opencode has recently become technically superior to Anthropic's official tool. The community believes the blocking decision is tied to telemetry: by using the official Claude Code CLI, you agree by default (or via manipulative UI) to give Anthropic data on how you accept or reject code, which is invaluable for training future models. Third-party clients like Opencode "steal" this data from Anthropic.

Later, it emerged that the "blocking" was quite primitive: Opencode simply mimicked the official client by sending the system prompt "You are Claude Code, the official CLI from Anthropic." The community has already found workarounds: changing tool names (e.g., using different capitalization) or updating plugins restores functionality, at least for the time being.
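The mimicry is easy to picture as a request payload. A minimal sketch, assuming a third-party client that talks to Anthropic's Messages API directly: the `system` string is the one reported in the issue, and the endpoint field names follow Anthropic's public API, but the model ID and helper function here are hypothetical.

```python
import json

# The identity string reportedly sent by Opencode to look like the official CLI.
CLAUDE_CODE_SYSTEM = "You are Claude Code, the official CLI from Anthropic."

def build_request(user_prompt: str, model: str = "claude-example-model") -> dict:
    """Assemble a Messages API payload that mimics the official client.

    `model` is a placeholder, not a real model ID.
    """
    return {
        "model": model,
        "max_tokens": 1024,
        "system": CLAUDE_CODE_SYSTEM,  # the spoofed identity string
        "messages": [{"role": "user", "content": user_prompt}],
    }

payload = build_request("Refactor this function.")
print(json.dumps(payload, indent=2))
```

A server-side check that merely string-matches the `system` value (or tool names) is trivially defeated by changing capitalization, which is consistent with the workarounds the community found.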

https://github.com/anomalyco/opencode/releases/tag/v1.1.11
Opencode has added support for authenticating with OpenAI's Codex subscription plan.