LLMs Won't Replace Programmers
https://zed.dev/blog/why-llms-cant-build-software
A provocative post by Konrad Irwin from ZED. He says that LLMs are quite good at generating code, updating it when problems are found, and running tests. But human engineers don't work like that: they build mental models of what the code does and what it should do - then perform updates based on that.
AI agents do not create a mental model of the project and simply generate code that resembles the best code they have learned - which is why they get confused, lose context, invent unnecessary details, assume that the code they wrote works, cannot decide what to do when something doesn't work, and can simply delete everything and start over without a deeper understanding of the problem.
The author doubts this can be fixed. LLMs are suitable for simple tasks where requirements are clear, and where they can perform it "once" and run a test. A human engineer remains the "driver" to ensure the code truly does what is required of it.
Discussion
https://news.ycombinator.com/item?id=44900116
Comments show very polarized views, but most support the author's main thesis. LLMs focus on textual patterns, not deep understanding; they can "hack" tests to pass them instead of fixing real problems.
Some believe that well-tuned LLMs can perform at the level of or even better than a junior developer for certain specific, simple, isolated tasks.
Many people complain that using LLMs in IT requires constant and thorough supervision of the agent – extremely detailed "guardrails". The industry is now looking at skills in problem formulation and high-level circumvention of LLM shortcomings. This "manual work" is more tiring than classically writing code oneself.
There is a strong split in opinions on whether LLM technology continues to develop rapidly or has reached a plateau and requires new architectural breakthroughs.
#zed #autonomousagents