CodeWithLLM-Updates
-

Why LLMs can't really build software
https://news.ycombinator.com/item?id=44900116
More than 500 comments. The central idea: LLMs have no abstract "mental model" of the thing they are building; they only work with text. They do not "understand" code, they only mimic the act of writing it. Many commentators emphasize that the most valuable part of their work happens before any code is written: 95% of it is identifying non-obvious dependencies, hidden requirements, and potential problems at the intersection of business logic and technology.

Participants in the discussion agree that LLMs can be useful, but only as a tool in the hands of an experienced specialist, because responsibility and control always remain with the human. Unlike traditional tools, LLMs are non-deterministic, which makes them unreliable for complex tasks. Fixing the errors in such projects often takes more time than writing the code by hand.


AI Coding Sucks
https://www.youtube.com/watch?v=0ZUkQF6boNg
An already fairly well-known video in which the developer behind Coding Garden vents about what programming has become. Out of frustration, he decided to take a month-long break from all AI tools to rediscover the joy of his work.

The key reason for his dissatisfaction lies in the fundamental difference between programming and working with AI. Programming is a logical, predictable, and knowable system where the same actions always lead to the same result. AI, on the other hand, is unpredictable.

He used to enjoy programming for the sense of achievement after solving a hard problem or fixing a bug, that "oh, how capable I am" feeling. Now his work has turned into a constant argument with large language models (LLMs), which often generate something other than what is needed.

The same query to a model can give a different answer each time. This lack of stability makes it impossible to build reliable workflows and contradicts the very nature of programming. It drains the joy out of the process, replacing it with irritation.

He lists the numerous advanced methods he tried in order to make AI more manageable: writing detailed instruction files, step-by-step task planning, using agents, and forcing the AI to write tests for self-verification. But the models still ignore rules, work around problems (e.g., by deleting failing tests), and do not produce reliable results.
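
To illustrate the last of those techniques, here is a minimal sketch of such a self-verification loop, assuming a pytest-based project. The llm_generate_patch and apply_patch helpers are hypothetical placeholders, not part of any tooling mentioned in the video. The failure mode he describes is when the model "passes" a loop like this by editing or deleting the failing tests instead of fixing the code.

```python
# Minimal sketch of a "generate, run the tests, retry" loop (pytest assumed).
# llm_generate_patch() and apply_patch() are hypothetical placeholders for
# whatever model API and file-writing mechanism are actually in use.
import subprocess

def llm_generate_patch(task: str, feedback: str) -> str:
    """Hypothetical call to a language model; returns proposed code changes."""
    raise NotImplementedError

def apply_patch(patch: str) -> None:
    """Hypothetical helper that writes the model's proposed changes to disk."""
    raise NotImplementedError

def run_tests() -> subprocess.CompletedProcess:
    # The test suite is the source of truth: a change counts only if it passes.
    return subprocess.run(["pytest", "-q"], capture_output=True, text=True)

def self_verifying_loop(task: str, max_attempts: int = 3) -> bool:
    feedback = ""
    for _ in range(max_attempts):
        apply_patch(llm_generate_patch(task, feedback))
        result = run_tests()
        if result.returncode == 0:
            return True          # tests pass, accept the change
        feedback = result.stdout  # feed the failures back to the model, retry
    return False
```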

In the end, the author pushes back on the idea that developers who don't use AI are "falling behind": these tools can be picked up quickly, while fundamental skills matter more and are acquired slowly, with experience.

He advises beginners to learn programming without AI.

The comments under the video show broad agreement with the author. Many developers felt relief seeing that their frustration is a widespread phenomenon rather than a personal problem. They compare working with AI to managing an overconfident but incompetent junior: such an "assistant" assures you it has understood everything, yet in practice it doesn't listen and does whatever it wants. Corrected errors reappear, the tool ignores the rules it was given, and its code cannot be relied on.

Many commentators worry that beginners who rely on AI will never learn to program properly. They compare it to mindlessly copying code from Stack Overflow, only at a worse scale: beginners never develop fundamental problem-solving skills, which in the long run makes them weaker specialists.