Experience integrating AI into production development workflows
https://www.sanity.io/blog/first-attempt-will-be-95-garbage
The author used Cursor for code generation for 18 months, then switched to Claude Code six weeks ago; the transition took only a few hours to learn how to work with it.
The author's mental model: AI is a junior developer who never learns. Claude Code costs can amount to a significant fraction of an engineer's monthly salary ($1,000-1,500 per month).
Conclusions:
- Rule of three attempts: forget about perfect code on the first try; 95% of it will be garbage, but that first pass helps the agent identify the real requirements and constraints. The second attempt might yield 50% usable code, and by the third the agent will most likely produce something that can be iterated on and improved.
- Retain knowledge between sessions: keep CLAUDE.md updated with architectural decisions, typical code patterns, "gotchas", and links to documentation, and configure MCP to pull data from Linear, Notion/Canvas, Git/GitHub history, and other sources (see the sketches after this list).
- Teams and AI agents: the author's team uses Linear. Never run multiple agents on the same problem space, and explicitly mark correct, human-edited code.
- Code review: agents --> agent overseers --> team. Always review the output, especially complex state management and performance- or security-critical sections.
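A minimal sketch of what such a CLAUDE.md might contain; the project details below are invented placeholders, not the author's actual file:

```markdown
# CLAUDE.md

## Architecture
- Next.js app router; all data access goes through src/lib/db.ts, never directly from components.

## Patterns
- Return the Result<T, E> helper from src/lib/result.ts instead of throwing in service code.

## Gotchas
- The staging database truncates timestamps to seconds; don't write tests that compare milliseconds.

## Docs
- Internal API reference: https://example.com/docs/api (placeholder link)
```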
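And a hedged sketch of the MCP side, using the project-level `.mcp.json` file that Claude Code reads; the `mcpServers` structure is the real schema, but the server package names here are placeholders to be replaced with the actual Linear and GitHub MCP servers:

```json
{
  "mcpServers": {
    "linear": {
      "command": "npx",
      "args": ["-y", "example-linear-mcp-server"],
      "env": { "LINEAR_API_KEY": "..." }
    },
    "github": {
      "command": "npx",
      "args": ["-y", "example-github-mcp-server"],
      "env": { "GITHUB_TOKEN": "..." }
    }
  }
}
```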
The main problem today is that AI does not learn from its mistakes. The workaround: better documentation and clearer instructions.
The author emphasizes that giving up "ownership" of the code (since AI wrote it) leads to more objective reviews, faster removal of failed solutions, and less ego during refactoring.
Discussion on HN
https://news.ycombinator.com/item?id=45107962
LLMs have already become a standard tool for engineers. Commenters confirm that several iterations (three or more) are typically needed to reach an acceptable result. LLMs do not produce good code on their own unless the task is extremely simple or templated, and LLM-written code always requires careful review and editing.
There is concern that overreliance on LLMs may erode a developer's own thinking skills. If a junior developer simply vibe-codes with an LLM without deep understanding, it undermines the trust of senior colleagues.
Some users add "critic" agents (LLMs prompted to hunt for errors) that review code written by the main agent; a sketch of the pattern is below. Breaking complex tasks into small, manageable parts is key to success, and TDD works very well with LLMs.
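A minimal sketch of that critic-agent pattern, assuming the Anthropic Python SDK; the model name, prompt, and review criteria are illustrative placeholders, not any specific commenter's setup:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

CRITIC_PROMPT = """You are a code reviewer. Find bugs, race conditions,
and security issues in the code below. Reply with a numbered list of
concrete problems, or 'LGTM' if you find none.

{code}"""

def critique(code: str, model: str = "claude-sonnet-4-20250514") -> str:
    """Run a second 'critic' pass over code produced by the main agent."""
    response = client.messages.create(
        model=model,
        max_tokens=1024,
        messages=[{"role": "user", "content": CRITIC_PROMPT.format(code=code)}],
    )
    return response.content[0].text

# Usage: pipe the main agent's output through the critic before human review.
# issues = critique(generated_code)
```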
#claudecode