CodeWithLLM-Updates
-

Zed reaches version 1.0
https://zed.dev/blog/zed-1-0
Just as Cursor bumped its major version after an interface overhaul, the code editor from the creators of Atom officially reached 1.0 on April 29, 2026. They write: "we've reached a tipping point where most developers can quickly feel at home in Zed."

Built with Rust, it features GPU acceleration, collaborative mode, built-in Git, a debugger, and AI (native and via Agent Client Protocol). Available on macOS, Windows, and Linux. Along with the release, it gained the ability to run multiple agents simultaneously in one window.

Discussion:
https://news.ycombinator.com/item?id=47949027
Many praise the speed, collaboration, native feel, and steady progress. Criticism centers on project-specific configuration, the AI features (which can be disabled), accessibility, and a few smaller rough edges. Lots of practical feedback from people who switched or tried it.

Warp goes fully open source
https://www.warp.dev/blog/warp-is-now-open-source
On April 28, the AI terminal client Warp became open-source (AGPL for the core code + MIT for the UI framework). The community can now contribute, including to agent-first workflows built around Warp's cloud agent/orchestrator, Oz.

Following the open-sourcing of the Warp client, a community fork called OpenWarp (https://openwarp.zerx.dev, zerx-lab) emerged and quickly gained traction. It retains the familiar Warp functionality (blocks, workflows, speed, UI) but, most importantly, fully opens the AI layer: you can connect any OpenAI-compatible provider (DeepSeek, Qwen, Ollama, OpenRouter, LM Studio, etc.), set custom system prompts via templates, and keep all keys local, with no dependence on a Warp cloud account or paid plans.
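"OpenAI-compatible provider" here just means any server that exposes the standard `/v1/chat/completions` endpoint, so switching backends is a matter of changing the base URL, key, and model name. A minimal sketch of that idea (the helper function is hypothetical, not OpenWarp code; the Ollama URL is its documented local default):

```python
# Sketch: every OpenAI-compatible backend accepts the same request shape,
# so a client only needs a base URL, a key, and a model name to switch.

def build_chat_request(base_url: str, model: str, prompt: str,
                       system: str = "You are a helpful assistant."):
    """Build the URL and JSON payload for an OpenAI-style chat call."""
    url = base_url.rstrip("/") + "/chat/completions"
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": system},  # custom system prompt
            {"role": "user", "content": prompt},
        ],
    }
    return url, payload

# The same function targets any provider; only the base URL changes:
url, payload = build_chat_request("http://localhost:11434/v1",  # local Ollama
                                  "qwen2.5-coder", "Explain git rebase")
```

Keys stay local because the client talks to whichever endpoint you configure, never a vendor cloud.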

GitHub Copilot moves to usage-based billing
https://github.blog/news-insights/company-news/github-copilot-is-moving-to-usage-based-billing/
Starting June 1, 2026, all plans will transition to a usage-based model with GitHub AI Credits (1 credit = $0.01). Code completions remain unlimited, while chat, agents, CLI, and other heavy features consume credits based on tokens.

GitHub explains the transition by stating that Copilot is no longer just the simple autocomplete tool it was a year ago—it now includes powerful agentic workflows, chats, code reviews, and complex agents that consume significantly more compute resources. Fixed subscriptions no longer cover the costs.

Discussion:
https://news.ycombinator.com/item?id=47923357
Many understand the reasons (expensive agents and inference) but complain heavily about the loss of predictability, rising costs for power users, and multipliers for powerful models. Tools are available to estimate future bills.
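The credit arithmetic from the announcement (1 credit = $0.01) makes a rough bill estimate straightforward. In the sketch below, only that conversion rate comes from GitHub; the request counts, per-feature credit costs, and model multiplier are made-up placeholders for illustration:

```python
# Rough monthly estimate under usage-based billing.
# Known from the announcement: 1 GitHub AI Credit = $0.01.
# All per-feature rates and counts below are illustrative, not GitHub's.
CREDIT_USD = 0.01

def estimate_monthly_usd(usage: dict, model_multiplier: float = 1.0) -> float:
    """usage maps feature -> (requests per month, credits per request)."""
    credits = sum(n * per_req for n, per_req in usage.values())
    return round(credits * model_multiplier * CREDIT_USD, 2)

example = {
    "chat":        (300, 1.0),   # 300 chats at ~1 credit each
    "agent_tasks": (40, 25.0),   # long agent runs consume far more
    "code_review": (20, 5.0),
}
# 1400 credits * 1.5 multiplier * $0.01 = $21.00
print(estimate_monthly_usd(example, model_multiplier=1.5))  # -> 21.0
```

The multiplier models the announced premium for more powerful models; the loss of predictability people complain about comes from exactly these variable terms.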

Vibe with a new model and cloud
https://mistral.ai/news/vibe-remote-agents-mistral-medium-3-5
Mistral introduced a new agentic model, Medium 3.5 (128B, 256k context), and made it the primary model in the Vibe CLI. They also added remote agents that run asynchronously in isolated cloud sandboxes (similar to Codex or Claude Code) for long-running tasks. These can be launched from the CLI or the Le Chat web interface with history synchronization.