CodeWithLLM-Updates
-

Since February 19th, Cursor has been gradually rolling out version 0.46, which radically (and controversially) changes the AI interaction interface.

Initially, Cursor only had a Chat feature where users could communicate and receive code modification suggestions—requiring users to press an "apply" button. Later, a separate Composer was introduced, capable of simultaneously working with multiple files and suggesting edits. The Composer also introduced an agent mode, where changes are applied first and then a diff is displayed for review, meaning changes are accepted by default.

Over recent months, all three modes found their niches in my workflow, and I believe many Cursor users similarly chose which mode to use based on their specific needs.

🚨 Now they've decided to merge Chat and Composer into a single panel window with a simple dropdown list offering three options (with hotkeys available for only two).

Chat -> Ask
Composer (normal) -> Edit
Composer (agent) -> Agent

The only advantage (available in beta features in settings) is that they now share context. Everything else feels less convenient so far. Particularly frustrating is that you can no longer bind specific models to specific modes—I previously used slower thinking models in Chat and Claude for Composer.

I don't fully understand why this simplification was implemented. They claim it's to avoid confusion, but I never experienced any confusion — I clearly understood when to use each mode. Previously, everyone copied Cursor as the industry leader, but now they seem to be following Windsurf's / Cline's approach.

They've also given the agent the ability to search the internet and use MCP.

Another peculiarity of the Cursor website is the inability to download older versions. I found the cursor-ai-downloads repository, where someone has been collecting all download links since version 0.36.2, for which I'm very grateful! After installation, you'll need to disable the update service (see instructions for your OS).

https://poe.com/blog/introducing-poe-apps

Beyond the Chat Interface

Poe, known for its universal chat interface for interacting with AI, has announced the launch of a new product: Poe Apps.

It is now possible to create applications without writing code, thanks to App Creator, a Canvas-based chat tool built on top of the recently released Claude 3.7 Sonnet model and its code_edit_tool. Developers also retain full control over the HTML and JavaScript.

The company showcased several demo apps, including Chibify for transforming photos into 3D anime, MagicErase for removing unwanted objects from images, and MemeGenBattle for comparing memes generated by different language models.

Available at a reduced early access price for a limited time.

My testing revealed that the generated app tries to retrieve data from LLMs in JSON format and then embed it into the HTML interface, but language models don't always return valid JSON, which is a weak point.
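
This is a common failure mode for any app that relies on structured LLM output. The usual workaround is defensive parsing: strip any code fences, try a normal parse, and fall back to grabbing the first JSON-looking span. A minimal sketch of that pattern in Python (a Poe App would do the equivalent in JavaScript):

```python
import json
import re

def parse_llm_json(text: str):
    """Best-effort extraction of a JSON object from an LLM reply."""
    # Models often wrap JSON in markdown code fences; strip any fence markers.
    cleaned = re.sub(r"`{3,}[a-zA-Z]*", "", text).strip()
    try:
        return json.loads(cleaned)
    except json.JSONDecodeError:
        pass
    # Fall back to the first {...} span found anywhere in the reply.
    match = re.search(r"\{.*\}", cleaned, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            pass
    return None  # caller should retry the request or show an error

reply = 'Here is the data: {"title": "Demo", "items": 3}'
print(parse_llm_json(reply))  # {'title': 'Demo', 'items': 3}
```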

https://www.anthropic.com/news/claude-3-7-sonnet

Anthropic Claude Updates

Claude Code is a tool that lets developers delegate coding tasks to Claude directly from the terminal.

Currently available only as a limited preview, with a waitlist.

https://www.youtube.com/watch?v=AJpK3YTTKZ4

Claude 3.7 Sonnet is the first hybrid model that combines regular responses and reasoning: there are standard and extended thinking modes, and users can control the reasoning token budget via the API.
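
For reference, controlling that budget through the Anthropic Python SDK looks roughly like this (a sketch based on the documented thinking parameter; the model ID and token numbers are just examples, so check the current docs):

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",
    max_tokens=4096,
    # Extended thinking: the model reasons internally before answering;
    # budget_tokens caps how many tokens it may spend on that reasoning.
    thinking={"type": "enabled", "budget_tokens": 2048},
    messages=[{"role": "user", "content": "Refactor this recursive function to be iterative."}],
)

for block in response.content:
    if block.type == "thinking":
        print("[thinking]", block.thinking[:200], "...")
    elif block.type == "text":
        print(block.text)
```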

It outperforms other models, including the combination of DeepSeek R1 with Claude 3.5 Sonnet, on benchmarks of real-world programming tasks.

Cursor has already added both the regular and thinking variants. GitHub Copilot has updates as well.

A new model has been released by Elon Musk's xAI: Grok 3. According to their benchmarks, it's currently the best-performing LLM available. For now it's free to use on the website http://grok.com/, but there's no API access yet, so none of our plugins or IDEs can add it. By the way, Grok 2 is available in Cursor.

Trae (a Chinese Cursor clone) has finally released a Windows build. It's completely free, but only offers two models: GPT-4o and Claude-3.5-Sonnet. It has a built-in Webview. There are no checkpoints, no MCP, and, of course, no opt-out for tracking.

Cline has been updated to version 3.4.
In addition to improvements to the checkpoint interface and additional context mentions for git and the terminal, there's now an MCP Marketplace catalog where you can install the tools you need with a single click.

https://github.com/srikanth235/openastra

OpenAstra
- a chat-based, open-source development platform for API discovery and testing
(the project is in the alpha stage).

Instead of traditional tools (Postman / Insomnia / Yaak), it offers interaction with APIs through a chat interface (like ChatGPT). The author believes this simplifies understanding and testing.

The chat supports various LLMs, including GPT-4, Claude, and Llama, and allows the use of OpenAI, Azure OpenAI, or other compatible APIs.
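
The "compatible APIs" part is the usual OpenAI-style endpoint convention: any backend that exposes a /v1/chat/completions route can be plugged in by changing the base URL. The snippet below is not OpenAstra's own configuration, just the general pattern it relies on (the URL, key, and model name are placeholders):

```python
from openai import OpenAI

# Point the standard OpenAI client at any OpenAI-compatible backend,
# e.g. a local server or a hosted proxy (URL and model are placeholders).
client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="not-needed-for-local-backends",
)

reply = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "List the endpoints of the Petstore API."}],
)
print(reply.choices[0].message.content)
```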

Thanks to Docker, deploying OpenAstra can be done in minutes.

I wrote yesterday that Windsurf added MCP (Model Context Protocol).

The ability to use MCP is also available in Cursor starting from version 0.45. Here's a video on how to set it up:

https://www.youtube.com/watch?v=brhs5DogIf8

The video demonstrates integration with Docker and retrieving video data from YouTube.

Here are some of the MCP tools that you can find at https://mcpservers.org/:

  • Web scraping (Puppeteer, Brave Search, Fetch, Jina Reader)
  • Communication (Slack, Bluesky)
  • Productivity (Notion, Apple Shortcuts)
  • Development (GitHub, GitLab, GitTools, Phabricator, Obsidian)
  • Databases (PostgreSQL, Sqlite, MySQL, BigQuery, Mongo, MongoDB)
  • Cloud services (Cloudflare, OpenAI, Kagi, Exa, HuggingFace Spaces)
  • File system (Google Drive, Cloud Storage, Secure file operations)
  • Version control (Version Control)
  • Docker, Kubernetes
  • Other (Sentry, Memory, Google Maps, Cloudflare, CoinCap, MetoroQuery, Windows CLI, Playwright, Google Search Console, Pandoc, Data Exploration, any-chat-completions-mcp, Minima)
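
Beyond these ready-made servers, a custom MCP tool can be quite small. Here is a minimal sketch using the FastMCP helper from the official MCP Python SDK (the folder-size tool is a made-up example, and how you register the server depends on your editor's MCP settings):

```python
# pip install mcp
import os

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("disk-usage")  # server name shown to the MCP client

@mcp.tool()
def folder_size(path: str) -> str:
    """Return the total size of a folder in megabytes."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # skip unreadable or vanished files
    return f"{total / 1_000_000:.1f} MB"

if __name__ == "__main__":
    mcp.run()  # serves over stdio, which is what editor clients typically expect
```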

Cursor's journey began by essentially cloning GitHub Copilot's in-editor code completion. They initially called it Copilot++ and steadily improved it, focusing on its ability to understand multiple lines of code.

By mid-2024, the feature was rebranded as Cursor Tab (a nod to its activation via the Tab key 😉), reflecting gains in speed and accuracy. And just a month ago, they announced a new model called Fusion, which improves support for large files.

Now, competitors are scrambling to catch up.

On February 13th, Zed announced a beta of Edit Prediction, based on their open-source Zeta model.

And Windsurf announced Wave 3, which, in addition to adding MCP (Model Context Protocol) support, also improved their in-editor autocomplete model with a feature called Tab-to-jump.

And GitHub Copilot itself doesn't want to be left behind. Currently, the "Next Edit Suggestion" feature is in preview — it needs to be activated in the settings (video).

https://supabase.com/

Supabase is an open-source Firebase alternative with a SQL database. They showcased an example of using their service with bolt.new on their YouTube channel:

https://www.youtube.com/watch?v=GFxOwNiioT0

Lovable also announced improved integration with Supabase:

  • Fewer Supabase-related errors
  • Automatically reads edge function logs
  • Customizable signup/login flows

In MS VS Code 1.97
Copilot is now built into the program rather than installed as an extension. Voice input is still a separate extension (VS Code Speech).

The o3-mini model has been added to all plans, even the free one, which gets 50 requests/month (news). However, it didn't appear automatically for me; I had to dig through the settings to enable it.

As a preview feature, you can provide context from a Markdown file; you need to create it at .github/copilot-instructions.md.

Also, people have discovered that you will not be able to name your variables "sexHere" and "sexThere," as Copilot intentionally stops working with code that contains predefined forbidden words (discussion).

https://youtu.be/C95drFKy4ss

The agent awakens

GitHub Copilot now looks like Cursor; they even implemented switching to agent mode in the same place. Here, the Composer equivalent is called Edit with Copilot, and you can also switch models.

It can now also accept a screenshot showing what's wrong, and there's the ability to upload markdown files to provide context.

I see there's a microphone for voice input, which Cursor doesn't have...

Announcement: https://github.blog/news-insights/product-news/github-copilot-the-agent-awakens/

The website https://cursor.directory/
is a directory of .cursorrules templates
for Cursor AI and other AI IDEs,
covering various topics
(currently, there aren't many templates available).

In the Learn section, video tutorials are compiled on diverse subjects, ranging from a basic introduction to Cursor AI to more advanced development techniques such as Composer and Rules, as well as integration with other platforms and services.

There are similar websites as well.

On the Aider LLM Leaderboards, for several days now, the first place has been occupied not by a single model, but by the combination of DeepSeek R1 and claude-3-5-sonnet-20241022.

This setup likely inspired the creation of

DeepClaude 0.1.0
https://deepclaude.com/

Currently, DeepClaude is a very minimalist BYOK (Bring Your Own Keys) open-source system built in Rust (repo). It was created by Asterisk as a side project and requires users to pay two providers and enter both API keys. Interestingly, it's unclear why the developers didn't opt to integrate with OpenRouter.ai instead; perhaps someone will add this.

The system can be used on their website or deployed locally. It’s not an IDE or a plugin — just a chat interface powered by two models.
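
The underlying pattern is simple enough to sketch: ask DeepSeek R1 for its reasoning trace, then hand that trace to Claude to write the final answer. This is not DeepClaude's actual implementation (that one is in Rust); it's a rough Python illustration of the idea, assuming the two providers' public APIs and placeholder keys:

```python
import anthropic
from openai import OpenAI  # DeepSeek exposes an OpenAI-compatible API

deepseek = OpenAI(base_url="https://api.deepseek.com", api_key="YOUR_DEEPSEEK_KEY")
claude = anthropic.Anthropic(api_key="YOUR_ANTHROPIC_KEY")

question = "Design a rate limiter for a public API."

# Step 1: get the chain of thought from the reasoning model.
r1 = deepseek.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": question}],
)
msg = r1.choices[0].message
reasoning = getattr(msg, "reasoning_content", "")  # R1 returns its reasoning trace here

# Step 2: have Claude write the final answer on top of that reasoning.
final = claude.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=2048,
    messages=[{
        "role": "user",
        "content": f"{question}\n\nHere is a reasoning trace to build on:\n{reasoning}",
    }],
)
print(final.content[0].text)
```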

https://www.youtube.com/watch?v=gYLNxUxVomY

Main ideas from the video "I spent 400+ hours in Cursor, here's what I learned":

📂 Create a folder for instructions or prompts.
Use markdown files and comments in the code to describe the project, technology stack, database structure, rules of behavior, and important instructions. Regularly updating these files helps the AI better understand the project context. Cursor does not always parse framework/library documentation well, so it's better to copy it in - and check the documentation yourself!

Create a file roadmap.md to track the project's status, current goals, and future steps.

Use additional tools, such as Perplexity for searching for information and Whisper Flow for voice input. Speaking is much faster than typing! Work out your architecture and brainstorm concepts separately; this can be done in a "thinking" chat like o1. Do not let AI make big decisions.

⚠️ Danger: Not understanding the code created by AI accumulates "technical debt".

(Video from January 17 regarding r1 and o3) Choosing the right AI model is important: for code generation, Sonnet 3.5 has the advantage, as it provides high accuracy. Queries need to be built in detail: describe the task, add file tags, instructions, and context.

🗣 Chat: For questions, getting information, lists.
✍️ Composer: For making changes to the code, especially complex ones, use Agent mode.

To get quality results, there are tricks like "the less code, the better", "think like a senior developer", "start with three paragraphs of thoughts", and "do not delete comments". Do not overload the AI with large instructions; break tasks into parts.

  • The fewer lines of code, the better
  • Proceed like a Senior Developer
  • DO NOT STOP WORKING until...

In addition, use:

  • v0: For quick creation of first design versions. 🎨
  • Claude chat: For consultations, brainstorming. 🤔
  • Lovable/Bolt: For quickly creating an MVP backend (with caution; control technical debt).

The video also mentions https://repoprompt.com/

https://github.com/block/goose

Goose 1.0.4

Project Goose is an open-source artificial intelligence agent. It offers both a desktop interface and a command-line interface (CLI). It uses extensions to connect to existing tools and applications, such as GitHub or the JetBrains development environment, thanks to MCP. The extensions catalog is still very minimal.

Goose is multifunctional, supporting various LLM providers and capable of handling a wide range of engineering tasks, including code migration, onboarding onto projects in different programming languages, code refactoring, performance evaluation, improving code coverage, creating API frameworks, and generating unit tests.

Although it was initially focused on engineering, its open nature encourages the community to explore various applications.

Fortunately, it's written not in Python but in Rust. It supports macOS and Linux; on Windows, it works only through WSL (Windows Subsystem for Linux).

Windsurf Cascade and Cursor added the o3-mini model from OpenAI.

https://www.reddit.com/r/cursor/comments/1ienr72/o3mini_is_out_to_all_cursor_users/?rdt=53528
and https://codeium.com/changelog