CodeWithLLM-Updates
-

I wrote yesterday that Windsurf added MCP (Model Context Protocol).

The ability to use MCP is also available in Cursor starting from version 0.45. Here's a video on how to set it up:

https://www.youtube.com/watch?v=brhs5DogIf8

The video demonstrates integration with Docker and retrieving video data from YouTube.
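
For context, an MCP server is just a small program that the editor launches (usually over stdio) and that exposes "tools" the model can call. Below is a minimal sketch using the official Python SDK; the word_count tool is a made-up example, and the idea is that Cursor or Windsurf would be pointed at the command that runs this script.

```python
# Minimal MCP server sketch using the official Python SDK (pip install mcp).
# The word_count tool is a made-up example; real servers expose web scraping,
# database queries, and so on. Register the command that runs this script
# (e.g. `python word_count_server.py`) as an MCP server in your editor.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("word-count-demo")

@mcp.tool()
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

if __name__ == "__main__":
    # Serves the MCP protocol over stdio by default,
    # which is what command-based MCP clients expect.
    mcp.run()
```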

Here are some of the MCP tools you can find at https://mcpservers.org/:

  • Web scraping (Puppeteer, Brave Search, Fetch, Jina Reader)
  • Communication (Slack, Bluesky)
  • Productivity (Notion, Apple Shortcuts)
  • Development (GitHub, GitLab, GitTools, Phabricator, Obsidian)
  • Databases (PostgreSQL, SQLite, MySQL, BigQuery, Mongo, MongoDB)
  • Cloud services (Cloudflare, OpenAI, Kagi, Exa, HuggingFace Spaces)
  • File system (Google Drive, Cloud Storage, Secure file operations)
  • Version control
  • Docker, Kubernetes
  • Other (Sentry, Memory, Google Maps, CoinCap, MetoroQuery, Windows CLI, Playwright, Google Search Console, Pandoc, Data Exploration, any-chat-completions-mcp, Minima)

Cursor's journey began by essentially cloning GitHub Copilot's in-editor code completion. They initially called it Copilot++ and steadily improved it, focusing on its ability to understand multiple lines of code.

By mid-2024, the feature was rebranded as Cursor Tab (a nod to its activation via the Tab key 😉) — reflecting gains in speed and accuracy. And just a month ago, they announced a new model called Fusion, which improves support for large files.

Now, competitors are scrambling to catch up.

On February 13th, Zed announced Edit Prediction in beta, based on its open-source Zeta model.

And Windsurf's Wave 3 release, in addition to adding MCP (Model Context Protocol) support, also brought an improved autocomplete model in the editor, called Tab-to-jump.

And GitHub Copilot itself doesn't want to be left behind. Currently, the "Next Edit Suggestion" feature is in preview — it needs to be activated in the settings (video).

https://supabase.com/

Supabase is an open-source Firebase alternative with a SQL database. They showcased an example of using their service with bolt.new on their YouTube channel:

https://www.youtube.com/watch?v=GFxOwNiioT0
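
If you haven't tried it, the client API is very compact: one client object covers auth, database tables, and storage. Here's a rough sketch with the official Python client (supabase-py); the table name, email, and environment variables below are placeholders.

```python
# Rough sketch of the Supabase Python client (pip install supabase).
# SUPABASE_URL / SUPABASE_KEY come from your project settings;
# the "todos" table and the credentials are placeholders.
import os
from supabase import create_client

supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_KEY"])

# Auth: email/password sign-up.
supabase.auth.sign_up({"email": "dev@example.com", "password": "a-strong-password"})

# Database: insert and read rows from a Postgres table.
supabase.table("todos").insert({"title": "try bolt.new with Supabase"}).execute()
rows = supabase.table("todos").select("*").execute()
print(rows.data)
```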

Lovable also announced improved integration with Supabase:

  • Fewer Supabase-related errors
  • Automatically reads edge function logs
  • Customizable signup/login flows

In VS Code 1.97
Copilot is now built into the editor and no longer installed as an extension. Voice input is still a separate extension (VS Code Speech).

The o3-mini model has been added for all plans, even the free one, which has 50 requests/month (news). However, it didn't appear automatically for me; I had to dig through the settings to enable it.

Also in preview: the ability to set project context from a Markdown file; for this, you need to create it at .github/copilot-instructions.md.
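
The file is plain Markdown with free-form guidance that Copilot picks up as extra context for chat. A made-up example of what you might put there:

```markdown
<!-- .github/copilot-instructions.md (example content; adjust to your project) -->
This is a TypeScript monorepo using React on the frontend and PostgreSQL for storage.
- Prefer functional components and hooks.
- Use the shared logger package instead of console.log.
- All database access goes through the packages/db layer.
```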

Also, people have discovered that you will not be able to name your variables "sexHere" and "sexThere," as Copilot intentionally stops working with code that contains predefined forbidden words (discussion).

https://youtu.be/C95drFKy4ss

The agent awakens

GitHub Copilot now looks a lot like Cursor; they even put the switch to agent mode in the same place. The Composer equivalent here is called "Edit with Copilot," and you can also switch models.

It can now also accept a screenshot where you show what's wrong, and you can upload Markdown files to provide context.

I see there's a microphone for voice input, which Cursor doesn't have...

Announcement: https://github.blog/news-insights/product-news/github-copilot-the-agent-awakens/

The website https://cursor.directory/ is a directory of .cursorrules templates for Cursor AI and other AI IDEs, covering various topics (currently, there aren't many templates available).

In the Learn section, video tutorials are compiled on diverse subjects, ranging from a basic introduction to Cursor AI to more advanced techniques such as Composer and Rules, as well as integration with other platforms and services.

There are similar websites as well.

On the Aider LLM Leaderboards, for several days now, first place has been held not by a single model, but by the combination of DeepSeek R1 and claude-3-5-sonnet-20241022.

This setup likely inspired the creation of

DeepClaude 0.1.0
https://deepclaude.com/

Currently, DeepClaude is a very minimalist BYOK (Bring Your Own Keys) open-source system built in Rust (repo). It was created by Asterisk as a side project and requires users to pay two providers and enter both API keys. Interestingly, it's unclear why the developers didn't opt to integrate with OpenRouter.ai instead — perhaps someone will code this.

The system can be used on their website or deployed locally. It’s not an IDE or a plugin — just a chat interface powered by two models.
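
The idea behind the pairing is simple: let R1 do the reasoning, then hand that reasoning to Sonnet to write the final answer. Here's a rough sketch of that pattern over OpenRouter's OpenAI-compatible API (my own illustration of the concept, not DeepClaude's actual Rust implementation; the model IDs and prompt wiring are assumptions).

```python
# Sketch of the "R1 reasons, Sonnet writes" pattern via OpenRouter.
# This illustrates the idea, not DeepClaude's implementation;
# model IDs and the prompt wiring are assumptions.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

task = "Write a Python function that merges two sorted lists."

# Step 1: ask DeepSeek R1 to think through the task.
r1 = client.chat.completions.create(
    model="deepseek/deepseek-r1",
    messages=[{"role": "user", "content": task}],
)
reasoning = r1.choices[0].message.content

# Step 2: give Claude the reasoning and let it produce the final answer.
final = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",
    messages=[
        {"role": "system", "content": "Use the provided reasoning to write the final answer."},
        {"role": "user", "content": f"Task: {task}\n\nReasoning from another model:\n{reasoning}"},
    ],
)
print(final.choices[0].message.content)
```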

https://www.youtube.com/watch?v=gYLNxUxVomY

Main ideas from the video "I spent 400+ hours in Cursor, here's what I learned":

📂 Create a folder for instructions or prompts.
Use markdown files and comments in the code to describe the project, technology stack, database structure, rules of behavior, and important instructions. Regularly updating these files helps the AI better understand the project context. Cursor does not always parse framework/library documentation well, so it's better to copy the relevant parts in yourself; check the documentation yourself!

Create a file roadmap.md to track the project's status, current goals, and future steps.
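
A tiny example of what such a file might look like (the contents are purely illustrative):

```markdown
# roadmap.md (example; adjust to your project)
## Done
- Email/password auth
## In progress
- Billing page (Stripe, test mode only)
## Next
- Admin dashboard
- Move image uploads to object storage
```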

Use additional tools, such as Perplexity for searching for information and Whisper Flow for voice input. Speaking is much faster than typing! Work out your architecture and brainstorm concepts separately; this can be done with a "thinking" chat like o1. Do not let the AI make big decisions.

⚠️ Danger: Not understanding the code created by AI accumulates "technical debt".

(Video from January 17, regarding r1 and o3.) Choosing the right AI model is important: for code generation, Sonnet 3.5 has the advantage, as it provides high accuracy. Build queries in detail: describe the task, add file tags, instructions, and context.

🗣 Chat: For questions, getting information, lists.
✍️ Composer: For making changes to the code, especially complex ones, use Agent mode.

To get quality results, there are tricks like "the less code, the better", "think like a senior developer", "start with three paragraphs of thoughts", and "do not delete comments". Do not overload the AI with long instructions; break tasks into parts.

  • The fewer lines of code, the better
  • Proceed like a Senior Developer
  • DO NOT STOP WORKING until...

Also add to your toolkit:

  • v0: For quick creation of first design versions. 🎨
  • Claude chat: For consultations, brainstorming. 🤔
  • Lovable/Bolt: For quickly creating an MVP backend (use with caution; keep technical debt under control).

The video also mentions https://repoprompt.com/

https://github.com/block/goose

Goose 1.0.4

Project Goose is an open-source AI agent. It offers both a desktop interface and a command-line interface (CLI). Thanks to MCP, it uses extensions to connect to existing tools and applications, such as GitHub or JetBrains IDEs. The extension catalog is still very minimal.

Goose is multifunctional, supporting various LLM providers and capable of handling a wide range of engineering tasks, including code migration, onboarding onto projects in unfamiliar programming languages, code refactoring, performance evaluation, improving code coverage, creating API frameworks, and generating unit tests.

Although it was initially focused on engineering, its open nature encourages the community to explore various applications.

Fortunately, it's written not in Python but in Rust. It supports macOS and Linux; on Windows, it works only through WSL (Windows Subsystem for Linux).

https://www.reddit.com/r/cursor/comments/1ienr72/o3mini_is_out_to_all_cursor_users/?rdt=53528
and https://codeium.com/changelog

Windsurf Cascade and Cursor have added OpenAI's o3-mini model.

#cursor #githubcopilot #windsurf #prompts #mcp #zed #autocomplete #bolt #lovable #supabase #newllmmodel