CodeWithLLM-Updates
-

https://hub.continue.dev/

The creators of the Continue chat plugin for VS Code and JetBrains have added a section on their website that looks like a catalog of assistants for programming.

This is a good idea: after working in Cursor, for example, an assistant gradually accumulates system instructions, repeated commands, MCP server settings, and separately indexed documents. It would be great to have "snapshots" of such configurations. So far, only catalogs of generic system instructions and MCP catalogs have started to appear online.

Using the example of https://hub.continue.dev/continuedev/clean-code, you can see that these are chat settings packages that consist of the following blocks:

  • Models: Blocks for defining models for different roles, such as chat, autocompletion, editing, embedder, and reranker.
  • Rules: Rule blocks are system instructions; the content of the rules is inserted at the beginning of all chat requests.
  • Docs: Blocks that point to documentation sites that will be indexed locally and can then be referenced as context using @Docs in the chat.
  • Prompts: Prompt blocks are pre-written quick commands, reusable prompts that can be referenced at any time during a chat.
  • Context: Blocks that define a context provider that can be referenced in the chat with @ to get data from external sources such as files, folders, URLs, Jira, Confluence, GitHub.
  • Data: The plugin automatically collects data on how you create code. By default, this data is stored in .continue/dev_data on your local computer. You can configure your own destinations, including remote HTTP and local file directories.
  • MCP Servers
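
An assistant on the hub is assembled from such blocks in a YAML config. A minimal sketch, assuming a config.yaml-style schema (the block and field names here are illustrative; the hub defines the exact format):

```yaml
name: my-assistant
version: 0.0.1
models:
  - name: main-chat          # model used for the chat and edit roles
    provider: openai
    model: gpt-4o
    roles: [chat, edit]
rules:
  - Prefer small, pure functions and descriptive names.
docs:
  - name: React docs         # indexed locally, referenced via @Docs
    startUrl: https://react.dev/
prompts:
  - name: review
    prompt: Review the selected code for bugs and style issues.
```

The point of the format is exactly the "snapshot" idea above: the whole assistant setup lives in one shareable file.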

The problem is that it works like software from five years ago: you have to click through everything yourself and browse the catalog site. It would be nice if the AI chat itself suggested settings from the available blocks, instead of me browsing and choosing them manually.

https://www.cursor.com/changelog/chat-tabs-custom-modes-sound-notification

Cursor 0.48.x

Joy =) As I already wrote about 0.46 and then 0.47, an unpleasant interface decision packed everything into a single chat tab, which was very inconvenient after I had gotten used to two. That has finally been fixed: you can now open as many tabs as you want (!) via "+" and set any mode in each of them.

For each mode ("agent", "ask", "manual"; "Edit" was renamed to "Manual" because it kept getting confused with "Ask") you can now set your own hotkey and pin a model.

In the settings, you can enable the ability to create additional modes (custom modes) with the choice of model, hotkey, a bunch of settings, MCP, and even a custom instruction. A very good response to the Roo Code plugin!

Also, if the context window starts to cut off the beginning of the conversation, the chat now says so in small text at the bottom.

The new deepseek-v3.1 and gemini-2.5-pro models are not in the list yet, but I expect they will appear soon.

https://www.zbeegnew.dev/tech/build_your_own_ai_coding_assistant_a_cost-effective_alternative_to_cursor/

The article discusses how to create a cost-effective alternative to AI coding assistants like Cursor by using Claude Pro and the Model Context Protocol (MCP).

Code: https://github.com/ZbigniewTomanek/my-mcp-server

The author, Zbigniew Tomanek, shares their experience of using Claude with MCP to automate the complex task of implementing Kerberos authentication for a Hadoop cluster, reducing a full day's work to minutes.

Main Points:

  1. Problem: AI coding tools like Cursor are expensive and raise privacy concerns.

  2. Solution: Use Claude Pro ($20/month) with a custom-built MCP server to achieve similar functionality without the extra cost and with more control over data.

  3. MCP Explained: MCP is an open protocol that allows applications to provide context to Large Language Models (LLMs). The author uses a Python SDK to build a simple MCP server.

  4. Kerberos Example: The author details how Claude, using the MCP tools, analyzed project files, created a comprehensive plan, generated configuration files, and fixed errors for the Kerberos implementation.

  5. Cost Savings: Using Claude Pro + MCP saves money compared to dedicated AI coding tools.

  6. Data Privacy: Code and data remain on the user's machine, enhancing privacy.

  7. MCP Tools: The author's MCP server includes tools for file system operations, shell command execution, regular expression search, and file editing.

  8. Self-Improvement Loop: Claude can analyze and improve its own tools, leading to AI-optimized interfaces and custom tooling.

  9. Custom MCP Shines: MCP + Claude Pro offers cost-effectiveness, data control, customization, self-improvement, and complex task automation.
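
The article builds its server with the official MCP Python SDK; conceptually, though, an MCP server is a registry of tools plus structured dispatch. A stdlib-only sketch of that idea (simplified for illustration, not the real MCP wire format):

```python
import json
import re

# Registry of "tools" the model can call, mirroring the article's
# regex-search / file-operation tools (names are illustrative).
TOOLS = {}

def tool(fn):
    """Register a function as a callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def search_text(pattern: str, text: str) -> list:
    """Return all regex matches in the given text."""
    return re.findall(pattern, text)

def handle_request(raw: str) -> str:
    """Dispatch a JSON request {'tool': name, 'args': {...}} to a tool."""
    req = json.loads(raw)
    result = TOOLS[req["tool"]](**req["args"])
    return json.dumps({"result": result})

reply = handle_request(
    '{"tool": "search_text", "args": {"pattern": "TODO", "text": "TODO: fix; TODO: test"}}'
)
print(reply)  # {"result": ["TODO", "TODO"]}
```

The real SDK wraps this pattern in the MCP protocol so that Claude, rather than your own code, decides which tool to call and with what arguments.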

https://appgen.groqlabs.com/

Another experimental project that allows you to create small applications (so-called micro-apps) using text descriptions. It is integrated with the Groq platform (not to be confused with Elon Musk's Grok).

Groq's LPU™ Inference Engine is a hardware and software platform that provides very high-speed computation with good quality and energy efficiency. It serves medium-sized open-source models such as llama3, gemma2, deepseek-r1-distill, mixtral, and qwen.

The context window is typically small, and the free tier is limited to 6k tokens per minute, so it really can't handle large codebases.

The code is open source: https://github.com/groq/groq-appgen - you can deploy it on your computer by generating an API key in the developer console.

A very interesting Appgen feature: you can click the pencil-shaped button and sketch with your mouse how the interface should look.

https://appgen.groqlabs.com/gallery
If desired, you can publish your "creation" to the gallery or browse other people's work.

The default model is currently qwen-2.5-coder-32b, but you can switch to deepseek-r1-distill-llama-70b-specdec. Overall, I don't see any value in this other than the ability to compare the approaches to code generation in the available models.

Improvements to the interface of web chat platforms

https://gemini.google.com/
Google has integrated a new Canvas feature into its Gemini AI chat platform. Gemini Canvas allows users to generate code directly within the interface, including HTML, CSS, and JavaScript. The feature also supports Python scripting, web application development, and game creation. Additionally, Python code generated in Canvas can be opened and executed in Google Colab.

All updates are presented in the video.
https://www.youtube.com/watch?v=Yk-Ju-fqPP4

https://claude.ai/chats
Anthropic Claude Artifacts appeared first (mid-2024, publicly available since August). These are results generated during a chat with Claude and displayed in a separate window next to the main dialogue. For code, a "Preview" option appeared. Rendering of React, HTML, and JavaScript code is supported. Libraries such as Lodash and Papa Parse are available.

https://chatgpt.com/
In October 2024, OpenAI responded by adding the Canvas feature to ChatGPT. It became possible to highlight specific sections of code for targeted editing or explanations. Hotkeys are provided for working with code, such as code review, adding logs/comments, debugging, and translation into other programming languages. A version history feature allows viewing and restoring previous states of the code. It also became possible to execute Python code directly in the browser using the "Run" button. Rendering of React and HTML code is supported.

I really liked the capabilities of Artifacts to quickly generate small code snippets, but Canvas in my ChatGPT was constantly buggy, glitchy, and lost context, so I quickly stopped using it.

https://chat.mistral.ai/chat
Le Chat Canvas appeared in early 2025 as an interface for collaborating with Mistral's LLM. It supports rendering of React and HTML code, and users can highlight code to get explanations or make changes.

Gemini Canvas also has a window for inline queries regarding the highlighted block.

❌ So far, Grok and DeepSeek chats do not have Canvas/Artifacts features.

If you are interested in how the system prompts of popular programming tools (and others) work, you can go to GitHub to one of the many repositories of "leaked" prompts, such as leaked-system-prompts, and read them.

They cover Cursor, Windsurf, bolt.new, GitHub Copilot, v0.dev, and others.

https://www.youtube.com/watch?v=6g2r2BIj7bw

There is also a group of custom prompts that can change behavior, for example, making Cursor/Windsurf behave like Devin. This is more of an experiment out of curiosity than a tool for real work.

https://base44.com/

Base44 is one of the new AI platforms for creating "no-code" web applications. It offers a wide range of development capabilities, including working with databases, integrations, and application logic. The code can be viewed, but the ability to edit it is only available in the paid version (from $20/month).

https://www.youtube.com/watch?v=jKanknpPDI4

Creating applications on Base44 is very fast. You can upload your own data, for example creating an app from a CSV table. It lets you not worry about authentication, sending email/SMS, API integration, and other complex aspects. Recently, one-click integration with the Stripe payment system was added.

You can use custom domains and create public applications that don't require login. Built-in analytics tools let you track usage and user growth.

If Lovable works better as a landing page creation tool, then Base44 is positioned as a platform for creating full-fledged applications.

https://github.com/przeprogramowani/ai-rules-builder

An online generator of rules files (text prompts/project descriptions) for AI tools like Copilot, Cursor, Windsurf, Aider, and Junie.

The UI is clunky, you gotta click around a lot, but hey, it's a first step toward customization, not just catalogs like I wrote about before.


Tool creators recommend keeping this file tiny. The AI can probably guess the project language anyway ;)

Aider example here.

Good rule of thumb: add a "patch" rule whenever the AI keeps misunderstanding you. You can even do it from chat, like in Cursor: "pls write me a cursorrule to stop doing {this particular thing}".

Example here:

DO NOT GIVE ME HIGH LEVEL SHIT, IF I ASK FOR FIX OR EXPLANATION, I WANT ACTUAL CODE OR EXPLANATION! I DON'T WANT "Here's how you can blablabla"
- Be casual unless otherwise specified
- Be terse
- Suggest solutions that I didn't think about-anticipate my needs
- Treat me as an expert
- Be accurate and thorough
- Give the answer immediately. Provide detailed explanations and restate my query in your own words if necessary after giving the answer
- Value good arguments over authorities, the source is irrelevant
- Consider new technologies and contrarian ideas, not just the conventional wisdom
- You may use high levels of speculation or prediction, just flag it for me
- No moral lectures
- Discuss safety only when it's crucial and non-obvious
- If your content policy is an issue, provide the closest acceptable response and explain the content policy issue afterward
- Cite sources whenever possible at the end, not inline
- No need to mention your knowledge cutoff
- No need to disclose you're an AI
- Please respect my formatting preferences when you provide code.
- Please respect all code comments, they're usually there for a reason. Remove them ONLY if they're completely irrelevant after a code change. if unsure, do not remove the comment.
- Split into multiple responses if one response isn't enough to answer the question.
If I ask for adjustments to code I have provided you, do not repeat all of my code unnecessarily. Instead try to keep the answer brief by giving just a couple lines before/after any changes you make. Multiple code blocks are ok.

https://xata.io/chatgpt

Xata offers a flexibly customizable integration with the OpenAI API for vector embeddings, enabling semantic search and letting you query your database in natural language.
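
Under the hood, this kind of semantic search reduces to comparing embedding vectors by cosine similarity. A stdlib sketch with tiny hand-made vectors (in the real integration the vectors would come from the OpenAI embeddings API; the names and numbers here are invented):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings" -- real ones would be high-dimensional vectors
# produced by an embedding model for each database record.
docs = {
    "invoice schema": [0.9, 0.1, 0.0],
    "user auth":      [0.1, 0.9, 0.2],
}
query = [0.85, 0.15, 0.05]  # pretend embedding of the question "billing tables?"

best = max(docs, key=lambda name: cosine_similarity(query, docs[name]))
print(best)  # invoice schema
```

The natural-language question and the stored records land near each other in embedding space, which is what makes "ask your database a question" work.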

https://github.com/xataio/agent

Xata Agent is an open-source agent that monitors a Postgres database, identifies the causes of problems, and suggests fixes, performance improvements, and indexing changes. It supports models from OpenAI, Anthropic, and Deepseek.

https://www.youtube.com/watch?v=SLVRdihoRwI

The team is also working on a cloud version. There is a waitlist at the link.

https://wisprflow.ai/

Finally, the Windows version of Wispr Flow has been released: a program for dictating text by voice into any field on the screen.

Of course, this is not directly related to programming, but for vibe coding it is the tool through which we explain to the AI in Cursor who the programmer is here, and who can't even write a few lines of code properly.

Free 2,000 words per week. For more, $15/month.

Autocoder Updates

Cline v3.6.10

Starting with v3.6, they've added the Cline API as a provider option, which lets newcomers sign up and start using Cline for free (I got a $0.50 gift credit myself) without their own API keys.

They've started a separate site, https://cline.bot/, where you can create an account (Google or GitHub only) and top it up. Credits bought with a credit card come with a 5% + $0.35 + tax surcharge.

The site's got a blog where they share user stories. Also, all MCP plugins are now showcased at https://cline.bot/mcp-marketplace.

They've added a setting to disable model switching between Planning/Action modes (off by default for new users).

They've dropped a Cline Memory Bank prompt example into the docs.

For those keen to help polish the product, they've added an opt-in for telemetry, but keeping it off by default.


Roo Code v3.8.4

Nice to see some competition in the space.

Roo's taking a different route; no MCP marketplace here, but they've beefed up the functionality for creating different modes with custom system prompts. .rooignore file support is added. Checkpoints, multi-window mode, and communication between subtasks and the main task are more polished.

They've tacked on "Human Relay" as a provider, alongside the nifty VS Code Language Model API mode, which lets you fire up models provided by other VS Code extensions (including, but not limited to, GitHub Copilot).

So, the plugin spits out a prompt, and we, the humans, do the relay: copy-paste it into a chat on the site, then paste the response back into the window. Right now, it's the only way to use Grok 3. Sure, it'd be slick if some OS-level agent handled this, but hey, it is what it is.

For those wanting to help with product improvement, they've added telemetry opt-in, asking on the first run if you're in.

Cursor, an AI code editor, was created in early 2023 by four MIT graduates who shared a vision of transforming programming.

https://www.youtube.com/watch?v=oFfVt3S51T4

Founders

Michael Truell, Sualeh Asif, Arvid Lunnemark, and Aman Sanger combined their expertise in mathematics, machine learning, and software development to create a tool that significantly enhances developer productivity.

They brought diverse yet complementary experience to Cursor:

  • Michael Truell: A former USA representative at the International Mathematical Olympiad (2016-2017), Michael's experience in statistical mathematical research and LLM-based recommendation systems laid a solid foundation for Cursor's artificial intelligence capabilities.

  • Sualeh Asif: Having represented Pakistan at the IMO (2016-2018), with a background in machine learning, number theory, and performance engineering, Sualeh made valuable technical contributions to the project.

  • Arvid Lunnemark: His quantitative trading internship at Jane Street strengthened the team's algorithmic expertise, which was crucial for developing Cursor's intelligent features.

  • Aman Sanger: Previous internships at You.com and Bridgewater Associates, as well as research at MIT in machine learning, artificial intelligence, and natural language processing, directly influenced the development of Cursor.

Financial Trajectory

https://anysphere.inc/

The company behind Cursor, Anysphere (founded in January 2022 in Buffalo, New York), has demonstrated impressive growth:

  • Initial funding of $11 million, including $8 million from the OpenAI Startup Fund
  • Series A round of $60 million in August 2024 led by Andreessen Horowitz
  • Series B round of $105 million in December 2024 led by Thrive Capital and Andreessen Horowitz, which valued the company at $2.5 billion

This rapid financial progression underscores the market's confidence in Cursor's potential to transform coding practices through the integration of artificial intelligence.

https://github.com/mannaandpoem/OpenManus

Engineers behind the multi-agent framework https://github.com/geekan/MetaGPT and the AI coding service MetaGPT X have launched OpenManus.

OpenManus is an open-source Python project designed to replicate the capabilities of the Chinese Manus AI, a hyped general-purpose artificial intelligence (currently invite-only), known for its ability to autonomously perform complex tasks, including programming.

Currently, compared to Manus, there is no GUI; everything is done through the terminal, making it closer to Claude Code plus MCP. "Improve the UI's visual appeal to create a more intuitive and seamless user experience." is on the roadmap.

https://www.youtube.com/watch?v=H1rWVvsjtTQ

| OpenManus | Manus AI |
| --- | --- |
| Free, open source | Paid subscriptions |
| Code can be contributed | Limited by vendor functions |
| Active community (currently 242 open issues in the repository) | Updates controlled by the vendor |
| Ideal for developers and enthusiasts | Targeted at corporate users |

Features:

  • Collaborative work of AI agents to solve complex tasks.
  • Easily extensible with new agents, tools, or functions.
  • Integrates with other tools and services.
  • Free to use, pay only for LLM provider tokens (e.g., OpenAI).

https://help.openai.com/en/articles/10119604-work-with-apps-on-macos

ChatGPT for Mac Update

One AI app controlling another AI app to write code: now anyone with a Mac doesn't have to code in Cursor or Windsurf manually. Advanced Voice conversation mode also works, and it can display diffs.

Overview: https://www.youtube.com/watch?v=dKCHdlHD97I

In the video, it's compared to Cline, and the presenter says that Cline burns through money via the API and doesn't perform very well - in contrast, you pay for ChatGPT once a month and can use it as much as you want. On the other hand, Cursor or Windsurf, within their own chats, have a better understanding of the code and project context, so their results are likely to be better than ChatGPT's.

Supported Applications:

  • Apple Notes, Notion, TextEdit, Quip
  • Xcode, VS Code, Code Insiders, VSCodium, Cursor, Windsurf, Jetbrains (including Android Studio, IntelliJ, PyCharm, WebStorm, PHPStorm, CLion, Rider, RubyMine, AppCode, GoLand, DataGrip)
  • Terminal, iTerm, Warp, Prompt

https://codeium.com/blog/windsurf-wave-4

Windsurf Wave 4 Update

The main feature is Windsurf Previews, a browser window integrated with Cascade. It can be launched with a chat command. You can click on React or HTML elements to mark them as context. It also sees the front-end console.

In Trae, there's just a quick webview button, and in any VS Code, there's Simple Browser via Command Palette.

https://www.youtube.com/watch?v=ZMFeZEiQ3A4

They added an interesting feature: after a Cascade response, it suggests what the next prompt could be.

They also added an Auto-Linter (which Cursor has had for a while). Tab autocomplete can automatically add imports. They improved the discovery of new MCP servers and the workflow around them.

MCP Servers by default include integrations for GitHub (file operations, repository management, search), Sequential Thinking (dynamic problem-solving), Puppeteer and Playwright (browser automation), PostgreSQL (read-only database access), Stripe (Stripe API integration), Slack (Slack API for Claude), and Figma.

They launched a referral program (are things really that bad?).

Windsurf Rules Directory

They launched their own directory of custom prompts: currently 8, of which 2 are for data science and 3 for front-end. No Nuxt/Vue. In the app itself, next to the link, they note that these rules are more for inspiration, as examples of how to write your own.

(There is also a memory function similar to ChatGPT - Cascade can automatically generate memories to retain context between conversations. Memories can be deleted from the Memories Panel.)

Overall, they continue to simplify the product, positioning it as "your first editor" for those who want to start vibe coding with AI.

https://en.wikipedia.org/wiki/Vibe_coding

Business Insider described vibe coding as a new buzzword in Silicon Valley.

Computer scientist Andrej Karpathy coined the term on February 3, 2025. In his X post, Karpathy described that vibe coding is all about completely giving in to the "vibes" of the AI, accepting the exponential power of current LLMs, and forgetting the nitty-gritty details of the code itself:

There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like "decrease the padding on the sidebar by half" because I'm too lazy to find it. I "Accept All" always, I don't read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I'd have to really read through it for a while. Sometimes the LLMs can't fix a bug so I just work around it or ask for random changes until it goes away. It's not too bad for throwaway weekend projects, but still quite amusing. I'm building a project or webapp, but it's not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.

Describing his own experience, Karpathy explained how he converses with AI tools almost passively, merely talking to them (through SuperWhisper) and letting the AI handle the rest. This method eliminates manually typing code as well as keeping track of every minute detail in the program.

https://www.youtube.com/watch?v=YWwS911iLhg

Matthew Berman provides a tutorial and best practices for "vibe coding" (also called "agentic coding"), which involves using AI agents within IDEs like Cursor or Windsurf to write most or all of the code:

  • Detailed Specification: Start with a very detailed specification of the application. Use an AI (like Grok 3 in the video) to help generate this spec, including technical details, database schema, API endpoints, etc.
  • Cursor/Windsurf Rules: Crucially, use "rules" (project-specific or user-specific) to guide the AI agent. These rules act like system messages, defining coding preferences, technology stack, and desired workflows. This prevents many common issues. The presenter provides a detailed example of their rules, explaining the rationale behind each one.
  • Rule Examples (and why they're important):
    • Avoid Code Duplication: The agent tends to duplicate code; rules explicitly tell it to check for existing code.
    • Dev/Test/Prod Environments: The agent struggles with environment separation; rules enforce distinct environments.
    • Focus on Requested Changes: The agent often makes unrelated, breaking changes; rules emphasize staying focused.
    • Avoid New Technologies/Patterns: The agent might introduce new technologies when fixing bugs; rules restrict this.
    • Keep Codebase Clean: Rules encourage organization and prevent excessively large files.
    • Avoid Mock Data (Except in Tests): The agent overuses mock data, leading to false positives; rules limit this.
    • Don't Overwrite .env Files: Prevents accidental API key overwrites.
    • Define Tech Stack: Explicitly list allowed technologies (Python, SQL, etc.) to prevent unwanted deviations.
    • Coding Workflow Preferences: Reinforce focusing on the task, writing thorough tests, and avoiding major architectural changes without explicit instruction.
  • Context Management: Be mindful of the context window within the Cursor/Windsurf chat. Too much context degrades performance. Start new chats strategically, but be aware of losing context.
  • Narrow Requests: Make small, specific requests to the agent (fix a single bug, add a small feature). Test frequently.
  • Testing: Prioritize end-to-end tests (simulating user actions) over unit tests. Have the agent write tests, and carefully review test fixes, as they can sometimes introduce issues.
  • Popular Stacks: Use well-known, widely documented technology stacks (Python, HTML/JS, SQL) for better agent performance.
  • Iteration Cycle: The process can be slow (2-15 minutes per iteration). Consider using multiple Cursor windows/branches for parallel work.
  • Commit Often: Commit frequently to allow for easy rollbacks. Cursor/Windsurf also has chat history-based versioning.
  • YOLO Mode vs. Manual Approval: Cursor offers different execution modes:
    • YOLO Mode: Auto-executes everything (risky, but fast for new projects).
    • Manual: Requires approval for every change.
    • Auto: Agent decides what to execute automatically and what needs approval.
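
Condensed into an actual rules file, the advice above might look something like this (the wording is illustrative, not the presenter's exact rules; Cursor reads such rules from .cursorrules or Project Rules, Windsurf from .windsurfrules):

```text
# Tech stack: Python, SQL, HTML/JS only. Do not introduce new frameworks or patterns.
# Check for existing helpers before writing new code; avoid duplication.
# Keep dev, test, and prod environments separate. Never overwrite .env files.
# Make only the change I asked for; no unrelated refactors.
# Mock data is allowed in tests only, never in dev or prod code paths.
# Prefer end-to-end tests that simulate real user actions over unit tests.
# Keep files small and the codebase organized.
```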

Matthew wishes for a fully hosted, mobile-friendly version of this workflow.

https://developers.googleblog.com/en/data-science-agent-in-colab-with-gemini/

Data Science Agent

Google Colab has received a new AI tool called Data Science Agent that helps users quickly clean data, visualize trends, and gain analytical insights from uploaded datasets.

Data Science Agent is powered by the Gemini 2.0 model and supports CSV, JSON, or .txt files up to 1GB in size.
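
The kind of first-pass cleanup such an agent generates is ordinary CSV wrangling; a stdlib sketch on a toy dataset (the column names and data are invented for illustration):

```python
import csv
import io

RAW = """name,age
Alice,34
Bob,
Carol,29
"""

# Drop rows with missing values and compute a quick summary statistic --
# the sort of cleaning/insight step the agent writes for you in Colab.
rows = [r for r in csv.DictReader(io.StringIO(RAW)) if r["age"]]
mean_age = sum(int(r["age"]) for r in rows) / len(rows)
print(len(rows), mean_age)  # 2 31.5
```

The agent's value is generating and running dozens of such steps (plus plots) from a plain-English request like "clean this file and show me the trends".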

Review: https://www.youtube.com/watch?v=1Lq2xzqj_S0

The tool, which was initially a standalone project, is now integrated directly into Colab and is available for free, although with limitations on computing resources for free users.

A fork of Claude Code called anon-kode has emerged. It removes the restriction to Claude services only. The project lacks documentation and is clearly experimental.

Originally, Claude Code was designed to work with Claude models, and it's unclear how it will perform with others. A potentially better alternative might be a proxy that mirrors Anthropic's API for the original project, which continues to be developed by professional engineers.

Kwaak has been released in alpha, another terminal tool. It enables running autonomous AI agent commands directly from the terminal in parallel. Kwaak is part of bosun.ai, an upcoming platform for autonomous code improvement.

A fork of bolt.diy (which itself is a fork of bolt.new) has emerged, called "Nut.new".

While the purpose remains unclear, the developers promise to eventually rewrite the codebase with their own approach focusing on debugging capabilities. Currently, the service can be used for free with limited capacity, or without limitations through an early user program or with your own Anthropic API key. The presentation is available at https://www.replay.io/

https://firebender.com/

Firebender
AI plugin for Android Studio and IntelliJ

It features an inline editor and chat similar to Cursor. The agent reviews code, runs it and its tests, and edits files independently. Custom instructions and an exclusion list can be added via the firebender.json file.
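
A guessed sketch of what such a firebender.json might contain (the field names here are assumptions for illustration, not the documented schema; check Firebender's docs for the real one):

```json
{
  "rules": ["Prefer Kotlin coroutines over raw threads"],
  "ignore": ["build/", "**/*.generated.kt"]
}
```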

According to the changelog, they started on 2024-12-03, then added code understanding on 2025-01-23, and agent mode on 2025-02-13. The Claude 3.7 Sonnet model has been used since 2025-02-26; they also offer Deepseek R1, OpenAI o1, and o3-mini.

https://www.youtube.com/watch?v=li2q6YH0U4M

I don't work in Android Studio, so I can't say much about it. The website is hastily put together, full of marketing bullshit and generic phrases: obviously designed to sell to an incubator, which they managed to do. The price is also listed as free — that's while they're getting funding. We'll see what happens next and whether they can create a stable product.

https://blog.google/technology/developers/gemini-code-assist-free/

Gemini Code Assist
for individuals (preview)

Google has launched a public preview of Gemini Code Assist — free for individual developers. It offers up to 180,000 code autocompletions per month / 6,000 per day, which is 90 times more than other free assistants (which typically offer 2,000).

It's available in Visual Studio Code, JetBrains IDE, and Android Studio. They've also integrated it into Firebase console. There's also a version for GitHub Apps that offers free code review for both public and private repositories.

Users should note that by default, the service collects data about your prompts, code, edits, and feature usage to improve its machine learning models. While there is an option to opt out of this data collection, the initial settings have data sharing enabled.


Based on my testing, Gemini Code Assist has clear strengths: rapid understanding of codebases and built-in search, which significantly improves access to up-to-date information. However, there is a notable weakness: the absence of the agent mode we've grown accustomed to in other tools. Instead of intelligently interacting with the file system, the chat simply generates code snippets that you must manually insert into the appropriate files.

https://github.com/superglue-ai/superglue

superglue is a self-healing open-source data connector with LLM-powered data mapping. It automatically generates data transformations and API configurations by analyzing API docs with large language models, handles pagination, authentication, and error retries, validates that all incoming data follows the expected schema, and fixes transformations when they break.

An early-access hosted version of superglue is available at superglue.cloud.
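
The "self-healing" behavior can be pictured as a validate-then-repair loop: run the generated mapping, check the output against the schema, and regenerate the mapping when validation fails. A sketch with a stubbed repair step (in superglue the repair is LLM-generated; all names and shapes here are illustrative):

```python
def validate(record, schema):
    """True if the record has every field with the expected type."""
    return all(isinstance(record.get(k), t) for k, t in schema.items())

def transform_v1(raw):
    # Original generated mapping: passes price through as a string.
    return {"id": raw["id"], "price": raw["price"]}

def repaired_transform(raw):
    # Stand-in for an LLM-regenerated mapping: coerce price to float.
    return {"id": raw["id"], "price": float(raw["price"])}

SCHEMA = {"id": int, "price": float}

def run(raw):
    """Apply the mapping; fall back to the repaired one if validation fails."""
    out = transform_v1(raw)
    if not validate(out, SCHEMA):
        out = repaired_transform(raw)
    return out

print(run({"id": 7, "price": "19.99"}))  # {'id': 7, 'price': 19.99}
```

The schema acts as the contract: as long as validation catches a break, the repair step can be re-run without human intervention.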

bolt.diy has released version v0.0.7: better support for thinking models, a better settings UI, and one-click Netlify deploy.

Inception Labs has released Mercury Coder, an interesting new language model that uses diffusion technique for faster text generation: unlike traditional models that create text word by word, diffusion models generate entire responses simultaneously, refining them from an initially masked state to coherent text. (news) — in their chat it works really fast, for HTML it can display results immediately.
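
The difference from left-to-right decoding can be illustrated with a toy unmasking loop: all positions start masked and are revealed in a few parallel refinement steps (a cartoon of the idea, not Mercury's actual sampler):

```python
import random

TARGET = ["the", "quick", "brown", "fox"]  # stand-in for the model's prediction
MASK = "<mask>"

def diffusion_decode(steps=2, seed=0):
    """Reveal the whole sequence in a few parallel refinement steps."""
    rng = random.Random(seed)
    seq = [MASK] * len(TARGET)
    per_step = -(-len(TARGET) // steps)  # ceil division: positions per step
    for _ in range(steps):
        hidden = [i for i, tok in enumerate(seq) if tok == MASK]
        for i in rng.sample(hidden, min(per_step, len(hidden))):
            seq[i] = TARGET[i]  # "denoise" this position
    return seq

print(diffusion_decode())  # ['the', 'quick', 'brown', 'fox']
```

An autoregressive model would need one forward pass per token; here the whole sequence converges in a fixed, small number of steps, which is where the speedup comes from.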

Cursor has added a new OpenAI model gpt-4.5-preview, just to have it, because according to tests, this model is mediocre for programming and more suitable for informal, leisurely vibe conversations. In contrast, the excellent for programming claude-3.7-sonnet model has been in high demand in recent days and wasn't always responsive, but now the situation seems to have stabilized.

Windsurf has also added the claude-3.7-sonnet model in Cascade mode as well as GPT-4.5 in beta.

Their comparison: https://www.youtube.com/watch?v=tztQJ5MKNgs

Trae has added the Claude Sonnet 3.7 model, noting that it may respond slowly.

Zed has added Claude Sonnet 3.7 to Zed AI.