CodeWithLLM-Updates
-

This video shows how to use Tabby as a local replacement for GitHub Copilot for code autocompletion and function generation. The system works without an Internet connection, so your code never leaves your computer.

The demo uses the StarCoder-3B model in VSCode and walks through installing and configuring Tabby via Docker on a machine with an NVIDIA GeForce RTX 3070 GPU.

🛠 Installation and configuration of Tabby via Docker
⚙️ Model selection (StarCoder, CodeLlama, DeepseekCoder)
💡 Running Tabby on a machine with a GPU or CPU, and what must be installed for CUDA to work
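
The Docker-based setup covered in the video essentially comes down to a single command. A sketch based on Tabby's documented Docker usage (flags and model names may differ between releases; GPU mode additionally requires the NVIDIA Container Toolkit so that `--gpus all` works):

```shell
# Run the Tabby server with GPU acceleration.
# Model weights are cached in ~/.tabby on the host.
docker run -it --gpus all \
  -p 8080:8080 \
  -v "$HOME/.tabby:/data" \
  tabbyml/tabby serve \
  --model StarCoder-3B \
  --device cuda

# CPU-only alternative: drop "--gpus all" and "--device cuda".
```

The Tabby extension in VSCode is then pointed at the local server (http://localhost:8080 by default).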

The author finds Tabby more convenient to use than an Ollama server.