Lollms VS Coder is a powerful, AI-powered Visual Studio Code extension that brings a suite of intelligent tools directly into your editor. It leverages any Lollms-compatible API (local or remote) to provide advanced code assistance, autonomous agent capabilities, context-aware chat, inline autocompletion, diagram rendering, and much more.
Now available in English, French, Spanish, German, Arabic, and Chinese (Simplified)! 🌍
In an era where every tool has an "Agent," the difference lies in Autonomy, Vision, and Sovereignty. Lollms VS Coder isn't just a plugin; it's a private command center for autonomous software engineering.
| Feature | Lollms VS Coder | Continue.dev | GitHub Copilot (Pro+) |
|---|---|---|---|
| Autonomy | Full (Architect/Worker Loop) | High (Slash Agents) | High (Copilot Workspace) |
| Compute Liberty | ✅ Hybrid (Local + Remote) | ✅ Flexible | ❌ Cloud Only |
| Model Agnostic | ✅ Absolute (Any binding) | ✅ Flexible | ❌ Locked to MS/OpenAI |
| Digital Sovereignty | ✅ 100% Local/Private | ❌ Cloud Mandatory | ❌ Cloud Mandatory |
| Structural Vision | ✅ Integrated Visual Graphs | ❌ Text/Code Only | ❌ Chat-based diagrams |
| Collective IQ | ✅ Hybrid Herd Mode | ❌ Single Model | ❌ Single Model |
| Project Memory | ✅ Long-term Project Facts | ❌ Session-based | ❌ Session-based |
While other tools treat your project as a giant text file, Lollms builds a Live Architecture Graph. It provides a visual Head-Up Display (HUD) of your function calls and class hierarchies. Both you and the AI "see" the structural impact of changes in real-time, preventing the "spaghetti code" common with blind AI generation.
Don't trust one model? Lollms orchestrates a Cross-Provider Debate. You can have a fast Ollama model handle the boilerplate, a Groq-powered model critique the logic, and a DeepSeek or Claude instance finalize the architecture. This multi-perspective verification is the gold standard for mission-critical code.
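Conceptually, the debate works like a pipeline: draft, critique, finalize. The sketch below illustrates that flow with plain Python callables standing in for model bindings; the function names and call signatures are invented for illustration and are not the extension's actual API.

```python
# Minimal sketch of a cross-provider "herd" debate loop.
# Provider names and signatures are illustrative placeholders only.

from typing import Callable, List

Model = Callable[[str], str]  # here, a "model" is just prompt -> completion

def herd_debate(task: str, drafter: Model, critics: List[Model],
                finalizer: Model) -> str:
    """Draft with one model, critique with others, finalize with a third."""
    draft = drafter(f"Write code for: {task}")
    critiques = [c(f"Critique this draft:\n{draft}") for c in critics]
    prompt = (f"Task: {task}\nDraft:\n{draft}\n"
              + "\n".join(f"Critique: {c}" for c in critiques)
              + "\nProduce the final version.")
    return finalizer(prompt)

# Toy stand-ins for real bindings (Ollama, Groq, Claude, ...):
fast_local = lambda p: "draft-v1"
reviewer = lambda p: "missing error handling"
architect = lambda p: f"final({len(p)} chars of context)"

result = herd_debate("parse a CSV file", fast_local, [reviewer], architect)
```

In the real extension, each callable would be backed by a different provider binding, so the critique genuinely comes from a different model family than the draft.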
Lollms is built on the philosophy of Digital Independence. It doesn't just "support" local models; it was born for them. By design, there is zero telemetry and zero hidden data exfiltration. Whether you are running a 100% air-gapped Ollama instance or connecting to high-performance remote APIs like Groq or Anthropic, your project orchestration remains private and local. This design aligns naturally with European AI Act transparency and data residency requirements.
Lollms gives you the power to optimize for Cost, Speed, or Intelligence.
- Stay Local: Use Llama.cpp or Ollama for total privacy and zero cost.
- Go Remote: Connect to OpenAI, Anthropic, Google Gemini, or Groq for state-of-the-art reasoning.
- Mix & Match: Configure your Herd to use local models for review and remote models for drafting.
Ever notice how AI assistants repeat the same naming or logic mistakes every day? Lollms uses Project Memory to save technical constraints, architectural decisions, and bug-fix patterns permanently within your .lollms folder. It learns your project's unique "DNA" and never forgets it.
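For a sense of what gets remembered, a project-memory entry might look something like this. The file name and schema below are invented for the example; the extension manages the actual contents of the .lollms folder itself.

```json
{
  "constraints": ["Use snake_case for all Python identifiers"],
  "decisions": ["SQLite chosen over Postgres for portability"],
  "bug_patterns": ["Off-by-one in pagination: always test page boundaries"]
}
```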
| Feature | Description |
|---|---|
| 🤖 Autonomous Agent | Give the AI a complex objective, and it will generate and execute a multi-step plan, including creating files, writing code, running commands, and self-correcting. |
| ⚡ Quick Edit Companion | A lightweight, floating window for fast code edits, explanations, or questions without leaving your current context (Ctrl+Shift+L). |
| 🧠 Smart Context | A sidebar file tree lets you precisely control which files and folders the AI can "see." Includes an AI-powered auto-selection tool to find relevant context for any task. |
| 📝 Smart Edits | Apply AI-generated code directly to your files with a single click, supporting both full-file updates and diff patching. |
| 🎭 Personalities | Switch between specialized AI personas like "Python Expert", "Senior Architect", or "Security Reviewer" to tailor the AI's behavior to your current task. |
| 🕵️ Commit Inspector | Analyze git commits for security vulnerabilities, bugs, and code quality issues with a single click. |
| 📓 Jupyter Integration | Enhance your data science workflow with tools to generate, explain, visualize, and fix notebook cells. |
- Install lollms-vs-coder from the Visual Studio Marketplace.
- Open the Lollms sidebar in VS Code (click the Lollms icon in the activity bar).
- In the Actions view, click the Settings item to open the configuration panel.
- Enter your Lollms API Host (e.g., http://localhost:9642) and select your desired model.
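If you prefer editing settings.json directly, the configuration might look something like the fragment below. The setting keys shown here are hypothetical placeholders; use the extension's Settings panel for the actual option names.

```jsonc
{
  // Hypothetical keys for illustration only
  "lollmsVsCoder.apiHost": "http://localhost:9642",
  "lollmsVsCoder.modelName": "my-local-model"
}
```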
The Lollms Chat is your central hub for interacting with the AI.
- Start a Chat: Click the + icon in the Discussions sidebar view.
- Manage Context: Use the AI Context Files view to control what the AI sees:
- ✅ Included: The AI reads the full file content.
- 🌲 Tree-Only: The AI sees the file path but not the content (saves tokens).
- 🚫 Excluded: The file is hidden from the AI.
- Attach Files: Click the paperclip icon or drag & drop images and documents directly into the chat area.
Customize how the AI behaves for each specific discussion by clicking the Discussion Settings (gear icon ⚙️) inside the chat panel.
For complex tasks requiring logic and reasoning, enable Thinking Mode.
- Open Discussion Settings.
- Select a Reasoning Strategy:
- Chain of Thought: Forces the AI to show its step-by-step reasoning.
- Plan and Solve: Creates a plan before executing.
- Self-Critique: The AI checks its own answer for errors before responding.
- No Think: Disables reasoning for faster, direct answers.
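Under the hood, strategies like these typically just shape the prompt that reaches the model. The sketch below shows one plausible way to map a strategy name to a prompt prefix; the strategy names mirror the settings above, but the prompt text and function are invented for illustration.

```python
# Illustrative mapping from a reasoning strategy to a prompt prefix.
# The prefix wording here is an assumption, not the extension's real prompts.

PREFIXES = {
    "chain_of_thought": "Think step by step and show your reasoning.",
    "plan_and_solve": "First write a numbered plan, then execute it.",
    "self_critique": "Answer, then review your answer for errors before replying.",
    "no_think": "",  # direct answer, no extra reasoning tokens
}

def build_prompt(strategy: str, user_message: str) -> str:
    """Prepend the strategy's instruction (if any) to the user's message."""
    prefix = PREFIXES.get(strategy, "")
    return f"{prefix}\n\n{user_message}".strip()

print(build_prompt("plan_and_solve", "Refactor this module."))
```

The trade-off is visible in the table itself: richer prefixes buy more careful answers at the cost of extra generated tokens, which is why No Think exists for quick lookups.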
Toggle Web Search in the settings to allow the AI to browse the internet for up-to-date information (requires Google Custom Search configuration).
When the AI generates code, it provides interactive buttons to apply the changes directly to your project.
If the AI generates a File: path/to/file.ext block:
- Click ✔️ Apply to File.
- A diff view will open, allowing you to review the changes before saving.
If the AI generates a Diff: or patch block:
- Click ✔️ Apply Patch.
- The extension attempts to intelligently apply the diff to the target file.
- Insert: Inserts the code block at your current cursor position in the active editor.
- Replace: Replaces your current selection with the generated code.
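To make the two block types concrete, here is roughly what AI output containing them looks like. The file path and code are invented for the example, and the exact header format the extension expects may differ slightly.

A full-file block:

````markdown
File: src/greet.py
```python
def greet(name: str) -> str:
    return f"Hello, {name}!"
```
````

A patch block, using standard unified-diff notation:

```diff
--- a/src/greet.py
+++ b/src/greet.py
@@ -1,2 +1,2 @@
 def greet(name: str) -> str:
-    return f"Hello, {name}!"
+    return f"Hi, {name}!"
```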
Press Ctrl+Shift+L (or Cmd+Shift+L on Mac) to open the Companion Panel. This is a persistent, floating window designed for rapid iteration.
- Context Aware: Automatically tracks your active editor selection.
- Attach/Detach: Pin the companion to a specific file or selection to keep the context fixed while you navigate other files.
- History: Keeps a local history of your quick interactions.
Lollms VS Coder supercharges your .ipynb notebooks with context-aware AI tools found in the cell toolbar:
- $(book) Educative Notebook: Generates a comprehensive, step-by-step notebook on a topic.
- $(sparkle) Enhance: Refactors and improves the code in the current cell.
- $(wand) Generate Next: Reads the current cell and generates the logical next step.
- $(info) Explain: Adds a markdown cell explaining the logic of the code cell.
- $(graph) Visualize: Generates code to visualize the data in the cell's output.
- $(debug-restart) Fix Error: If a cell execution fails, a "Fix with Lollms" button appears to analyze and fix the error.
When in Agent Mode, the AI can autonomously use tools to complete complex objectives:
| Tool Category | Tools |
|---|---|
| File Operations | read_file, generate_code (create/overwrite), list_files, search_files |
| Execution | execute_command (Shell), execute_python_script |
| Research | search_web (Google), search_arxiv (Papers), scrape_website |
| Python | create_python_environment, install_python_dependencies, set_vscode_python_interpreter |
| Planning | edit_plan (Dynamic self-correction) |
| Context | auto_select_context_files, read_code_graph, request_user_input |
| Creative | generate_image |
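A typical agent loop parses a tool call emitted by the model, runs the matching tool, and feeds the result back. The sketch below uses the tool names from the table, but the JSON call format, the toy tool bodies, and the dispatch logic are illustrative assumptions, not the extension's actual protocol.

```python
# Hedged sketch of dispatching model-emitted tool calls by name.
# Tool names come from the table above; everything else is invented.

import json

def list_files(path="."):   # toy stand-in for the real tool
    return ["main.py", "README.md"]

def read_file(path):        # toy stand-in for the real tool
    return f"<contents of {path}>"

TOOLS = {"list_files": list_files, "read_file": read_file}

def dispatch(tool_call_json: str) -> str:
    """Parse a JSON tool call like {"tool": ..., "args": {...}} and run it."""
    call = json.loads(tool_call_json)
    fn = TOOLS.get(call["tool"])
    if fn is None:
        return f"error: unknown tool {call['tool']}"
    return str(fn(**call.get("args", {})))

print(dispatch('{"tool": "read_file", "args": {"path": "main.py"}}'))
```

In Agent Mode, the result string would be appended to the conversation so the model can decide its next step, which is what enables the multi-step planning and self-correction described above.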
Contributions, issues, and feature requests are welcome! Feel free to check the issues page.
This project is licensed under the Apache-2.0 License - see the LICENSE file for details.