This project implements a very simple copilot-like experience in a terminal-based editor (neovim) using only local LLMs. We are exploring two things here:
- can we create copilot-like experiences using only local LLMs?
- how easily can we add LLM prompts to a terminal-based editor like neovim?
Here's an example of our simple copilot in action, using llama3 running in Ollama.
This is distributed as a standard neovim plugin module. After installing, highlight some text in the buffer and type `<leader>ai` to ask the LLM a question about the highlighted text.
If you're using lazy.nvim, add `docker/labs-nvim-copilot` to your setup:
```lua
require('lazy').setup({
  {
    'docker/labs-nvim-copilot',
    lazy = false,
    dependencies = {
      'Olical/aniseed',
      'nvim-lua/plenary.nvim',
      'hrsh7th/nvim-cmp',
    },
    config = function(plugin, opts)
      require('dockerai').setup({ attach = bufKeymap })
    end,
  },
  {
    'hrsh7th/nvim-cmp',
    dependencies = {
      'hrsh7th/cmp-buffer',
      'hrsh7th/cmp-nvim-lsp',
    },
  },
})
```
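The `config` above references a `bufKeymap` function that isn't defined in this snippet; it's a user-supplied callback the plugin invokes to attach buffer-local keymaps. Here's a minimal sketch, assuming the callback receives a buffer number and using a hypothetical `:DockerAIQuestion` command behind `<leader>ai` (both are assumptions, not documented here):

```lua
-- A minimal sketch of the bufKeymap callback referenced above. Both the
-- callback argument (a buffer number) and the :DockerAIQuestion command
-- name are assumptions for illustration; adjust to the plugin's actual API.
local bufKeymap = function(buf)
  -- Map <leader>ai in visual mode to ask the LLM about the selection.
  vim.keymap.set('v', '<leader>ai', ':DockerAIQuestion<CR>',
    { buffer = buf, desc = 'Ask Docker AI about the highlighted text' })
end
```

Define this above the `require('lazy').setup` call so the `attach = bufKeymap` reference resolves.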
If you have Ollama installed and running, Docker AI will use it. Docker AI will not start Ollama itself; if you want to use it, you'll have to start it separately.
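For example, assuming a standard Ollama install, you can start the server and fetch the llama3 model used in the example above from a separate terminal:

```sh
# Start the Ollama server (keep this running in its own terminal).
ollama serve

# Pull the llama3 model referenced earlier.
ollama pull llama3
```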
The plugin also provides a debugging command:
- `:DockerDebug` - downloads internal representations of the project context for debugging
To build the project, run:

```sh
# docker:command=build
make
```