
Configuring the Chat Buffer

By default, CodeCompanion provides a chat interaction that uses a dedicated Neovim buffer for conversational interaction with your chosen LLM. This buffer can be customized according to your preferences.

Please refer to the config.lua file for a full list of all configuration options.

Diff

CodeCompanion has built-in inline and split diffs available to you. If you use the insert_edit_into_file tool, the plugin can update files and buffers, creating a diff so you can see the changes made by the LLM. The inline diff is the default.

Depending on which provider you choose, there are different configuration options available to you:

lua
require("codecompanion").setup({
  display = {
    diff = {
      enabled = true,
      provider = "inline", -- inline|split|mini.diff
    },
  },
})
lua
require("codecompanion").setup({
  display = {
    diff = {
      provider_opts = {
        inline = {
          layout = "float", -- float|buffer - Where to display the diff
          opts = {
            context_lines = 3, -- Number of context lines in hunks
            dim = 25, -- Background dim level for the floating diff (0-100; 100 is fully transparent, only applies when layout = "float")
            full_width_removed = true, -- Make removed lines span full width
            show_keymap_hints = true, -- Show "gda: accept | gdr: reject" hints above diff
            show_removed = true, -- Show removed lines as virtual text
          },
        },
      },
    },
  },
})
lua
require("codecompanion").setup({
  display = {
    diff = {
      provider_opts = {
        split = {
          close_chat_at = 240, -- Close an open chat buffer if the total columns of your display are less than...
          layout = "vertical", -- vertical|horizontal split
          opts = {
            "internal",
            "filler",
            "closeoff",
            "algorithm:histogram", -- https://adamj.eu/tech/2024/01/18/git-improve-diff-histogram/
            "indent-heuristic", -- https://blog.k-nut.eu/better-git-diffs
            "followwrap",
            "linematch:120",
          },
        },
      },
    },
  },
})
lua
require("codecompanion").setup({
  display = {
    chat = {
      diff_window = {
        ---@type number|fun(): number
        width = function()
          return math.min(120, vim.o.columns - 10)
        end,
        ---@type number|fun(): number
        height = function()
          return vim.o.lines - 4
        end,
        opts = {
          number = true,
        },
      },
    },
  },
})

The keymaps for accepting and rejecting the diff sit within the inline interaction configuration and can be changed via:

lua
require("codecompanion").setup({
  interactions = {
    inline = {
      keymaps = {
        accept_change = {
          modes = { n = "gda" }, -- Remember this as DiffAccept
        },
        reject_change = {
          modes = { n = "gdr" }, -- Remember this as DiffReject
        },
        always_accept = {
          modes = { n = "gdy" }, -- Remember this as DiffYolo
        },
      },
    },
  },
})

Keymaps

NOTE

The plugin scopes CodeCompanion specific keymaps to the chat buffer only.

You can define or override the default keymaps to send messages, regenerate responses, close the buffer, etc. Example:

lua
require("codecompanion").setup({
  interactions = {
    chat = {
      keymaps = {
        send = {
          modes = { n = "<C-s>", i = "<C-s>" },
          opts = {},
        },
        close = {
          modes = { n = "<C-c>", i = "<C-c>" },
          opts = {},
        },
        -- Add further custom keymaps here
      },
    },
  },
})

The keymaps above map <C-s> to send a message and <C-c> to close the chat buffer, in both normal and insert modes. To set other :map-arguments, you can use the optional opts table, which is passed to vim.keymap.set.
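
For instance, the opts table can carry any fields that vim.keymap.set accepts, such as desc and nowait (a minimal sketch, reusing the send keymap from above):

```lua
require("codecompanion").setup({
  interactions = {
    chat = {
      keymaps = {
        send = {
          modes = { n = "<C-s>", i = "<C-s>" },
          -- Passed through to vim.keymap.set() as its opts argument
          opts = { nowait = true, desc = "Send message to the LLM" },
        },
      },
    },
  },
})
```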

Prompt Decorator

It can be useful to decorate your prompt with additional information before sending it to an LLM. For example, the GitHub Copilot prompt in VS Code wraps a user's prompt in <prompt></prompt> tags, presumably to differentiate the user's request from additional context. This can also be achieved in CodeCompanion:

lua
require("codecompanion").setup({
  interactions = {
    chat = {
      opts = {
        ---Decorate the user message before it's sent to the LLM
        ---@param message string
        ---@param adapter CodeCompanion.Adapter
        ---@param context table
        ---@return string
        prompt_decorator = function(message, adapter, context)
          return string.format([[<prompt>%s</prompt>]], message)
        end,
      }
    }
  }
})

The decorator function also has access to the adapter in the chat buffer alongside the context table (which refreshes when a user toggles the chat buffer).
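As a sketch, a decorator could use that context to tag the prompt with the buffer's filetype (the context field used here, filetype, is an assumption; check the context table your version provides):

```lua
require("codecompanion").setup({
  interactions = {
    chat = {
      opts = {
        ---Wrap the prompt and record the filetype of the buffer the chat
        ---was opened from (assumes `context.filetype` is populated)
        prompt_decorator = function(message, adapter, context)
          local ft = context.filetype or "text"
          return string.format("<prompt lang=%q>%s</prompt>", ft, message)
        end,
      },
    },
  },
})
```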

Slash Commands

IMPORTANT

Each slash command may have its own unique configuration, so be sure to check out the config.lua file.

Slash Commands (invoked with /) let you dynamically insert context into the chat buffer, such as file contents or date/time.

The plugin supports providers like telescope, mini_pick, fzf_lua and snacks.nvim. By default, the plugin automatically detects whether one of those plugins is installed and, if so, sets it as the default provider. Failing that, the built-in default provider is used. Please see the Chat Buffer usage section for information on how to use Slash Commands.

lua
require("codecompanion").setup({
  interactions = {
    chat = {
      slash_commands = {
        ["file"] = {
          -- Use Telescope as the provider for the /file command
          opts = {
            provider = "telescope", -- Can be "default", "telescope", "fzf_lua", "mini_pick" or "snacks"
          },
        },
      },
    },
  },
})
lua
require("codecompanion").setup({
  interactions = {
    chat = {
      slash_commands = {
        ["file"] = {
          keymaps = {
            modes = {
              i = "<C-f>",
              n = { "<C-f>", "gf" },
            },
          },
        },
      },
    },
  },
})
lua
require("codecompanion").setup({
  interactions = {
    chat = {
      slash_commands = {
        ["image"] = {
          ---@param opts { adapter: CodeCompanion.HTTPAdapter }
          ---@return boolean
          enabled = function(opts)
            return opts.adapter.opts and opts.adapter.opts.vision == true
          end,
        },
      },
    },
  },
})
lua
require("codecompanion").setup({
  interactions = {
    chat = {
      slash_commands = {
        ["git_files"] = {
          description = "List git files",
          ---@param chat CodeCompanion.Chat
          callback = function(chat)
            local handle = io.popen("git ls-files")
            if handle ~= nil then
              local result = handle:read("*a")
              handle:close()
              chat:add_context({ role = "user", content = result }, "git", "<git_files>")
            else
              return vim.notify("No git files available", vim.log.levels.INFO, { title = "CodeCompanion" })
            end
          end,
          opts = {
            contains_code = false,
          },
        },
      },
    },
  },
})

Credit to @lazymaniac for the inspiration for the custom slash command example.

Tools

Tools perform specific tasks (e.g., running shell commands, editing buffers, etc.) when invoked by an LLM. Multiple tools can be grouped together. Both can be referenced with @ when in the chat buffer:

lua
require("codecompanion").setup({
  interactions = {
    chat = {
      tools = {
        ["my_tool"] = {
          description = "Run a custom task",
          callback = require("user.codecompanion.tools.my_tool")
        },
        groups = {
          ["my_group"] = {
            description = "A custom agent combining tools",
            system_prompt = "Describe what the agent should do",
            tools = {
              "cmd_runner",
              "insert_edit_into_file",
              -- Add your own tools or reuse existing ones
            },
            opts = {
              collapse_tools = true, -- When true, show as a single group reference instead of individual tools
            },
          },
        },
      },
    },
  },
})

When you reference the group my_group in the chat buffer, the LLM can call the tools you listed (such as cmd_runner) to perform tasks on your code.

A tool is a CodeCompanion.Tool table with specific keys that define the tool's interface and workflow. The table is resolved via the callback option, which can be the table itself, a function that returns the table, or a string pointing to a Lua file that returns the table.
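
As a rough sketch, a Lua file referenced by callback might return a table along these lines (the keys shown here, such as cmds and schema, are illustrative; consult the built-in tools and the config.lua file for the definitive interface):

```lua
---A hypothetical custom tool; field names are illustrative only
return {
  name = "my_tool",
  cmds = {
    -- Each function receives the tool instance and the LLM's arguments
    function(self, args, input)
      return { status = "success", data = "Ran task: " .. tostring(args.task) }
    end,
  },
  -- A function-calling style schema describing the tool to the LLM
  schema = {
    type = "function",
    ["function"] = {
      name = "my_tool",
      description = "Run a custom task",
      parameters = {
        type = "object",
        properties = {
          task = { type = "string", description = "The task to run" },
        },
        required = { "task" },
      },
    },
  },
}
```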

Enabling Tools

Tools can be conditionally enabled using the enabled option. This works for built-in tools as well as an adapter's own tools. This is useful to ensure that a particular dependency is installed on the machine. You can use the :CodeCompanionChat RefreshCache command if you've installed a new dependency and want to refresh the tool availability in the chat buffer.

lua
require("codecompanion").setup({
  interactions = {
    chat = {
      tools = {
        ["grep_search"] = {
          ---@param adapter CodeCompanion.HTTPAdapter
          ---@return boolean
          enabled = function(adapter)
            return vim.fn.executable("rg") == 1
          end,
        },
      }
    }
  }
})
lua
require("codecompanion").setup({
  adapters = {
    http = {
      openai_responses = function()
        return require("codecompanion.adapters").extend("openai_responses", {
          available_tools = {
            ["web_search"] = {
              ---@param adapter CodeCompanion.HTTPAdapter
              enabled = function(adapter)
                return false
              end,
            },
          },
        })
      end,
    },
  },
})

Approvals

Some tools, such as cmd_runner, require the user to approve any commands before they're executed. This can be changed by altering the config for each tool:

lua
require("codecompanion").setup({
  interactions = {
    chat = {
      tools = {
        ["cmd_runner"] = {
          opts = {
            require_approval_before = false,
          },
        },
      }
    }
  }
})

You can also force any tool to require your approval by setting opts.require_approval_before = true.
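
For example, to have the plugin ask for approval before a tool runs (using insert_edit_into_file purely as an illustration):

```lua
require("codecompanion").setup({
  interactions = {
    chat = {
      tools = {
        ["insert_edit_into_file"] = {
          opts = {
            -- Prompt for approval before this tool makes any changes
            require_approval_before = true,
          },
        },
      },
    },
  },
})
```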

Auto Submit (Recursion)

When a tool executes, it can be useful to automatically send its output back to the LLM. This can be achieved with the following options in your configuration:

lua
require("codecompanion").setup({
  interactions = {
    chat = {
      tools = {
        opts = {
          auto_submit_errors = true, -- Send any errors to the LLM automatically?
          auto_submit_success = true, -- Send any successful output to the LLM automatically?
        },
      }
    }
  }
})

Default Tools

You can configure the plugin to automatically add tools and tool groups to new chat buffers:

lua
require("codecompanion").setup({
  interactions = {
    chat = {
      tools = {
        opts = {
          default_tools = {
            "my_tool",
            "my_tool_group"
          }
        },
      }
    }
  }
})

This also works for extensions.

User Interface (UI)

NOTE

The other plugins section contains installation instructions for some popular markdown rendering plugins

Auto Scrolling

By default, the chat buffer scrolls down automatically as the response streams in, with the cursor placed at the end. This can be distracting if you're focused on earlier content while a long response pushes it out of view. You can disable this behavior with a flag:

lua
require("codecompanion").setup({
  display = {
    chat = {
      auto_scroll = false,
    },
  },
})

TIP

If you move your cursor while the LLM is streaming a response, auto-scrolling will be turned off.

Completion

By default, CodeCompanion uses the fantastic blink.cmp plugin, if available, to complete variables, slash commands and tools. However, you can override this in your config:

lua
require("codecompanion").setup({
  interactions = {
    chat = {
      opts = {
        completion_provider = "cmp", -- blink|cmp|coc|default
      }
    }
  }
})

The plugin also supports nvim-cmp, a native completion solution (default), and coc.nvim.

Context

It's not uncommon for users to share many items as context with an LLM. This can significantly impact the chat buffer's UI, leaving a large space between the LLM's last response and the user's input. To minimize this impact, the context can be folded:

lua
require("codecompanion").setup({
  display = {
    chat = {
      icons = {
        chat_context = "📎️", -- You can also apply an icon to the fold
      },
      fold_context = true,
    },
  },
})

Layout

The plugin leverages floating windows to display content to a user in a variety of scenarios, such as with the Super Diff, debug window or agent permissions. You can change the appearance of the chat buffer by changing the display.chat.window table in your configuration.

lua
require("codecompanion").setup({
  display = {
    chat = {
      -- Change the default icons
      icons = {
        buffer_sync_all = "󰪴 ",
        buffer_sync_diff = " ",
        chat_context = " ",
        chat_fold = " ",
        tool_pending = "  ",
        tool_in_progress = "  ",
        tool_failure = "  ",
        tool_success = "  ",
      },
    },
  },
})
lua
require("codecompanion").setup({
  display = {
    chat = {
      window = {
        buflisted = false, -- List the chat buffer in the buffer list?
        sticky = false, -- Chat buffer remains open when switching tabs

        layout = "vertical", -- float|vertical|horizontal|buffer
        full_height = true, -- for vertical layout
        position = nil, -- left|right|top|bottom (nil will default depending on vim.opt.splitright|vim.opt.splitbelow)

        width = 0.5, ---@type number|"auto" using "auto" will allow full_height buffers to act like normal buffers
        height = 0.8,
        border = "single",
        relative = "editor",

        -- Ensure that long paragraphs of markdown are wrapped
        opts = {
          breakindent = true,
          linebreak = true,
          wrap = true,
        },
      },
    },
  },
})
lua
require("codecompanion").setup({
  display = {
    chat = {
      -- Alter the sizing of the debug window
      debug_window = {
        ---@type number|fun(): number
        width = vim.o.columns - 5,
        ---@type number|fun(): number
        height = vim.o.lines - 2,
      },
    },
  },
})
lua
require("codecompanion").setup({
  display = {
    chat = {
      floating_window = {
        ---@type number|fun(): number
        width = function()
          return vim.o.columns - 5
        end,
        ---@type number|fun(): number
        height = function()
          return vim.o.lines - 2
        end,
        row = "center",
        col = "center",
        relative = "editor",
        opts = {
          wrap = false,
          number = false,
          relativenumber = false,
        },
      },
    },
  },
})

Reasoning

An adapter's reasoning is streamed into the chat buffer under an H3 heading and folded once streaming has completed. You can turn off folding or hide the reasoning output altogether:

lua
require("codecompanion").setup({
  display = {
    chat = {
      icons = {
        chat_fold = " ",
      },
      fold_reasoning = false,
      show_reasoning = false,
    },
  },
})

Roles

The chat buffer places user and LLM responses under an H2 header. These can be customized in the configuration:

lua
require("codecompanion").setup({
  interactions = {
    chat = {
      roles = {
        ---The header name for the LLM's messages
        ---@type string|fun(adapter: CodeCompanion.Adapter): string
        llm = function(adapter)
          return "CodeCompanion (" .. adapter.formatted_name .. ")"
        end,

        ---The header name for your messages
        ---@type string
        user = "Me",
      }
    }
  }
})

By default, the LLM's responses are placed under a header such as CodeCompanion (DeepSeek), leveraging the current adapter in the chat buffer. This option can be a string or a function that returns a string. If you opt for a function, its first parameter will always be the adapter from the chat buffer.

The user role is currently only available as a string.

Others

There are also a number of other options that you can customize in the UI:

lua
require("codecompanion").setup({
  display = {
    chat = {
      intro_message = "Welcome to CodeCompanion ✨! Press ? for options",
      separator = "─", -- The separator between the different messages in the chat buffer
      show_context = true, -- Show context (from slash commands and variables) in the chat buffer?
      show_header_separator = false, -- Show header separators in the chat buffer? Set this to false if you're using an external markdown formatting plugin
      show_settings = false, -- Show LLM settings at the top of the chat buffer?
      show_token_count = true, -- Show the token count for each response?
      show_tools_processing = true, -- Show the loading message when tools are being executed?
      start_in_insert_mode = false, -- Open the chat buffer in insert mode?
    },
  },
})

Variables

Variables are placeholders inserted into the chat buffer (using #). They provide contextual code or information about the current Neovim state. For instance, the built-in #buffer variable sends the current buffer’s contents to the LLM.

You can even define your own variables to share specific content:

lua
require("codecompanion").setup({
  interactions = {
    chat = {
      variables = {
        ["my_var"] = {
          ---Ensure the file matches the CodeCompanion.Variable class
          ---@type string|fun(): string
          callback = "/Users/Oli/Code/my_var.lua",
          description = "Explain what my_var does",
          opts = {
            contains_code = false,
            --has_params = true,    -- Set this if your variable supports parameters
            --default_params = nil, -- Set default parameters
          },
        },
      },
    },
  },
})
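
The callback can also be a function. As a sketch, assuming a function callback that returns a string is supported (as the annotation above suggests), a variable sharing the current working directory might look like:

```lua
require("codecompanion").setup({
  interactions = {
    chat = {
      variables = {
        ["cwd"] = {
          ---Return the content to share with the LLM
          callback = function()
            return "The current working directory is: " .. vim.fn.getcwd()
          end,
          description = "Share the current working directory",
          opts = {
            contains_code = false,
          },
        },
      },
    },
  },
})
```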

Syncing

Neovim buffers can be synced with the chat buffer. That is, on each turn their content can be shared with the LLM. This is useful if you're modifying a buffer and want the LLM to always have the latest changes.

To enable this by default for the built-in #buffer variable, you can set the default_params option to either diff or all:

lua
require("codecompanion").setup({
  interactions = {
    chat = {
      variables = {
        ["buffer"] = {
          opts = {
            -- Always sync the buffer by sharing its "diff"
            -- Or choose "all" to share the entire buffer
            default_params = "diff",
          },
        },
      },
    },
  },
})

Released under the MIT License.