The example configuration is Vimscript, with a heredoc containing some
Lua code. Using "lua" as the language identifier results in neither the
Vimscript portion nor the Lua portion being highlighted properly.
Mark the code block as "vim" instead. Neovim (with treesitter) properly
highlights the outer block as Vimscript, and the Lua heredoc as Lua
code.
* fix: don't blow up when `nvim_buf_get_lines()` returns Blobs
Some LSP servers may return binary garbage and `nvim_buf_get_lines()`
will return a `Blob` instead of a `String` in those cases.
I added some `print(vim.inspect())` debugging in
`entry.get_documentation()` to prove that by the time the text passes
through there, it's already garbage.
Here's an excerpt from a sample line returned by `nvim_buf_get_lines()`,
as rendered by `vim.inspect()`:
default\0\0\0! vim.opt.background = 'dark'\0\0\0000
(etc)
Now, this looks like an LSP bug to me, but I think we shouldn't allow
buggy LSP output to crash nvim-cmp. "Be conservative in what you send,
be liberal in what you accept" and all that.
So, degrade gracefully by coercing any `Blob` we see into a `String`
before passing it to `strdisplaywidth()`.
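A minimal sketch of the defensive coercion, with a hypothetical helper name (the exact upstream change may differ):

```lua
-- Hypothetical helper: anything that is not already a string (e.g. a Blob
-- returned by nvim_buf_get_lines()) is coerced before width measurement,
-- so buggy LSP output cannot crash the plugin.
local function coerce_to_string(value)
  if type(value) == 'string' then
    return value
  end
  -- tostring() is a conservative fallback that never errors.
  return tostring(value)
end

-- In the real code path the result would then be passed on, roughly:
--   vim.fn.strdisplaywidth(coerce_to_string(line))
```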
Closes: https://github.com/hrsh7th/nvim-cmp/issues/820
* add comment
---------
Co-authored-by: hrsh7th <629908+hrsh7th@users.noreply.github.com>
* perf: avoid creating closure in cache.ensure and drop some cached getters
This mainly addresses the perf issue caused by a large number of calls to
`entry.new`. Previously, every `cache.ensure` call in that code path
created an anonymous function, and it seems that LuaJIT just could not
inline it. Function creation is not expensive in LuaJIT, but the overhead
is noticeable if every `cache.ensure` call creates a new function.
The first improvement is to consolidate the cache callback and attach it
to the metatable of `entry`. This ensures that every created entry
instance shares the same cache callback and no new functions are
frequently created, reducing RAM usage and GC overhead.
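A simplified, hypothetical model of the change (not the exact nvim-cmp internals; it assumes `cache.ensure` forwards extra arguments to the callback on a miss):

```lua
-- Before (sketch): every call allocates a fresh closure for cache.ensure,
-- which LuaJIT cannot inline and which adds GC pressure per call.
local function get_offset_before(entry)
  return entry.cache:ensure('offset', function()
    return entry.compute_offset(entry)  -- hypothetical computation
  end)
end

-- After (sketch): one shared, module-level callback is reused by every
-- entry instance, so no closure is created on the hot path.
local function offset_cb(entry)
  return entry.compute_offset(entry)  -- hypothetical computation
end

local function get_offset_after(entry)
  return entry.cache:ensure('offset', offset_cb, entry)
end
```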
To improve it further, some frequently accessed fields of entry, like
`completion_item` and `offset`, are refactored to use simple table access
instead of the getter pattern. The current cached getter is implemented
using `cache.ensure`, which introduces two more levels of function calls
on each access: `cache.key` and `cache.get`. That overhead is acceptable
for small lists but noticeable when the number of entries is large: a
simple `completion_item` field access costs 4 function calls per item.
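The access-pattern change can be sketched as follows (hypothetical names; the real refactor computes the values once at construction time):

```lua
-- Before (sketch): each read goes through the cached-getter machinery,
-- i.e. entry:get_offset() -> cache.ensure -> cache.key -> cache.get.
-- local offset = entry:get_offset()

-- After (sketch): the value is stored as a plain field when the entry is
-- built, so each read is a single table lookup with no function calls.
-- local offset = entry.offset
```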
All of the changes in this commit are just constant-factor
optimizations, but the difference is huge when tested with a language
server that provides a large number of entries, like tailwindcss.
* perf: delay fuzzy match on displayed vim item
`entry.get_vim_item` is a very heavy call, especially when the user does
complex things in item formatting. Delay the call until the window is
displayed, so that `performance.max_view_entries` is applied to it.
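The idea can be sketched like this (hypothetical names and access paths, not the actual nvim-cmp rendering code):

```lua
-- Sketch: instead of formatting every entry eagerly, only run the heavy
-- user-facing formatting for the slice that will actually be rendered.
local max = config.performance.max_view_entries  -- hypothetical access path
for i = 1, math.min(#entries, max) do
  local e = entries[i]
  -- get_vim_item (which invokes user formatting) now runs only for
  -- entries that end up visible in the completion window.
  local vim_item = e:get_vim_item(e.offset)
  render(vim_item)  -- hypothetical rendering step
end
```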
* remove unneeded fill_defaults
* update gha
---------
Co-authored-by: hrsh7th <629908+hrsh7th@users.noreply.github.com>
* fix(feedkeys): resolve issue with some copilot completions
* fix(feedkey): further adjustments
* fix: missed flag from testing
* fix(feedkeys): error handle and make tests pass
- correct view.follow_cursor to view.entries.follow_cursor
- mention that view.entries.follow_cursor is custom view only
- add missing view.entries.selection_order option
- mention the docs class in list of classes nested under view class
* feat: add option for custom entry view to follow cursor
Creates an option to allow the custom entries
view to follow the user's cursor as they type.
To enable, set
```lua
require("cmp").setup({
  view = {
    entries = {
      follow_cursor = true
    }
  }
})
```
Original source at 7569056388
Closes #1660
Co-authored-by: lvimuser <109605931+lvimuser@users.noreply.github.com>
* doc: add view.follow_cursor option to docs
---------
Co-authored-by: lvimuser <109605931+lvimuser@users.noreply.github.com>