Do not resolve during annotate, enrich documentation with details (Fix #3201) (Fix #3905) #4625
base: master
Conversation
The implementation in this PR relies on the auto-documentation being automatically triggered. I would like to avoid that and always have the candidate resolved as it's displayed. The annotation update is called for displayed candidates, so it is a good place to trigger resolving asynchronously. Also, we can configure the client's capability so that there are no partial completion item responses at all. The reason we allow partial completion item responses is to make the completion list return as quickly as possible, without unnecessary and/or large item property strings; the other properties can be retrieved later with completionItem/resolve. Please see #4591 for the issue with completion items not being resolved without the document's update as well. To avoid the width suddenly changing, I think the user can disable showing the detail annotation.
This is a good feature. I agree we should do this.
I think we are beginning to see that this one-size-fits-all approach doesn't hold up.
The documentation is already resolved synchronously in HEAD; I didn't change this, I only changed detail resolution. Detail should be resolved when it's needed, which is largely determined by the server.
The problem is exactly that the first time the candidates are displayed, they may not be resolved. There is also no guarantee they will be resolved the next time they are displayed, or the third time; they will be resolved whenever the server feels like sending back a response, because of asynchronicity. So what ends up happening is the annotation appears in the completion popup erratically.
Nobody is arguing with that. For languages where it makes sense, like TypeScript or Python, the language servers often do not send down detail in the initial response.
Fine, but they should not affect how the completion candidate list is displayed, only how text is inserted or replaced and how documentation is displayed. Resolving for insertion, replacement, indentation etc. is already done in the exit function. If you want to speed up insertion in case resolution in the exit function is slow, you can trigger the resolution earlier there.
This is crazy. Are you suggesting that every user should adjust this defcustom buffer-local in mode hooks as opposed to simply shipping with a default behavior that makes sense for the vast majority if not all cases?
CAPF is pull-based. How do you "trigger a refresh" of all the completion frontends, now and in the future? Also, what does it have to do with VS Code?
The spec from LSP says that some completion item properties may be deferred to completionItem/resolve. So, from 3.16, instead of the default, the client can advertise resolveSupport for the properties it is able to resolve lazily.
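For reference, here is a sketch of the client capability fragment added in LSP 3.16, in the spec's own TypeScript notation. The interface name is shortened for illustration; the property list shown is what a client might plausibly advertise, not a quote from lsp-mode.

```typescript
// Fragment of CompletionClientCapabilities (LSP 3.16+): the client tells the
// server which CompletionItem properties it can fill in lazily via
// completionItem/resolve, so the initial response can omit them.
interface CompletionItemCapabilities {
  resolveSupport?: {
    /** The properties that a client can resolve lazily. */
    properties: string[];
  };
}

const completionItem: CompletionItemCapabilities = {
  resolveSupport: {
    // Illustrative list: defer the large strings, keep labels/edits eager.
    properties: ["documentation", "detail", "additionalTextEdits"],
  },
};

console.log(completionItem.resolveSupport?.properties.join(", "));
// documentation, detail, additionalTextEdits
```

A server seeing this capability may legitimately omit these three fields from the textDocument/completion response, which is exactly the partial-response situation discussed in this thread.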
I think as long as we provide enough customization for the user, it would be okay, as there's no one-size-fits-all solution. The default should be as close to the VS Code behavior as possible. So, if VS Code doesn't do the candidate annotation (which Emacs does), then we should configure the defaults accordingly.
I'm not sure, but the behavior of the documentation popup can be argued to be just as erratic, since it suddenly appears. It's also blocking: the user will experience a hang if the server is slow to return the result, unlike async resolution.
I think your main argument is that we shouldn't treat the resolved completion item and the original completion item as the same entity, and should always use the original completion item even if it's lacking information. My counterargument is that they're the same, and we should use the latest information if possible, because it provides more information to the user. Btw, here is an example behavior with a placeholder for the annotation string. The code change:

(defun lsp-completion--annotate (item)
"Annotate ITEM detail."
(-let* (((&plist 'lsp-completion-item completion-item
'lsp-completion-resolved resolved)
(text-properties-at 0 item))
((&CompletionItem :detail? :kind? :label-details?) completion-item))
(lsp-completion--resolve-async item #'ignore)
(concat (when lsp-completion-show-detail
(if resolved
(when detail? (concat " " (s-replace-regexp "\r" "" detail?)))
" <loading...>"))
(when (and lsp-completion-show-label-description label-details?)
(when-let* ((description (and label-details? (lsp:label-details-description label-details?))))
(format " %s" description)))
(when lsp-completion-show-kind
(when-let* ((kind-name (and kind? (aref lsp-completion--item-kind kind?))))
        (format " (%s)" kind-name))))))
Ok, this PR works just as well. When the initial partial completion item has no detail, then after resolution the detail will be prepended to the documentation.
So load them when you need them; I don't know why we keep circling back to this. This PR has nothing to do with these other lazily resolved properties. It's already done in the exit function.
I believe the central issue here is that ts-ls doesn't always return detail in the response to textDocument/completion.
I don't understand this sentence. Can you rephrase? The response to
Yes, that's why. If spamming the server is a problem, these completion frontends should implement debouncing with a timer.
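The debouncing idea can be sketched as follows (a hypothetical helper, not part of lsp-mode or any frontend; the timer is replaced by an explicit flush so the example stays deterministic): a burst of selection changes coalesces into a single resolve request once the user pauses.

```typescript
// Minimal coalescing sketch: each selection change only marks work pending;
// the request fires once, when the pause (here: an explicit flush) occurs.
function makeCoalescer(fn: () => void) {
  let pending = false;
  return {
    call() { pending = true; },                          // e.g. each M-n/M-p
    flush() { if (pending) { pending = false; fn(); } }, // e.g. timer expiry
  };
}

let resolveCount = 0;
const resolver = makeCoalescer(() => { resolveCount += 1; });

for (let i = 0; i < 10; i++) resolver.call(); // burst of ten selection changes
resolver.flush();                             // user pauses: one request fires
console.log(resolveCount); // 1
```

In a real frontend the flush would be driven by a short idle timer, which is exactly what company-quickhelp, company-box and corfu-popupinfo already do for documentation.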
Yes, which is already handled when corfu-popupinfo/corfu-info/company-quickhelp etc. call lsp-completion--get-documentation.
I beg you, please don't even try this. I'm working on corfu-pixel-perfect; it does have the ability to refresh, but it's a little complicated for vanilla corfu. I think company-box has this ability as well, but I'm not sure about company. Basically, don't do this, as it is highly dependent on third-party packages. You don't need it, and the outcome is undesirable: the width will either erratically expand, or the candidate line is squished and truncated in all sorts of ways.
I don't think this is VS Code's behavior...
(force-pushed from e30d4dd to 5bc2096)
Ok, here's more information. It turns out VS Code remembers the last value of ^SPC (Show more or less), and the way to change it is hidden in a hint in the status bar, which is off by default. When "Show More" is active, the detail is prepended to the documentation. When "Show Less" is active, the detail is rendered on the popup menu on selection if it is not in the response from textDocument/completion. In addition, if a completion item has no detail from textDocument/completion, it is resolved and shown on selection. In order to accomplish this in lsp-mode, we will need to cooperate with completion frontends; I guess this is where your idea of "refresh" comes in. What we can do is keep the separation of unresolved and resolved completion items as done in this PR, not resolve async when the annotation function is called, and instead have the completion frontends call the resolve function themselves when needed.
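The "prepend detail to documentation" behavior described above can be sketched like this (TypeScript for illustration; the helper name is ours, not VS Code's actual implementation): the signature is prepended only when the server hasn't already included it in the documentation text.

```typescript
// Prepend the resolved `detail` to the documentation, unless the server has
// already embedded it there (some servers do, some don't).
function mergeDetailIntoDocs(detail: string | undefined, docs: string): string {
  if (detail !== undefined && detail !== "" && !docs.includes(detail)) {
    return detail + "\n\n" + docs;
  }
  return docs;
}

// Detail absent from the docs: it gets prepended.
const merged = mergeDetailIntoDocs(
  "(x: unknown) => boolean",
  "Checks whether the value is an object."
);
console.log(merged.split("\n")[0]); // (x: unknown) => boolean

// Detail already present in the docs: they come back unchanged.
console.log(mergeDetailIntoDocs("isObject", "isObject checks the value."));
```

The substring check is the crude part of the sketch; a real client would likely compare against the first code block of the markdown documentation instead.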
(force-pushed from 1111c94 to b305fdd)
More reasons to separate the unresolved and resolved completion items: the details for the same label can be different between the textDocument/completion and completionItem/resolve responses. For example, the textDocument/completion response:

{
"data": {
"cacheId": 964
},
"detail": "@nestjs/common/utils/shared.utils",
"kind": 6,
"label": "isObject",
"sortText": "�16",
"textEdit": {
"insert": {
"end": {
"character": 3,
"line": 2
},
"start": {
"character": 0,
"line": 2
}
},
"newText": "isObject",
"replace": {
"end": {
"character": 3,
"line": 2
},
"start": {
"character": 0,
"line": 2
}
}
}
}
(force-pushed from b072fe5 to 28cb228)
@dgutov, moving the slightly off-topic convo from #4591 (comment) to here. This is what's happening to company using lsp-mode since #4610. The problem is, unlike corfu, company doesn't make a copy of the candidate strings before refreshing. Since #4610, any call to the annotation function will stealthily async-resolve the completion item in the background, so if the same string references are reused while refreshing, the cached item (and thus the annotation) mutates underneath company.
Ouch, that's not great. Does that happen only with some language servers, e.g. the Rust one?
If we copy the strings, then I guess that would mean dropping the
Do both LSP clients retain the full information in the text properties? If there was at least some indirection involved (e.g. a hash table to do a lookup), the refresher callback could replace the contents of said hash table instead.
Theoretically this can happen with any language server. There's no guarantee the resolved item matches the initial one.
TBH, if you are comparing strings with eq...
If by both you mean lsp-mode and eglot, the answer is yes: they both store the partial completion item from textDocument/completion in the text properties.
This is the naive solution that everybody keeps coming up with, and it leads to the exact problem I want to solve in this PR. The culprit is not how caching the completion item data is achieved; the problem is that the "refresher callback" (I guess you mean the resolution) should not replace the partial cache. Eglot conveniently sidesteps this problem by not resolving in the annotation function. This PR will solve the problem described in this comment fundamentally, and you won't need to do anything about it. I'm just letting you know that's what's happening to company now, and the way it is implemented has inadvertently triggered N+1 requests when constructing the candidate list for the popup. And when this PR is merged, you can use the now-public resolve function.
I think I got your point now: the detail can change during the item resolution process, so it's better to keep the unresolved detail (if it exists) for the candidate. So, if the unresolved item has no detail before resolving, it should use the resolved item's detail for display instead. I think that would solve the issue for both RA and ts-ls. I still believe we should treat the resolved item as the completed version of the unresolved item. So only items that are displayed immediately (like the visible ones) would need this.
One thing I can think of with stealth resolution is that the server may not like being spammed with completion item resolve requests. But I haven't observed any server like that so far. The communication between an LSP client and server can be chatty, so I would expect the server to be able to handle that gracefully.
It's a little subtler than that. The when, where, and how of displaying the resolved detail matters. This PR only displays the resolved detail when the user requests documentation, just like VS Code does.
Yes, that's exactly what this PR does.
I feel like we are going in circles. That sync resolve call in the exit function is there for a reason. If the reason you put that async resolve call in the annotation function is to achieve some kind of "prefetch", all I can tell you is the only good opportunity to do a "prefetch" is immediately after receiving the response of textDocument/completion. But if you do that, you'll be issuing N+1 requests and probably throwing 99% of the responses away on every keystroke. This is exceedingly wasteful for both the server and Emacs, for practically no benefit. The user doesn't need the resolved data if he hasn't asked for it. Stop trying to second-guess the user. There's no good way to know when a user needs what data until he tells you explicitly by performing some UI interaction. Don't solve problems that don't exist; don't optimize for things that nobody asked for.
What's the relevance of this sentence to the issues discussed here? The whole reason completionItem/resolve exists is to keep the initial response small.
Have you tried this with jdtls? I can guarantee you it'll crash in seconds. It can barely handle all the textDocument/hover and textDocument/codeAction calls. ts-ls often chokes as well.
There's no guarantee that that will be fast. The function you mention is different from
This is not done on every keystroke; it's only done when you have a new completion set.
I've tried with ts-ls and noticed no difference so far. If you have a repo to share that encounters this issue, I would like to try.
Another thing to add: the async request to resolve the completion item is done off the hot path, without blocking the user, and not while the user is waiting for new completion items.
Well, everything in LSP is best effort. If you need to keep Emacs responsive, change it to
You will get a completely new set on every keystroke if the server does not support
Just try editing the typescript-language-server repo itself with lsp-mode master, turn on company and company-quickhelp, and use ts-ls for TypeScript files. Pick a file with at least a couple hundred lines. Type "Obj", backspace 3 times, "Arra", backspace, M-n M-p a couple of times; just simulate a burst of editing for a couple of seconds. Then look at the log. With this PR, lsp-mode doesn't spam the server anymore. Every completionItem/resolve request takes like 3-5 ms; occasionally you get a 30+ ms response, and that's about it.
Ah, no. Did you not see what that async resolve did to company? That's a page of completionItem/resolve requests per textDocument/completion request. Even with corfu there are still N+1 requests; you just don't see the effect because corfu makes a copy of the candidate strings before rendering into the popup, but the requests were still blasted out in the background. What exactly are you trying to achieve with async resolve in the annotation function? You never answered this question. The important things like insertText and textEdit are not in resolveSupport, so you don't need to resolve on every completion insert. Most of the time the only things you want to resolve are the detail and documentation, and what's wrong with blocking in lsp-completion--get-documentation? company-quickhelp, company-box and corfu-popupinfo all use a timer, so it's not like a blocking call to the server is made on every M-n/M-p. When the user stops at a candidate for some delay, he probably really wants to see the documentation and is willing to wait for it, so blocking is exactly the right thing to do. There's no need to prefetch.
So... would the solution be to use one or the other for resolving annotations? Sorry, I don't have the full context right now.
Okay, but what I see on the first gif is completions being annotated with a wrong string, in bulk. Does that happen because the same strings are used in some other place? Setting aside the "incorrectness" of having non-owned strings altered like that, which other feature could require such bulk requesting, rather than per-item resolution?
We're talking about comparing identical string references with eq.
Thanks for confirming.
Aside from its use in Company, you mean.
It might be fine, though? If the resolution request is fast enough to be done 10 times in a row, that is. Anyway...
Thanks! [Hopefully N was closer to the length of the popup than the length of the whole completions list.] So the problem is fixable without additional fixes in the frontend, do I get that right? That's good news.
This does look pretty useful, especially since the main target of this feature probably was the configuration where the documentation popup is disabled (VS Code has a shortcut for toggling that). Using an lsp-mode function directly from Company (or other frontends) doesn't seem advisable, but there are possible ways to have it passed indirectly. First of all, using one of the existing extra CAPF properties. Or, if that has problems, some other prop-function could be added after we choose a name and description. Async or not.
To rewind a bit: I meant to list the possible methods that would result in "detail" being rendered on every line of the popup, which seemed to me to be @kiennq's UI preference, if I'm recalling right from his other comments somewhere. I think it is a valid preference, just a difficult one to implement language-server-agnostically, given the current state of affairs. Using the echo area, as you mention, is functionally equivalent to printing it on the selected line, as far as the current discussion goes.
@dgutov To rewind a bit, I meant using the annotation function. This means, if we are to agree that we should prefer to implement, or allow for the ability to implement, VS Code's behavior, which I think we do, the loading text on every line of the popup is unnecessary, as we clearly do not want eager resolution, async or otherwise, to occur by default. I do, however, recognize that VS Code seems to have some secret sauce (the exact logic is nowhere to be found in the open source version) that will eagerly resolve some limited number of items, but that to me is not desirable due to multiple roundtrips, and I don't think it's required by the spec either. I also have a hard time finding an actual example of this parameter being used in the wild. In any case, if you so wish, there's now a public resolve function.
Sure. Just when discussing backend features, it helps to consider different UIs people might want to be able to build.
I agree.
I've seen people say this, but does VS Code actually pre-cache items, or resolve every completion's detail? As mentioned previously, that doesn't seem to be happening with typescript-language-server, at least: the "detail" is only shown for the current completion.
One more thing: in that Zulip thread, you said that lsp-mode has to resolve every item. That doesn't match my testing: it only (with some exceptions) resolves the visible completions, so together with company-mode it makes ~10 "resolve" requests at a time. See this Swiper output: You can turn on request/response logging in lsp-mode. The trouble is, 10 times 23 ms sequentially is still 230 ms extra, and if those are resolved in sequence, that adds up.
EDIT: In the v0.166.1-pre build, Zed still seems to issue only one completionItem/resolve request at a time.
Ha, thanks. I clearly misunderstood that question, because I was a little perplexed at the time when Florian brought up VS Code to justify his interpretation of the spec as requiring eager resolution, which is completely wrong. I'll make a clarification over there.
It doesn't have anything to do with that. Anyway, it doesn't matter: it's not required by the spec and doesn't affect any desirable outcome. Even the VS Code docs seem to suggest one does best to avoid eager resolution.
(force-pushed from 1720f2d to 20c3f98)
I wasn't sure if this was the best place to ask this question, but it seems like the right people are here to answer it, especially if this particular area of the code is being reworked. I'm seeing an issue which I believe is related to the lazy resolution of the completion detail. I'm using an Ada language server which does not initially provide the detail in the "textDocument/completion" response. As a result, the detail is initially missing from the completion UI. The problem I'm seeing is that overloaded functions don't appear for all instances; only one instance is displayed in the popup. The only way I was able to get all instances to show up was to add advice around company-capf:

(defun init.el/company-capf (oldfun &rest r)
  ;; Skip company's `duplicates' command so overloaded candidates with the
  ;; same label are not de-duplicated away.
  (unless (eq (car r) 'duplicates)
    (apply oldfun r)))
(advice-add 'company-capf :around #'init.el/company-capf)

It seems like either company's de-duplication or lsp-mode's candidates need to account for this case.
(force-pushed from 20c3f98 to 5dc3030)
Given #4644 was just merged, if you give master a try, does the Ada server return labelDetails?
Absolutely not. It turns out, VS Code probably hasn't used the field in question at all. That said...
Unfortunately, it doesn't appear the server supports "labelDetails". The following shows a partial log for one particular overloaded instance, including the resolve request/response.
Here are screenshots showing VS Code, lsp-mode/company and lsp-mode/corfu respectively, for the same completion.
I don't disagree with your suggested strategy; however, for the example I've shown above, it would fall into Step 3, resulting in the overloaded functions being de-duplicated by company... essentially no difference. I think with the above strategy, one of the following has to change:
Can you elaborate on why this is a bug? This field is optional, and defaults to the "label" when not present. Why would this be needed?
(force-pushed from 537f6c0 to 82cf8fc)
Every field in the completion item beyond the label is optional. IMHO, the Ada language server is just relying on an undocumented behavior in VS Code that's not in the spec.
(force-pushed from b68fc2c to b9a428f)
The Ada language server does return the signature in the "detail", as can be seen in the VS Code documentation popup window, as well as in the "completionItem/resolve" response. The server is also returning unique snippet expansions for each signature (i.e., "insertText"), therefore you don't want this de-duped on the server or by company. I'd be surprised if I'm the only one running into this issue. I guess I'll just keep the advice on company-capf for now.
I mean VS Code doesn't use the field in question for this. Your issue has nothing to do with this PR, or even with lsp-mode before or after #4610; I suggest you file an issue on the Ada language server. The easiest way to avoid the de-duplication on the client is for the server to send down unique labelDetails.
(force-pushed from b9a428f to 5afc19f)
Problem

When using typescript-language-server, the initial call to textDocument/completion does not return any detail or documentation for any of the completion items. I suppose the reason for this is that many JavaScript signatures are extremely long, often 5x to 10x longer than the label, and unreadable when displayed beside the label on one line, so the server forces the client to make completionItem/resolve requests to resolve each item's detail and documentation individually, and it's up to the client to prepend the signature to the documentation, as is done in VS Code.

[screenshot: VS Code TypeScript]

This approach presents a problem to lsp-mode in that the CAPF function caches the partial completion item response as a text property on each candidate string, and when a completion frontend such as company or corfu calls lsp-completion--annotate to get a suffix, every call will issue an async completionItem/resolve request that modifies the cached completion item in place while initially returning just a kind or an empty string, depending on some variables. This means the first completion popup will only have the kinds, or simply no suffix at all; then, on the next refresh after a selection change, in the case of company all of the candidates in the popup will suddenly be annotated, and in the case of corfu the previous selection will suddenly be annotated. In both cases the popup width will suddenly expand greatly, oftentimes as wide as the window. This is fundamentally because lsp-mode assumes the partial completion item response from textDocument/completion is meant to be used the same way as the fully resolved completion item response from completionItem/resolve.

This PR reimplements lsp-completion--make-item, lsp-completion--annotate and lsp-completion--get-documentation to separate the two different usages. In addition, the signature from detail is now prepended to the documentation if it has not been prepended by the language server already.

[screenshots: LSP ts-ls, LSP pyright, LSP gopls, LSP rust-analyzer, LSP jdtls]
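The separation of the two usages described above can be sketched like this (TypeScript for illustration; the function names are ours, not lsp-mode's actual API): the popup annotation only ever reads the unresolved item, so it stays stable, while the documentation view overlays the resolved fields on top of it.

```typescript
// Partial items come from textDocument/completion; resolved items from
// completionItem/resolve. Keeping them separate means the popup never
// changes shape when a resolve response arrives.
interface Item {
  label: string;
  detail?: string;
  documentation?: string;
}

// Annotation: reads only the unresolved item, never waits on a resolve.
function annotate(unresolved: Item): string {
  return unresolved.detail !== undefined ? " " + unresolved.detail : "";
}

// Documentation: prefers resolved data, prepending the signature when the
// server hasn't already included it in the documentation text.
function documentation(unresolved: Item, resolved: Item): string {
  const docs = resolved.documentation ?? unresolved.documentation ?? "";
  const detail = resolved.detail ?? unresolved.detail;
  return detail !== undefined && !docs.includes(detail)
    ? detail + "\n\n" + docs
    : docs;
}

const partial: Item = { label: "isObject" };
const full: Item = {
  label: "isObject",
  detail: "(x: unknown) => boolean",
  documentation: "Checks whether the value is an object.",
};

console.log(JSON.stringify(annotate(partial)));           // "" — popup stable
console.log(documentation(partial, full).split("\n")[0]); // (x: unknown) => boolean
```

This also sidesteps the company mutation problem: since the cached partial item is never overwritten, reused candidate strings keep stable text properties across refreshes.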