Do not resolve during annotate, enrich documentation with details (Fix #3201) (Fix #3905) #4625

Open · wyuenho wants to merge 10 commits into master from do-not-resolve-during-annotate

Conversation

@wyuenho (Contributor) commented Nov 27, 2024

Problem

When using typescript-language-server, the initial call to textDocument/completion does not return any detail or documentation for any of the completion items. I suspect the reason is that many JavaScript signatures are extremely long, often 5x to 10x longer than the label, and unreadable when displayed beside the label on one line, so the server forces the client to make completionItem/resolve requests to resolve each item's detail and documentation individually, and it's up to the client to prepend the signature to the documentation, as is done in VS Code.

VS Code Typescript

[screenshot]

This approach presents a problem for lsp-mode: the CAPF function caches the partial completion item response as a text property on each candidate string, and when a completion frontend such as company or corfu calls lsp-completion--annotate to get a suffix, every call issues an async completionItem/resolve request that modifies the cached completion item in place while initially returning just a kind or an empty string, depending on some variables. This means the first completion popup has only the kinds, or no suffix at all; then, on the next refresh after a selection change, all of the candidates in the popup are suddenly annotated in the case of company, and the previous selection is suddenly annotated in the case of corfu. In both cases the popup width suddenly expands greatly, often to as wide as the window. This is fundamentally because lsp-mode assumes the partial completion item response from textDocument/completion is meant to be used the same way as the fully resolved completion item response from completionItem/resolve.
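
For illustration, the caching pattern amounts to roughly this (hypothetical names only, not lsp-mode's actual internals):

(require 'subr-x)

;; Illustrative sketch; the `my/' names are hypothetical and not part of lsp-mode.
(defun my/make-candidate (label item)
  "Return LABEL with the partial completion ITEM cached as a text property."
  (propertize label 'my-lsp-item item))

(defun my/annotate (candidate)
  "Read the cached item back when a frontend asks for an annotation."
  (when-let* ((item (get-text-property 0 'my-lsp-item candidate)))
    (plist-get item :detail)))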

This PR reimplements lsp-completion--make-item, lsp-completion--annotate and lsp-completion--get-documentation to separate the two different usages. In addition, the signature from detail is now prepended to the documentation if the language server has not already done so.
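
A rough sketch of that prepending logic, using a hypothetical helper rather than the exact code in this PR:

;; Hypothetical helper illustrating the idea; not the exact code in this PR.
(defun my/doc-with-detail (detail documentation)
  "Return DOCUMENTATION with DETAIL prepended unless the server already did so."
  (cond
   ((null detail) documentation)
   ((and documentation (string-prefix-p detail documentation)) documentation)
   (t (concat detail "\n\n" (or documentation "")))))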

LSP ts-ls

[screenshot]

LSP pyright

[screenshot]

LSP gopls

[screenshot]

LSP rust-analyzer

[screenshot]

LSP jdtls

[screenshot]

@kiennq (Member) commented Nov 27, 2024

This is fundamentally because lsp-mode assumes the partial completion item response from textDocument/completion is meant to be used the same way as the fully resolved completion item response from completionItem/resolve.

This PR reimplements lsp-completion--make-item, lsp-completion--annotate and lsp-completion--get-documentation to separate the two different usages.

The implementation in this PR relies on the auto-documentation being automatically triggered. I would like to avoid that and always have the candidate resolved as it's displayed. The annotation update is called for displayed candidates and is a good function to trigger resolving asynchronously.

Also, we can configure the client's capability to not have partial completion item responses at all. The reason why we make it have partial completion item responses is to make the completion list return as quickly as possible without unnecessary and/or large item property strings. The other properties can be retrieved later with completionItem/resolve and should be treated as updated completion items.

Please see #4591 for the issue with completion items not being resolved without the document's update as well.

To avoid the width suddenly changing, I think the user can disable lsp-completion-show-detail. Alternatively, we can trigger a candidate list rendering refresh when the completionItem/resolve is done. The second approach will make lsp-mode behave like VS Code. I would prefer that if it's easy to do.
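
For reference, that would amount to something like this in the user's config (whether we should recommend it is the open question):

;; Example of the knob mentioned above; hides the detail suffix in the popup.
(setq lsp-completion-show-detail nil)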

In addition, the signature from detail is now prepended to the document if it has not been prepended by the language server already.

This is a good feature. I agree we should do this.

lsp-completion.el (review comment: outdated, resolved)
@wyuenho (Contributor, Author) commented Nov 27, 2024

I think we are beginning to see this one-size-fits-all lsp-completion-at-point implementation fail.

The implementation in this PR relies on the auto-documentation being automatically triggered. I would like to avoid that and always have the candidate resolved as it's displayed.

The documentation is already resolved synchronously in HEAD; I didn't change this, I only changed detail resolution. Detail should be resolved when it's needed, which is largely determined by the server. Even when detail is added to resolveSupport, I have not seen a server honor it by skipping this property in the response to textDocument/completion.

I would like to avoid that and always have the candidate resolved as it's displayed. The annotation update is called for displayed candidates and is a good function to trigger resolving asynchronously.

The problem is exactly that the first time the candidates are displayed, they may not be resolved. There is also no guarantee they will be resolved the next time they are displayed, or the third time; they will be resolved whenever the server feels like sending back a response, because of asynchronicity. What ends up happening is the annotations appearing in the completion popup erratically.

Also, we can configure the client's capability to not have partial completion item responses at all. The reason why we make it have partial completion item responses is to make the completion list return as quickly as possible without unnecessary and/or large item property strings.

Nobody is arguing with that. For languages where it makes sense, like TypeScript or Python, the language servers often do not send down detail and documentation in textDocument/completion anyway. The reason these two properties have always supported lazy resolution is that they can be slow to generate or are extremely long for some languages. In general, you don't need to specify detail and documentation in resolveSupport; the servers will decide to send them in textDocument/completion when it makes sense, so most of the time they have no effect anyway. The only exception is JDTLS, where you have to specify documentation to get the docs, but otherwise I have not seen a server skip detail in textDocument/completion just because you've specified detail in resolveSupport.

The other properties can be retrieved later with completionItem/resolve and should be treated as updated completion items.

Fine, but they should not affect how the completion candidate list is displayed, only how text is inserted or replaced and how documentation is displayed. Resolving for insertion, replacement, indentation, etc. is already done in the exit function. If you want to speed up insertion in case resolution in the exit function is slow, you can call lsp-completion--resolve-async in lsp-completion-at-point for each item. This has the possibility of spamming the server, and I suppose JDTLS might not like that, so the alternative could be supporting itemDefaults. Regardless, this is a separate issue that requires experimentation in a separate PR. My only concern is that the annotation function should not use the resolved item. Detail retrieved from completionItem/resolve should only be prepended to the documentation. This should lay the groundwork for further optimization, e.g. itemDefaults.

To avoid the width suddenly changing, I think the user can disable lsp-completion-show-detail.

This is crazy. Are you suggesting that every user should adjust this defcustom buffer-locally in mode hooks, as opposed to simply shipping a default behavior that makes sense for the vast majority of cases, if not all of them?

Alternatively, we can trigger a candidate list rendering refresh when the completionItem/resolve is done. The second approach will make lsp-mode behave like VS Code. I would prefer that if it's easy to do.

CAPF is pull-based. How do you "trigger a refresh" of all the completion frontends now and in the future? Also, what does it have to do with VS Code?

wyuenho requested a review from kiennq on November 27, 2024 18:21
@kiennq (Member) commented Nov 27, 2024

In general, you don't need to specify detail and documentation in resolveSupport, the servers will decide to send them in textDocument/completion when it makes sense, so most of the time, they have no effect anyway. The only exception is JDTLS, where you have to specify documentation to get the docs, but otherwise I have not seen a server skipping detail in textDocument/completion just because you've specified detail in resolveSupport.

The rust-analyzer (nightly) supports that and will skip detail and documentation if they are specified in resolveSupport. That's also the issue in #4591.

The LSP spec says:

By default, the request can only delay the computation of the detail and documentation properties. Since 3.16.0, the client can signal that it can resolve more properties lazily. This is done using the completionItem#resolveSupport client capability which lists all properties that can be filled in during a ‘completionItem/resolve’ request. All other properties (usually sortText, filterText, insertText and textEdit) must be provided in the textDocument/completion response and must not be changed during resolve.

So, from 3.16, more properties than the default detail and documentation can be lazily resolved, and it's entirely up to the language server to support that. I will not be surprised if there are new language servers that take advantage of that and implement lazy resolving as much as possible.

I think we are beginning to see this one-size-fits-all lsp-completion-at-point implementation fail.
This is crazy. Are you suggesting that every user should adjust this defcustom buffer-local in mode hooks as opposed to simply shipping with a default behavior that makes sense for the vast majority if not all cases?

I think as long as we provide enough customization for the user, it would be okay, as there's no one-size-fits-all solution. The default should be as close to the VS Code behavior as possible. So, if VS Code doesn't do the candidate annotation (which Emacs does), then we should configure lsp-completion-show-detail as nil by default instead. And this defcustom is not buffer-local, btw.
Although I would argue that since showing the detail right beside the candidate has been the default configuration for a long time, suddenly changing it to nil might cause confusion.

The problem is exactly because the first time the candidates are displayed, they may not be resolved, there is also no guarantee they will be resolved the next time they are displayed, or the third time, they will be resolved whenever the server feels like it's time to send back a response because, asynchronicity, so what ends up happening is the annotation appearing in the completion popup erratically.

I'm not sure but the behavior of showing document pop can be argued as erratically as well, as it suddenly appears, blocking since the user will experience hang if the server is slow to return the result, unlike lsp-completion--resolve-async. The blocking can be justified if the user triggers that intentionally, but it would be hammering if it's triggered automatically, for example due to company-posframe-quickhelp-delay or company-auto-update-doc.
If we think of the annotation as something that will be filled asynchronously, then having it suddenly filled at a later time while the user is browsing the candidate list will not be surprising at all. Perhaps showing more visual indicators for that (a loading gif?? or a placeholder for the annotation string) would help?

CAPF is pull-based. How do you "trigger a refresh" of all the completion frontends now and in the future? Also, what does it have to do with VS Code?

This wouldn't be capf but rather company-mode or corfu. If they have a method to refresh their candidate lists, we can use that. lsp-mode will try to default to how VS Code is configured by default, so if it does lazy resolving for candidate annotation then we should follow that (I haven't checked this btw).

I think your main argument is that we shouldn't treat the resolved completion item and the original completion item as the same entity, and should always use the original completion item even if it's lacking information. My counterargument is that they're the same and we should use the latest information if possible, because it provides more information to the user.
I would invite other maintainers (@yyoncho @ericdallo @jcs090218 ...) to chime in and provide their opinions on this as well.

Btw, here is an example of the behavior with a placeholder in the annotation string.

[screen recording]

The code change

(defun lsp-completion--annotate (item)
  "Annotate ITEM detail."
  (-let* (((&plist 'lsp-completion-item completion-item
                   'lsp-completion-resolved resolved)
           (text-properties-at 0 item))
          ((&CompletionItem :detail? :kind? :label-details?) completion-item))
    (lsp-completion--resolve-async item #'ignore)

    (concat (when lsp-completion-show-detail
              (if resolved
                  (when detail? (concat " " (s-replace-regexp "\r" "" detail?)))
                " <loading...>"))
            (when (and lsp-completion-show-label-description label-details?)
              (when-let* ((description (and label-details? (lsp:label-details-description label-details?))))
                (format " %s" description)))
            (when lsp-completion-show-kind
              (when-let* ((kind-name (and kind? (aref lsp-completion--item-kind kind?))))
                (format " (%s)" kind-name))))))

@wyuenho (Contributor, Author) commented Nov 27, 2024

The rust-analyzer (nightly) is supporting that and will skip detail and document if it's specified in resolveSupport. That's also the issue in #4591.

Ok, this PR works just as well. When the initial partial completion item has no detail, the detail will be prepended to the documentation after resolution.

So, from 3.16, instead of the default detail and document, more properties can be lazily resolved, and it's entirely depended on the language server to support that. I will not be surprised if there's new language server that takes advantage of that and implement lazy-resolving as much as possible.

So load them when you need them; I don't know why we keep circling back to this. This PR has nothing to do with these other lazily resolved properties. It's already done in the exit function.

The default should be as close to the VsCode behavior as possible. So, if the VsCode doesn't do the candidate annotation (which Emacs does) then we should configure lsp-completion-show-detail as nil by default instead. And this defcustom is not buffer-local btw.

I believe the central issue here is that ts-ls doesn't always return detail in the response to textDocument/completion, and when it does, the detail is not a type signature, so the user shouldn't even attempt to set lsp-completion-show-detail to nil, either buffer-locally or, worse, globally. When it doesn't return any detail, the popup width jumps erratically after resolving the first item. When it does return some detail, there's no easy and efficient way to tell what's in it. The only acceptable default is to leave lsp-completion-show-detail at t globally and deal with showing the type in the documentation after resolution.

[screenshot]

I'm not sure but the behavior of showing document pop can be argued as erratically as well, as it suddenly appears, blocking since the user will experience hang if the server is slow to return the result, unlike lsp-completion--resolve-async.

I don't understand this sentence. Can you rephrase? The response to textDocument/completion is designed to be fast and is often tuned to be fast by default by the language servers, hence the lazy properties. It's perfectly fine to block here.

The blocking can be justified if the user triggers that intentionally, but it would be hammering if it's triggered automatically, for example due to company-posframe-quickhelp-delay or company-auto-update-doc.

Yes, that's why lsp-completion--get-documentation sends a blocking request to completionItem/resolve. There's nothing wrong with that; what's wrong is attempting to resolve the entire item in the annotation function. It doesn't matter whether you do it synchronously or asynchronously, it just shouldn't happen, as you are interfering with how the server intends the completion list to be displayed.

If spamming the server is a problem, these completion frontends should implement debouncing with run-with-idle-timer; this should not be a responsibility of lsp-mode. I've tried this PR on a server that's really sensitive to spamming, JDTLS, and it's perfectly fine, as the total number of requests is actually reduced by not resolving until absolutely necessary. What's in HEAD, however, is much worse: with company, the annotation function is called for all candidates on the first page when constructing the candidate list after the first selection change. The PR actually improves on this situation by only resolving the detail when it is needed.
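
For instance, such a frontend-side debounce could look roughly like this (a sketch only; my/request-annotations is a hypothetical stand-in for the actual request):

;; Sketch of a frontend-side debounce using `run-with-idle-timer';
;; `my/request-annotations' is a hypothetical stand-in, not a real function.
(defvar my/annotation-timer nil)

(defun my/debounced-annotate (candidates)
  "Schedule annotation requests for CANDIDATES after a short idle delay."
  (when (timerp my/annotation-timer)
    (cancel-timer my/annotation-timer))
  (setq my/annotation-timer
        (run-with-idle-timer 0.2 nil #'my/request-annotations candidates)))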

If we think of the annotation is something that will be filled asynchronously, then it's suddenly filled at a later time while the user is browsing the candidate list will not be surprised at all.

Yes, which is already handled when corfu-popupinfo/corfu-info/company-quickhelp etc calls lsp-completion--get-documentation. The VS Code behavior is to prepend the detail to the documentation after resolving, and leave the completion popup alone. (Edit: when "Show More" is active)

This would be capf but rather company-mode or corfu. If they have method to support refresh their candidate lists, we can use that.

I beg you, please don't even try this. I'm working on corfu-pixel-perfect, which does have the ability to refresh, but it's a little complicated for vanilla corfu. I think company-box has this ability as well, but I'm not sure about company. Basically, don't do this, as it is highly dependent on third-party packages. You don't need it, and the outcome is undesirable: the width will either erratically expand or the candidate lines will be squished and truncated in all sorts of ways.

The lsp-mode will try to default to what VsCode is configured by default, so if they do lazy-resolving for candidate annotation then we should follow that

VS Code doesn't change the popup list after selecting a different candidate and showing the documentation popup...

Btw, here is an example behavior with a place-holder on annotation string.

I don't think this is VS Code's behavior...

wyuenho force-pushed the do-not-resolve-during-annotate branch from e30d4dd to 5bc2096 on November 28, 2024 11:51
@wyuenho (Contributor, Author) commented Nov 28, 2024

Ok, here's more information. It turns out VS Code remembers the last value of ^SPC (Show More or Show Less), and the way to change it is hidden in a hint in the status bar, which is off by default.

[screen recording]

When "Show More" is active, the detail is prepended to the documentation. When "Show Less" is active, the detail is rendered on the popup menu on selection if it is not in the response from textDocument/completion. When the user selects a different completion item, the detail on the last selection will be removed. This means that VS Code does make requests to completionItem/resolve on selection, but the response from textDocument/completion and completionItem/resolve are still used differently.

In addition, if a completion item has no detail from textDocument/completion, but has detail but no documentation from completionItem/resolve, the user cannot change from "Show Less" to "Show More" when selecting that item. The user must select another item that has documentation before he can change back to "Show More".

In order to accomplish this in lsp-mode, we will need to cooperate with completion frontends; I guess this is where your idea of a "refresh" comes in. What we can do is keep the separation of unresolved and resolved completion items as done in this PR and not resolve asynchronously when the annotation function is called; instead, the completion frontends should call lsp-completion--resolve[-async] so they can surgically refresh the completion item on selection change. Does this make sense? This is a UI problem that lsp-completion-mode should not try to solve all by itself.

wyuenho force-pushed the do-not-resolve-during-annotate branch 2 times, most recently from 1111c94 to b305fdd on November 30, 2024 20:33
wyuenho added a commit to wyuenho/emacs-corfu-pixel-perfect that referenced this pull request Nov 30, 2024
@wyuenho (Contributor, Author) commented Nov 30, 2024

This is what reverse engineering VS Code's behavior results in for corfu-pixel-perfect when combined with this PR.

[screen recording]

@wyuenho (Contributor, Author) commented Nov 30, 2024

More reasons to separate the unresolved and resolved completion items: the detail for the same label can differ between the responses to textDocument/completion and completionItem/resolve.

textDocument/completion

{
      "data": {
        "cacheId": 964
      },
      "detail": "@nestjs/common/utils/shared.utils",
      "kind": 6,
      "label": "isObject",
      "sortText": "�16",
      "textEdit": {
        "insert": {
          "end": {
            "character": 3,
            "line": 2
          },
          "start": {
            "character": 0,
            "line": 2
          }
        },
        "newText": "isObject",
        "replace": {
          "end": {
            "character": 3,
            "line": 2
          },
          "start": {
            "character": 0,
            "line": 2
          }
        }
      }
    }

completionItem/resolve

[Trace - 09:50:15 PM] Received response 'completionItem/resolve - (20)' in 534ms.
Result: {
  "additionalTextEdits": [
    {
      "newText": "import { isObject } from '@nestjs/common/utils/shared.utils';\n\n",
      "range": {
        "end": {
          "character": 0,
          "line": 0
        },
        "start": {
          "character": 0,
          "line": 0
        }
      }
    }
  ],
  "data": {
    "entryNames": [
      {
        "data": {
          "exportMapKey": "8 4590 isObject ",
          "exportName": "isObject",
          "fileName": "/Users/wyuenho/nest/packages/common/utils/shared.utils.ts",
          "moduleSpecifier": "@nestjs/common/utils/shared.utils"
        },
        "name": "isObject",
        "source": "@nestjs/common/utils/shared.utils"
      }
    ],
    "file": "/Users/wyuenho/nest/packages/core/repl/index.ts",
    "line": 3,
    "offset": 4
  },
  "detail": "Auto import from '@nestjs/common/utils/shared.utils'\nconst isObject: (fn: any) => fn is object",
  "kind": 6,
  "label": "isObject",
  "sortText": "�16",
  "textEdit": {
    "insert": {
      "end": {
        "character": 3,
        "line": 2
      },
      "start": {
        "character": 0,
        "line": 2
      }
    },
    "newText": "isObject",
    "replace": {
      "end": {
        "character": 3,
        "line": 2
      },
      "start": {
        "character": 0,
        "line": 2
      }
    }
  }
}

@wyuenho (Contributor, Author) commented Dec 1, 2024

I've just changed it back to always resolve when getting the documentation, as the resolved detail may differ from the unresolved detail even when both are non-empty. This should be the last bit required to reverse engineer VS Code's auto-completion UI.

[screenshot]

wyuenho force-pushed the do-not-resolve-during-annotate branch 4 times, most recently from b072fe5 to 28cb228 on December 2, 2024 09:05
@wyuenho (Contributor, Author) commented Dec 8, 2024

@dgutov moving the slightly off-topic convo from #4591 (comment) to here.

This is what's happening to company using lsp-mode since #4610

[screen recording]

The problem is that, unlike corfu, company doesn't make a copy of the candidate strings before refreshing. Since #4610, any call to the annotation function stealthily resolves the completion item asynchronously in the background, so if the same string references are reused while refreshing, the lsp-completion-item text property will be filled in and the annotation function will take the types from it to return to company. The concrete proposal is to make a copy of a "page" of the candidate strings when the popup is active, so you can at least achieve a still undesirable, but less bad, effect similar to Corfu's.
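
For illustration, the copy could be as simple as this (a sketch, not company code):

;; Sketch only, not company code: `copy-sequence' returns fresh string objects,
;; so later `put-text-property' calls on the originals no longer affect what is
;; already displayed.
(defun my/snapshot-page (candidates)
  "Return copies of the visible page of CANDIDATES."
  (mapcar #'copy-sequence candidates))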

[screen recording]

@dgutov (Contributor) commented Dec 10, 2024

This is what's happening to company using lsp-mode since

Ouch, that's not great. Does that happen only with some language servers, e.g. the Rust one?

The concrete proposal is to make a copy of a "page" of the candidate strings when the popup is active, so you can at least achieve an although still undesirable, but less bad effect similar to Corfu

If we copy the strings, then I guess that would mean dropping the eq permanence of the completion strings, meaning for example that using hash tables with :test 'eq might start to work differently in some backends. Not entirely a deal breaker, but avoiding it would be preferable.

so if reuse the same string references while refreshing, the lsp-completion-item text property will be filled and the annotation function will take the types from it to return to company

Do both LSP clients retain the full information in the text properties?

If there was at least some indirection involved (e.g. a hash table to do a lookup), the refresher callback could replace the contents of the said hash table instead.

@wyuenho (Contributor, Author) commented Dec 10, 2024

Ouch, that's not great. Does that happen only with some language servers, e.g. the Rust one?

Theoretically this can happen with any language server. There's no guarantee the detail for a completion item from textDocument/completion will be the same as the detail for the same completion item from completionItem/resolve. The known servers that exhibit this behavior are currently typescript-language-server and rust-analyzer nightly, but I suspect that's a bug in rust-analyzer, while this behavior is necessary for TypeScript. Various language servers for slow dynamic languages solve the performance problem of looking up type signatures when generating a response for textDocument/completion in similar ways; Pyright, for example, never returns the detail but only prepends the types to the documentation when responding to completionItem/resolve. I suspect some Ruby LSP servers take the typescript-language-server or Pyright approach as well.

If we copy the strings, then I guess that would mean dropping the eq permanence of the completions strings - meaning for example that using hash-tables with :test 'eq might start to work differently in some backends. Not entirely a deal breaker, but avoiding it would be preferable.

TBH, if you are comparing strings with eq, the bug is that you are comparing strings with eq.

Do both LSP clients retain the full information in the text properties?

If by both you mean lsp-mode and eglot, the answer is yes: they both store the partial completion item from textDocument/completion as text properties, but eglot never modifies them; only lsp-mode since #4610 does, to "complete" it.

If there was at least some indirection involved (e.g. a hash table to do a lookup), the refresher callback could replace the contents of the said hash table instead.

This is the naive solution that everybody keeps coming up with, and it leads to the exact problem I want to solve in this PR. The culprit is not how the completion item data is cached; the problem is that the "refresher callback" (I guess you mean the resolution) should not replace the partial cache. Eglot conveniently sidesteps this problem by not resolving in the :annotation-function, but by supporting :company-docsig instead. The downsides to this approach are that :company-docsig is not widely supported, as the only known completion frontend that uses it is paradoxically a seldom-used corfu extension 😅, and that potentially every time the documentation is popped up, two resolution requests are made instead of one.
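
For reference, a capf can expose such a callback through its extra properties; a minimal sketch with hypothetical names (the table and the resolver here are placeholders, not real lsp-mode or eglot code):

(require 'thingatpt)

(defvar my/candidates '("isObject" "isString"))   ; placeholder completion table

(defun my/resolve-detail (candidate)
  "Placeholder for a per-candidate completionItem/resolve round trip."
  (format "signature of %s" candidate))

(defun my/capf ()
  (let ((bounds (bounds-of-thing-at-point 'symbol)))
    (when bounds
      (list (car bounds) (cdr bounds) my/candidates
            ;; `:company-docsig' is the extra capf property discussed above;
            ;; the frontend calls it only for the selected candidate.
            :company-docsig #'my/resolve-detail))))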

This PR will solve the problem described in this comment fundamentally, and you don't need to do anything about it. I'm just letting you know that's what's happening to company now, and that the way it is implemented has inadvertently triggered N+1 requests when constructing the candidate list for the popup. Also, when this PR is merged, you can use the now-public lsp-complete-resolve function to achieve similar effects as this.

@kiennq (Member) commented Dec 10, 2024

I think I get your point now: the detail can change during the item resolution process, so it's better to keep the unresolved detail (if it exists) for the candidate. And if the unresolved item has no detail before resolving, it should display the resolved item's detail instead. I think that would solve the issue for both RA and ts-ls.
What do you think about that?

I still believe we should treat the resolved item as the completed version of the unresolved item. So only for properties that are displayed immediately (like detail) should we keep the unresolved version; otherwise we should use the resolved item as much as possible.
Of course, we can try to resolve the item on documentation hover, however that's a sync call and will block the user until the server returns.
Doing stealth resolution is an async operation, so retrieving the documentation will not be blocking.
Even though textDocument/completion is fast, we have to allow cancelling it on keyboard input to keep Emacs responsive. And I don't think completionItem/resolve is tuned to be fast here.

I'm not sure but the behavior of showing document pop can be argued as erratically as well, as it suddenly appears, blocking since the user will experience hang if the server is slow to return the result, unlike lsp-completion--resolve-async.

I don't understand this sentence. Can you rephrase? The response to textDocument/completion is designed to be fast and is often tuned to be fast by default by the language servers, hence the lazy properties. It's perfectly fine to block here.

One thing I can think of with stealth resolution is that a server might not like being spammed with completion item resolve requests. But I haven't observed any server like that so far. The communication between an LSP client and server can be chatty, so I would expect the server to be able to handle that gracefully.

@wyuenho (Contributor, Author) commented Dec 10, 2024

I think I got your point now, the detail can change during item resolution process so it's better to keep the unresolved detail (if existed) for the candidate. So, if the unresolved item has no detail before resolving, it should use resolved item's detail to display instead. I think that would solve the issue of both RA and ts-ls. What do you think about that?

It's a little subtler than that. The when, where, and how of displaying the resolved detail matters. This PR only displays the resolved detail when the user requests documentation, just like VS Code does.

I still believe we should treat the resolved item as the completed version of the unresolve item. So only for items that's displayed immediately (like detail), we can keep the unresolved version of it and use the resolved item as much as possible.

Yes, that's exactly what this PR does.

Of course, we can try to resolve the item on the document hover, however that's sync call and will block the user until the server has return. Doing stealth resolution is async operation, so the document retrieve will not be a blocking.

I feel like we are going in circles. That sync resolve call in lsp-completion--get-documentation has always been there for the simple reason that there's nothing to display if there's no documentation in the completion item. I've even cached the constructed documentation now, so no second sync resolve call is necessary. What exactly is the problem you are trying to solve?

If the reason you put that async resolve call in the annotation function is to achieve some kind of "prefetch", all I can tell you is that the only good opportunity to do a "prefetch" is immediately after receiving the response to textDocument/completion, but if you do that, you'll be issuing N+1 requests and probably throwing 99% of the responses away on every keystroke. This is exceedingly wasteful for both the server and Emacs, for practically no benefit. Users don't need the resolved data if they haven't asked for it. Stop trying to second-guess the user. There's no good way to know when a user needs what data until they tell you explicitly by performing some UI interaction.

Don't solve problems that don't exist, and don't optimize for things that nobody has asked for.

The textDocument/completion even it fast, we have to allow to cancel it on keyboard input to keep the responsiveness of Emacs. And I don't think the completionItem/resolve is tuned to be fast here.

What's the relevance of this sentence to the issues discussed here? The whole reason for the existence of completionItem/resolve is to be fast, and the whole reason some servers do not return detail and documentation in textDocument/completion is to be fast. Also, lsp-mode has been able to let users interrupt a sync request with input for many years, so just stop worrying and let the servers do their jobs?

One thing I can think of with stealth resolution is that the server doesn't like being spammed about completion item resolve request. But I haven't observed any server like that so far. The communication between LSP client-server can be chatty so I would expect the server to be able to handle that gracefully.

Have you tried this with jdtls? I can guarantee you it'll crash in seconds. It can barely handle all the textDocument/hover and textDocument/codeAction calls. ts-ls often chokes as well.

@kiennq (Member) commented Dec 10, 2024

The whole reason for the existence of completionItem/resolve is to be fast,

There's no guarantee that it will be fast. The function you mention is different from lsp-request; it's lsp-request-while-no-input, which is used when requesting completions.
The completion item resolve is done using lsp-request, which has no input interruption.

but probably throwing 99% of the responses away on every keystroke.

This is not done on every keystroke; it's only done when you have a new completion set.

I can guarantee you it'll crash in seconds. It can barely handle all the document/hover and textDocument/codeAction calls. ts-ls often choke as well.

I've tried with ts-ls and noticed no difference so far. If you have a repo to share that encounters this issue, I would like to try it.

@kiennq (Member) commented Dec 10, 2024

Another thing to add is that the async request to resolve the completion item is done off the hot path: it does not block the user and does not happen while the user is waiting for new completion items.
It's wasteful, yes, but it's done so Emacs doesn't get blocked, and I would like to keep it that way if possible.

@wyuenho (Contributor, Author) commented Dec 10, 2024

Theres' no guarantee on that for that to be fast. The function you mention is different from lsp-request, it's lsp-request-while-no-input which is used in case of requesting completions. The completion item resolve is done using lsp-request, which has no input interruption.

Well, everything in LSP is best effort. If you need to keep Emacs responsive, change it to lsp-request-while-no-input? Anyway, this is outside the scope of the issue I want to solve. You've been tuning lsp-mode perf for 6 years now; if it were easy, it would have been done by now.

This is not done on every keystroke; it's only done when you have a new completion set.

You will get a completely new set on every keystroke if the server does not support isIncomplete. It's quite common, though probably not among the major servers for the major languages.

if you have a repo to share that encounter issue with this, I would like to try.

Just try editing the typescript-language-server repo itself with lsp-mode master, turn on company and company-quickhelp, and use ts-ls for TypeScript files. Pick a file with at least a couple hundred lines, type "Obj", backspace 3 times, type "Arra", backspace, hit M-n M-p a couple of times; just simulate a burst of editing for a couple of seconds. Then look at the ts-ls lsp logs. Every completionItem/resolve takes 250+ms because the server is getting overloaded. The only thing worse than computing everything in textDocument/completion is getting asked to compute everything in N+1 requests.

With this PR, lsp-mode doesn't spam the server anymore. Every completionItem/resolve request takes around 3-5ms; occasionally you get a 30+ms response, and that's about it.

Another thing to add is the async request to resolve the completion item is done off the hot path

Ah, no. Did you not see what that async resolve did to company? That's a page of completionItem/resolve requests per textDocument/completion request. Even with Corfu there are still N+1 requests; you just don't see the effect because Corfu makes a copy of the candidate strings before rendering them into the popup, but the requests are still blasted out in the background.

What exactly are you trying to achieve with async resolve in the annotation function? You never answered this question.

The important things like insertText and textEdit are not in resolveSupport, so you don't need to resolve on every completion insert. Most of the time the only things you want to resolve are the detail and documentation, so what's wrong with blocking in lsp-completion--get-documentation? company-quickhelp, company-box and corfu-popupinfo all use a timer, so it's not like a blocking call to the server is made on every M-n/M-p. When the user stops at a candidate for some delay, they probably really want to see the documentation and are willing to wait for the docs, so blocking is exactly the right thing to do.

There's no need to prefetch.

@dgutov (Contributor) commented Dec 11, 2024

There's no guarantee the detail for a completion item from textDocument/completion will be the same as the detail for the same completion item from completionItem/resolve.

So... would the solution be to use one or the other for resolving annotations? Sorry, I don't have the full context right now.

Pyright for example never returns the detail but only prepend the types to the documentation when responding to completionItem/resolve

Okay, but what I see in the first gif is completions being annotated with the wrong string, in bulk. Does that happen because the same strings are used in some other place? Setting aside the "incorrectness" of having non-owned strings altered like that, which other feature could require such bulk requesting, rather than annotation?

TBH, if you are comparing strings with eq, the bug is that you are comparing strings with eq.

We're talking about comparing identical string references with eq, right? ISTR that this might be somewhat broken with Company already (which would justify changing the behavior), but otherwise there doesn't seem to be anything terrible about that approach.

If by both you mean lsp-mode and eglot, the answer is yes they both store the partial completion item from textDocument/completion as text properties, but eglot never modifies them, only lsp-mode since #4610 does to "complete" it.

Thanks for confirming.

The downside to this approach is, :company-docsig is not widely supported, as the only known completion frontend that uses it is paradoxically a seldom used corfu extension 😅

Aside from its use in Company, you mean.

and that potentially every time the documentation is popped up, 2 instead of 1 resolution requests are made

It might be fine, though? If the resolution request is fast enough to be done 10 times in a row, that is. Anyway...

This PR will solve the problem #4625 (comment) fundamentally and you don't need to do anything about it. I'm just letting you know that's what's happening to company now and the way it is implemented has inadvertently triggered an N+1 request when constructing the candidate list for the popup.

Thanks! [Hopefully N was closer to the length of the popup than the length of the whole completions list.]

So the problem is fixable without additional fixes in the frontend, do I get that right? That's good news.

And that when this PR is merged, you can use the now public lsp-complete-resolve function to achieve similar effects as #4625 (comment).

This does look pretty useful, especially since the main target of this feature probably was the configuration when the documentation popup is disabled (VSC has a shortcut for toggling that).

Using an lsp-mode function directly from Company (or other frontends) doesn't seem advisable, but there are possible ways to have it passed indirectly.

First of all, using the docsig feature which you already mentioned. It's a callback that's already passed through company-capf and used by elisp-completion-at-point and python-shell-completion-at-point.

Or if it has problems, some other prop-function could be added after we choose a name and description. Async or not.

@dgutov (Contributor) commented Dec 17, 2024

Yep, that's why I notified you so we can both change the completion UIs to replicate VS Code's behavior, but leaving :company-docsig to echo is fine for now, as it sidesteps the need to eagerly resolve the detail for H unresolved items.

To rewind a bit: I meant to list the possible methods that will result in "detail" being rendered on every line of the popup.

Which seemed to me to be @kiennq's UI preference, if I'm recalling it right from his other comments somewhere. I think it is a valid preference, just a difficult one to implement language-server-agnostically, given the current state of affairs.

Using the echo area, as you mention, is functionally equivalent to printing it on the selected line, as far as the current discussion goes.

@wyuenho (Contributor, Author) commented Dec 17, 2024

@dgutov To rewind a bit, I meant using :company-docsig only when that callback is called, which is completely optional, for completion frontends to implement the "Show Less" behavior similar to VS Code's completion popup, which should only need to resolve the selected item one at a time synchronously. I would never and have never advocated the resolution of every item under any circumstances. Does it make sense?

This means, if we are to agree that we should prefer to implement, or allow for the ability to implement VS Code's behavior, which I think we do, the loading text on every line of the popup is unnecessary, as we clearly do not want eager resolution, async or otherwise to occur by default.

I do, however, recognize that VS Code seems to have some secret sauce (the exact logic is nowhere to be found in open source version) that will eagerly resolve some limited amount of items, but that to me is not something desirable due to multiple roundtrips, I don't think it's required by the spec either. I also have a hard time finding an actual example of this parameter being used in the wild.

[screenshot]

In any case, if you so wish, there's now a :company-docsig callback in this PR. I'm not going to call it eagerly in corfu-pixel-perfect, what you do with it in company is up to you, if this PR is merged.

@dgutov (Contributor) commented Dec 18, 2024

I meant using :company-docsig only when that callback is called, which is completely optional, for completion frontends to implement the "Show Less" behavior similar to VS Code's completion popup, which should only need to resolve the selected item one at a time synchronously. I would never and have never advocated the resolution of every item under any circumstances. Does it make sense?

Sure. Just when discussing backend features, it helps to consider different UIs people might want to be able to build.

This means, if we are to agree that we should prefer to implement, or allow for the ability to implement VS Code's behavior, which I think we do, the loading text on every line of the popup is unnecessary, as we clearly do not want eager resolution, async or otherwise to occur by default.

I agree.

I do, however, recognize that VS Code seems to have some secret sauce (the exact logic is nowhere to be found in open source version) that will eagerly resolve some limited amount of items

I've seen people say this, but does VS Code actually pre-cache items, or resolve every completion's detail? As mentioned previously, that doesn't seem to be happening with typescript-language-server, at least: the "detail" is only shown for the current completion.

@dgutov (Contributor) commented Dec 18, 2024

One more thing: in that Zulip thread, you've said that lsp-mode has to resolve every item. That doesn't match my testing: it only (with some exceptions) resolves the visible completions, so together with company-mode anyway it makes ~10 requests to "resolve" at a time.

See this Swiper output:

[screenshot of Swiper output]

You can turn on request/response logging using M-x lsp-toggle-trace-io.

The trouble is, 10 times 23ms sequentially is still 230 ms extra, and if those are resolved in annotation-function, it's hard to do that in parallel because of data dependencies in our capf convention: annotation can affect sorting, and sorting comes before display. Not to mention the extra load on the server anyway.

Zed does seem to be fetching detail for all visible completions, at least since last week (maybe longer). This PR has some details and the textual comparison with VS Code in comments: zed-industries/zed#21705

EDIT: In the v0.166.1-pre build, Zed still seems to issue only one completionItem/resolve request per user keypress. Guess I misread that PR's description.

@wyuenho (Contributor, Author) commented Dec 18, 2024

One more thing: in that Zulip thread, you've said that lsp-mode has to resolve every item.

Ha, thanks. I clearly misunderstood that question; I was a little perplexed at the time when Florian brought up VS Code to justify his interpretation that the spec requires eager resolution, which is completely wrong, but I misunderstood the question as well. I'll make a clarification over there.

@wyuenho (Contributor, Author) commented Dec 18, 2024

I've seen people say this, but does VS Code actually pre-cache items, or resolve every completion's detail? As mentioned previously, that doesn't seem to be happening with typescript-language-server, at least: the "detail" is only shown for the current completion.

It doesn't have anything to do with typescript-language-server; VS Code's TypeScript feature does not use it, or even LSP, it's just a VS Code command. I do believe that VS Code has the ability to eagerly resolve some limited number of items, but I can't find even one example in any VS Code extension, or in the open-source portion of VS Code, to prove this. The easy way to prove it would be to look at the logs on the language server side when debugging, but I'm too lazy to do that :P

Anyway, it doesn't matter; it's not required by the spec and doesn't affect any desirable outcome. Even the VS Code docs seem to suggest one does best to avoid eager resolution.

wyuenho force-pushed the do-not-resolve-during-annotate branch from 1720f2d to 20c3f98 on December 20, 2024 11:15
@brownts (Contributor) commented Dec 20, 2024

The reason why we make it have partial completion item responses is to make the completion list return as quickly as possible without unnecessary and/or large item property strings. The other properties can be retrieved later with completionItem/resolve and should be treated as updated completion items.

I wasn't sure if this was the best place to ask this question, but it seems like the right people are here that could answer it, especially if this particular area of lsp-mode is going to be modified. If I should create a new issue, let me know, but I thought it was relevant since changes in this area are being discussed.

I'm seeing an issue which I believe is related to the lazy resolution of the completion detail. I'm using an Ada Language Server which does not initially provide the detail in the "textDocument/completion" response. As a result, the detail is initially missing from the completion UI. The problem I'm seeing is that overloaded functions don't appear for all instances; only one instance is displayed in the company-mode completion UI. All instances do appear in the Corfu UI though. I believe it is due to the fact that company-mode performs de-duplication of candidates. In order to do that, it takes into account the result of the annotation. However, since the lsp-mode annotation function doesn't initially respond with the detail (since it's resolved asynchronously), my understanding is that, because the returned annotation won't differ between the instances of the overloaded functions, company-mode will remove them as duplicates.

The only way I was able to get all instances to show up was to add advice around company-capf to remove the hard-coded "duplicates" response.

;; Advise `company-capf' to ignore the `duplicates' command so that
;; company-mode stops de-duplicating candidates from this backend.
(defun init.el/company-capf (oldfun &rest r)
  (unless (eq (car r) 'duplicates)
    (apply oldfun r)))
(advice-add 'company-capf :around #'init.el/company-capf)

It seems like either lsp-mode needs to resolve this detail before or while the annotation function is being called, or company-mode needs to have an official way to disable de-duplication for company-capf, but maybe I've overlooked some other solution.

wyuenho force-pushed the do-not-resolve-during-annotate branch from 20c3f98 to 5dc3030 on December 20, 2024 22:07
@wyuenho (Contributor, Author) commented Dec 21, 2024

The problem I'm seeing is that overloaded functions don't appear for all instances, only one instance is displayed in the company-mode completion UI. All instances do appear in the Corfu UI though. I believe it is due to the fact that company-mode performs de-duplication of candidates. In order to do that, it takes into account the result of the annotation. However, since the lsp-mode annotation function doesn't initially respond with the detail (since it's performed asynchronously), my understanding is that since the returned annotation won't be different between the instances of the overloaded functions, company-mode will remove them as duplicates.

Given #4644 was just merged, if you give master a try, does the Ada server return labelDetails? If so, what's in it? If the contents are different in the overloads, then we can surface them in the annotation for company to dedup.

It seems like either lsp-mode needs to resolve this detail before or during the annotation function being called or company-mode needs to have (an official) way to disable de-duplication for company-capf, but maybe I've overlooked some other solution.

Absolutely not. It turns out VS Code probably hasn't used the detail field since the 3.17 spec was released, so that's 2 years now. The detail will only show up under Show Less mode when the user selects an item and triggers a completionItem/resolve. Which means, under no circumstances should we eagerly resolve any number of items to get the details.

That said, lsp-completion--annotate and lsp-completion--company-docsig in this PR should probably be revamped to replicate how VS Code displays each completion line while supporting servers on both pre-3.17 and 3.17+ versions of the spec. I believe the correct implementation, sketched after this list, should be:

  1. If both the client and the server support labelDetails, and the response from textDocument/completion returns labelDetails, its content should be used exclusively.
  2. If the server does not support labelDetails, but returns detail from textDocument/completion, it and it alone should be used for the annotation. company-docsig should be used to resolve the completion item. If the server violates the spec by providing a different detail after resolution, company-docsig should use the resolved detail.
  3. If the server does not support labelDetails, and does not return detail from textDocument/completion, the annotation should be an empty string. Resolution logic is the same as 2).
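
A hedged Emacs Lisp sketch of that decision order, using plain plists and hypothetical names rather than lsp-mode's protocol accessors:

;; Hypothetical sketch of the decision order above; plain plists stand in
;; for lsp-mode's protocol objects.
(defun my/annotation (item)
  "Return the annotation string for a completion ITEM plist."
  (let ((label-details (plist-get item :labelDetails))
        (detail (plist-get item :detail)))
    (cond
     ;; 1. Prefer labelDetails when the server provided it.
     (label-details (or (plist-get label-details :detail) ""))
     ;; 2. Otherwise fall back to the unresolved detail, if any.
     (detail detail)
     ;; 3. Otherwise leave the annotation empty; resolution happens via docsig.
     (t ""))))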

@brownts (Contributor) commented Dec 21, 2024

Given #4644 was just merged, if you give master a try, does the Ada server return labelDetails? If so, what's in it? If the contents are different in the overloads, then we can surface them in the annotation for company to dedup.

Unfortunately it doesn't appear the server supports "labelDetails". The following shows a partial log for one particular overloaded instance including the resolve request/response.

[Trace - 09:08:35 AM] Received response 'textDocument/completion - (91)' in 27ms.

...

    {
      "label": "Put",
      "kind": 3,
      "sortText": "25&00086Put",
      "insertText": "Put (${1:Item : String})$0",
      "insertTextFormat": 2,
      "data": {
        "uri": "file:///home/troy/.local/share/alire/toolchains/gnat_native_14.2.1_06bb3def/lib/gcc/x86_64-pc-linux-gnu/14.2.0/adainclude/a-textio.ads",
        "range": {
          "start": {
            "line": 514,
            "character": 3
          },
          "end": {
            "line": 518,
            "character": 39
          }
        }
      }
    },

...

[Trace - 09:08:35 AM] Sending request 'completionItem/resolve - (134)'.
Params: {
  "label": "Put",
  "kind": 3,
  "sortText": "25&00086Put",
  "insertText": "Put (${1:Item : String})$0",
  "insertTextFormat": 2,
  "data": {
    "uri": "file:///home/troy/.local/share/alire/toolchains/gnat_native_14.2.1_06bb3def/lib/gcc/x86_64-pc-linux-gnu/14.2.0/adainclude/a-textio.ads",
    "range": {
      "start": {
        "line": 514,
        "character": 3
      },
      "end": {
        "line": 518,
        "character": 39
      }
    }
  }
}

...

[Trace - 09:08:35 AM] Received response 'completionItem/resolve - (134)' in 131ms.
Result: {
  "label": "Put",
  "kind": 3,
  "detail": "procedure Put (Item : String)",
  "documentation": "at a-textio.ads (515:4)",
  "sortText": "25&00086Put",
  "insertText": "Put (${1:Item : String})$0",
  "insertTextFormat": 2,
  "data": {
    "uri": "file:///home/troy/.local/share/alire/toolchains/gnat_native_14.2.1_06bb3def/lib/gcc/x86_64-pc-linux-gnu/14.2.0/adainclude/a-textio.ads",
    "range": {
      "start": {
        "line": 514,
        "character": 3
      },
      "end": {
        "line": 518,
        "character": 39
      }
    }
  }
}

Here are screenshots showing VSCode, lsp-mode/Company and lsp-mode/Corfu respectively for the same completion.

[screenshot: VS Code]

[screenshot: lsp-mode with Company]

[screenshot: lsp-mode with Corfu]

Absolutely not. It turns out, VS Code probably hasn't used the detail field since the 3.17 spec was released, so that's 2 years now. The detail will only show up under Show Less mode when the user selects an item and trigger a completionItem/resolve. Which means, under no circumstances should we eagerly resolve any number of items to get the details.

That said, lsp-completion--annotate and lsp-completion--company-docsig in this PR should probably be revamped to replicate how VS Code displays each completion line while supporting servers that support < 3.17 and >= 3.17. I believe the correct implementation should be:

  1. If both the client and the server support labelDetail, and the response from textDocument/completion returns labelDetail, its content should be used exclusively.
  2. If the server does not support labelDetail, but returns detail from textDocument/completion, it and it alone should be used for the annotation. company-docsig should be used to resolve the completion item. If the server violates the spec by providing a different detail after resolution, company-docsig should use the resolved detail.
  3. If the server does not support labelDetail, and does not return detail from textDocument/completion, the annotation should be an empty string. Resolution logic is the same as 2).

I don't disagree with your suggested strategy; however, it seems the example I've shown above would fall into step 3, resulting in the overloaded functions being de-duplicated by Company... essentially no difference.

I think with the above strategy, one of the following has to change:

  • De-duplication in Company has to be disabled/defeated. If the desire is to mimic VSCode behavior, it would seem this option would do that.
  • The server has to be updated to support "labelDetails".
  • The server has to be updated to always send the detail even when the client supports obtaining it lazily.

@wyuenho (Contributor, Author) commented Dec 21, 2024

@brownts This is clearly a bug in the language server, as the items are missing filterText. Per the spec:

/**
 * A string that should be used when filtering a set of
 * completion items. When omitted the label is used as the
 * filter text for this item.
 */
filterText?: string;

@brownts (Contributor) commented Dec 21, 2024

This is clearly a bug for the language server as the items are missing filterText.

Can you elaborate on why this is a bug? This field is optional, and defaults to the "label" when not present. Why would this be needed?

wyuenho force-pushed the do-not-resolve-during-annotate branch from 537f6c0 to 82cf8fc on December 21, 2024 17:20
@wyuenho (Contributor, Author) commented Dec 21, 2024

Every field in CompletionItem except the label is optional, so if the server needs to distinguish overloads, it either has to return 1) a different label, 2) something else to distinguish the item, such as filterText, like how eclipse-jdtls does it, 3) dedup on the server, but return all the overload signatures in the documentation like how Pyright does it, 4) dedup on the server, but return a different signatureHelp as you type, like how typescript-language-server does it.

IMHO, the Ada language server is just relying on an undocumented behavior in VS Code that's not in the spec.

wyuenho force-pushed the do-not-resolve-during-annotate branch from b68fc2c to b9a428f on December 21, 2024 18:18
@brownts (Contributor) commented Dec 21, 2024

Every field in CompletionItem except the label is optional, so if the server needs to distinguish overloads, it either has to return 1) a different label, 2) something else to distinguish the item, such as filterText, like how eclipse-jdtls does it, 3) dedup on the server, but return all the overload signatures in the documentation like how Pyright does it, 4) dedup on the server, but return a different signatureHelp as you type, like how typescript-language-server does it.

IMHO, Ada language server is just relying on an undocumented behavior in VS Code that's not in the spec.

The Ada language server does return the signature in the "detail" as can be seen in the VSCode documentation pop-up window, as well as in the "completionItem/resolve" response. The server is also returning unique snippet expansions for each signature (i.e., "insertText"), therefore you don't want this de-duped on the server or by Company.

I'd be surprised if I'm the only one running into this issue. I guess I'll just keep the advice on company-capf to not de-dup, since that seems to give the result I'm looking for.

@wyuenho (Contributor, Author) commented Dec 21, 2024

The Ada language server does return the signature in the "detail" as can be seen in the VSCode documentation pop-up window, as well as in the "completionItem/resolve" response.

I mean VS Code doesn't use the detail from textDocument/completion, only the one obtained during resolution. In any case, eager resolution in any form isn't going to happen: VS Code doesn't do it, the spec doesn't demand it, and it's bad in general for all servers.

Your issue has nothing to do with this issue, or even with lsp-mode before or after #4610; I suggest you file an issue on the Ada language server. The easiest way to dedup on the client is to send down filterText for overloads, like everyone else does.

wyuenho force-pushed the do-not-resolve-during-annotate branch from b9a428f to 5afc19f on December 22, 2024 11:05