
Improving internal benchmarks #3

Open
jdtsmith opened this issue Jan 5, 2024 · 2 comments

Comments

@jdtsmith
Contributor

jdtsmith commented Jan 5, 2024

It would be nice if the emacs-lsp-booster process could optionally log some of its own benchmarks, e.g. if a --benchmark flag is passed, or with a special build flag (maybe the dev build?... not a rust person). The obvious things to log are:

  • Number of complete JSON payloads received from server thus far.
  • Bytes of data received from the lsp server in the last (numbered) JSON payload.
  • Bytes of bytecode data that JSON was translated into and sent to emacs.
  • Time it took from writing that data onto stdout until emacs reads it (not sure how to do this; maybe time until the pipe becomes writable again).
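A minimal sketch of the kind of per-payload stats logging described above (in Python purely for illustration; the actual wrapper is written in Rust, and the class and field names here are hypothetical):

```python
import sys
import time


class PayloadStats:
    """Hypothetical per-payload benchmark counters for a wrapper process."""

    def __init__(self):
        self.payload_count = 0  # complete JSON payloads seen so far

    def record(self, json_bytes: bytes, bytecode_bytes: bytes) -> dict:
        """Record one server payload and the bytecode it was translated into."""
        self.payload_count += 1
        entry = {
            "n": self.payload_count,                # payload sequence number
            "json_bytes": len(json_bytes),          # bytes received from the server
            "bytecode_bytes": len(bytecode_bytes),  # bytes sent on to emacs
            "t": time.monotonic(),                  # wrapper-side timestamp
        }
        print(f"[benchmark] {entry}", file=sys.stderr)
        return entry


stats = PayloadStats()
entry = stats.record(b'{"jsonrpc":"2.0"}', b"#[bytecode...]")
```

Logging to stderr keeps the benchmark output out of the stdout stream that carries the actual payloads to Emacs.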

Then perhaps a --passthrough flag or build option can be implemented, which does nothing to the JSON, but just passes it on as-is (obviously removing the json-read-buffer override on the Emacs side). This would be just for equivalent logging.

Then we could generate two logs (with and without --passthrough). These two results could be compared for a given lsp server, to see how much time it takes (per byte) on average for emacs to read in the data via JSON vs. via bytecode.
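The comparison of the two logs could look something like this (a sketch with made-up numbers; each log entry is assumed to pair a payload size with the time Emacs took to read it):

```python
def mean_time_per_byte(log):
    """Average read time per byte over all logged payloads."""
    total_bytes = sum(size for size, _ in log)
    total_time = sum(t for _, t in log)
    return total_time / total_bytes


# Illustrative, invented entries: (payload_bytes, emacs_read_seconds)
json_log = [(1000, 0.010), (4000, 0.036)]      # run with --passthrough
bytecode_log = [(1000, 0.004), (4000, 0.012)]  # normal bytecode run

speedup = mean_time_per_byte(json_log) / mean_time_per_byte(bytecode_log)
```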

@blahgeek
Copy link
Owner

blahgeek commented Jan 6, 2024

Thanks for your suggestions. I've made a few improvements since then.

> Number of complete JSON payloads received from server thus far.
> Bytes of data received from the lsp server in the last (numbered) JSON payload.
> Bytes of bytecode data that JSON was translated into and sent to emacs.

There is now simple logging: d3b9c98. Running with the --verbose flag will print those debug logs.

> Then perhaps a --passthrough flag or build option can be implemented, which does nothing to the JSON, but just passes it on as-is (obviously removing the json-read-buffer override on the Emacs side). This would be just for equivalent logging.

As you have already seen, there is now a --disable_bytecode flag.


> Time it took from writing that data onto stdout until emacs reads it (not sure how to do this; maybe time until the pipe becomes writable again).

This, indeed, is not trivial to do. Also, I don't think this would provide accurate benchmark results for comparison.

If we really want to do that, I guess we can (for both JSON data in "passthrough" mode and bytecode data in normal mode):

  1. For each message from the lsp server, mark the timestamp (t_server_generated) when this program receives it, and attach that timestamp to the message sent to emacs (e.g. as a new field in the json object)
  2. In emacs, mark the timestamp (t_emacs_parsed) after receiving and parsing the message
  3. Calculate t_emacs_parsed - t_server_generated as the end-to-end latency

This would require modifications in both wrapper and elisp sides.
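The three steps above could be sketched as follows (Python standing in for both the Rust wrapper and the elisp side; the field name `_t_server_generated` is invented for illustration):

```python
import json
import time


def wrap_message(raw: bytes) -> bytes:
    """Wrapper side: attach the receive timestamp as an extra JSON field."""
    msg = json.loads(raw)
    msg["_t_server_generated"] = time.monotonic()  # hypothetical field name
    return json.dumps(msg).encode()


def receive_in_emacs(wire: bytes) -> float:
    """Emacs side (modeled here in Python): parse, then compute latency."""
    msg = json.loads(wire)
    t_emacs_parsed = time.monotonic()
    return t_emacs_parsed - msg["_t_server_generated"]


latency = receive_in_emacs(wrap_message(b'{"jsonrpc": "2.0", "id": 1}'))
```

One caveat with this scheme: it assumes both timestamps come from the same clock, which holds here because the "two sides" are one process, but would need wall-clock or shared monotonic time in the real split setup.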

Personally, I'm not that into this benchmark idea, though. For example, the "emacs may be blocked while sending data" issue that this program solves cannot be easily measured. So I think the overall improvement cannot be benchmarked anyway.

@jdtsmith
Contributor Author

jdtsmith commented Jan 6, 2024

> Now there is simple logging.

Thanks for these updates.

> This, indeed, is not trivial to do.

Good thoughts. Re "emacs may be blocked while sending data": you could also do the reverse — tag t_emacs_generated in elisp, then have emacs-lsp-booster log t_server_received. Rather than trying to get the logs into the same stream, we could probably just associate each log entry with its request :id; the two logs (one from emacs, one from your wrapper) could then be decoded after the fact, making sure the times are matched correctly.
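The post-hoc join by request :id could be as simple as this sketch (all numbers invented; each log is assumed to map a request id to a timestamp):

```python
# elisp side would log (id -> t_emacs_generated); the wrapper would log
# (id -> t_server_received). Matching by id yields per-request send latency.
emacs_log = {1: 100.000, 2: 100.050}    # hypothetical t_emacs_generated
wrapper_log = {1: 100.003, 2: 100.051}  # hypothetical t_server_received

latencies = {
    req_id: wrapper_log[req_id] - t_sent
    for req_id, t_sent in emacs_log.items()
    if req_id in wrapper_log  # drop requests the wrapper never saw
}
```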

Then users could run their client with and without --disable_bytecode for, say, a day each. By collecting that data, interesting trends would definitely emerge in send and receive latency as a function of the payload's byte size. I'd guess for small payloads latency may increase slightly, depending on the amount of "blocked I/O" which occurs. And it will definitely vary by server.

But I agree this is not really required. People can use the wrapper or not based on their experience. The only advantage of having such benchmark data would be if you ever envisioned getting built-in support inside lsp-mode and eglot; to make that case, real-usage benchmarks would likely be required. Maybe something for down the road.

Feel free to close.
