Enable incrementally updating diagnostics #1804

Open
keyboardDrummer opened this issue Sep 8, 2023 · 3 comments

@keyboardDrummer

For large files I'm worried that sending information diagnostics will slow down editing, since the number of information diagnostics scales with the size of the file, and there can be many edits that trigger sending new diagnostics.

Have you considered adding a reset/append enum to the PublishDiagnostics notification, or something more complicated, so it also becomes possible to append new diagnostics without resetting the existing ones?
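For illustration, a minimal sketch of what such an extension might look like, building on `PublishDiagnosticsParams` from `vscode-languageserver-protocol`; the `mode` field and the interface name are hypothetical and not part of the LSP:

```typescript
import { PublishDiagnosticsParams } from 'vscode-languageserver-protocol';

// Hypothetical extension, for illustration only: no such field exists in the LSP today.
// 'reset' would replace all diagnostics for the document (the current behaviour of
// textDocument/publishDiagnostics), while 'append' would add the new diagnostics to
// the set the client already holds for the document.
interface IncrementalPublishDiagnosticsParams extends PublishDiagnosticsParams {
  mode?: 'reset' | 'append';
}
```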

@dbaeumer
Member

The pull diagnostics method has an unchanged property to signal that the issues for a document have not changed. Incremental updating does not exist right now.
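For reference, a minimal sketch of how a server can report that with the pull-diagnostics support in `vscode-languageserver` (assuming version 8.x or later); `currentResultId` and `computeDiagnostics` are placeholder helpers, not library functions:

```typescript
import {
  createConnection,
  Diagnostic,
  DocumentDiagnosticReportKind,
  ProposedFeatures,
  type DocumentDiagnosticReport
} from 'vscode-languageserver/node';

// Placeholder helpers, assumed to be implemented by the server.
declare function currentResultId(uri: string): string;
declare function computeDiagnostics(uri: string): Promise<Diagnostic[]>;

const connection = createConnection(ProposedFeatures.all);

connection.languages.diagnostics.on(async (params): Promise<DocumentDiagnosticReport> => {
  const resultId = currentResultId(params.textDocument.uri);
  if (params.previousResultId === resultId) {
    // Nothing changed since the client last pulled, so no diagnostics are re-sent.
    return { kind: DocumentDiagnosticReportKind.Unchanged, resultId };
  }
  // Otherwise send a full report tagged with the new result id.
  return {
    kind: DocumentDiagnosticReportKind.Full,
    resultId,
    items: await computeDiagnostics(params.textDocument.uri)
  };
});

connection.listen();
```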

@dbaeumer added the diagnostics and feature-request labels on Sep 10, 2023
@dbaeumer added this to the Backlog milestone on Sep 10, 2023
@davidhalter

@keyboardDrummer Can you give us a more detailed overview of what you think a "large file" is and what a lot of diagnostics would be?

Because in my experience, the reason things get slower is not that 100–1000 diagnostics are sent from the server to the client, but that the server is busy parsing and doing type inference (which is probably 100 times more time-intensive than sending the data).

@keyboardDrummer
Author

keyboardDrummer commented Sep 11, 2023

Consumers of our VSCode extension (for Dafny) have files that are thousands of lines, and information diagnostics are likely to occur on every line. An information diagnostic per line may seem like a lot, but our language has a relatively high level of implicitness/inference (unrelated to types), so it can help to tell the user what the compiler has inferred from their code.

Sending 1000 diagnostics for a single file might be OK, but we have the additional situation that we may send many publishDiagnostics notifications for a single document change.

The reason for this is that our back-end has one fast phase (parsing/name resolution/type inference) that takes about 100 ms, but it then does a complicated analysis (called verification) on separate program elements, of which a large file may have hundreds, each of which can take between 100 and 1000 ms. Automatically running verification is in some ways very similar to automatically running unit tests. We publish diagnostics after the fast phase, but also each time verification of any element completes, so we may send 100 publishDiagnostics notifications.

What I'm worried about is sending 1000 (#lines × diagnostics/line) × 100 (number of publishDiagnostics calls) diagnostics per document change. I'll admit we are not evidently running into this at the moment, but it seems like our current design won't scale to ever-growing customer codebases.

I will say, though, that we've extended LSP with a notification API that tells the client the current progress of verification, i.e. for each verifiable element whether it is not being run/queued/running/completed. We could send additional diagnostics through this API, and use publishDiagnostics only for the fast-phase (parsing/name resolution/type inference) diagnostics.
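For illustration, such a custom notification could be declared like this with `vscode-languageserver-protocol`; the method name, payload shape, and status names here are all hypothetical, not the actual Dafny extension:

```typescript
import { NotificationType, Range } from 'vscode-languageserver-protocol';

// Hypothetical status values: one entry per verifiable element in the document.
type VerificationStatus = 'notRun' | 'queued' | 'running' | 'completed';

interface VerifiableElementStatus {
  range: Range;               // location of the verifiable element
  status: VerificationStatus;
}

interface VerificationStatusParams {
  uri: string;
  version: number;            // document version the statuses apply to
  statuses: VerifiableElementStatus[];
}

// Method name chosen for illustration only; a real extension would pick its own.
const verificationStatusNotification =
  new NotificationType<VerificationStatusParams>('custom/verificationStatus');

// A server would then send it with something like:
// connection.sendNotification(verificationStatusNotification, params);
```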
