Add streaming support #182
Conversation
Hi @felixfbecker, I'm your friendly neighborhood Microsoft Pull Request Bot (you can call me MSBOT). Thanks for your contribution! TTYL, MSBOT
See microsoft/vscode#20010 for an umbrella issue that I have created to track the associated implementation details. Specifically, these two PRs demonstrate an implementation of this proposal in VS Code
ping @dbaeumer
Conceptually, diagnostics are a stream. (The protocol currently lacks a way for the server to indicate "there are no further diagnostics left to report", but it should have one.) Can you think of any way to unify diagnostics with streamed results?
@ljw1004 could you explain where you see the streaming in diagnostics? Diagnostics are reported per file and clear previously reported diagnostics for that file. They are typically only reported when you open a file, not for all files, so there is usually no real known point at which all diagnostics for all files have been reported.
Why do you say that? In C# and Visual Studio, when you edit a file (e.g. changing the signature of a method), it works through the entire project to find every single place that might be affected. It reports errors including for files that haven't yet been opened. When you work on the Roslyn codebase itself in Visual Studio (it's a fairly large C# project), it sometimes takes up to a minute to figure out all the errors in other files. In more moderately-sized projects I commonly saw 10+ seconds to figure out all the errors in other files. For our own team, we also display errors throughout the entire project, but our project is even bigger so it takes a bit longer.
Regarding the ping #182 (comment): given the priorities we currently have for VS Code, I can't commit to giving reasonable feedback on this pull request in the near future. I agree that streaming support is a cool feature for the LSP, but it requires quite some work to get right, especially for clients. Maybe we should start with a progress specification first, which could mitigate some of the streaming use cases. Regarding markers: language servers can provide markers for all resources in a workspace. There is no restriction to only handle open resources. This is why diagnostics are actually 'stream based' on a resource level and are notifications sent from the server to the client, not request results.
Note that the Dart analysis server supports streaming in a pretty general way, without using JSON Patch, streaming parsers or other complicated tools. An example would be symbol search: https://htmlpreview.github.io/?https://github.com/dart-lang/sdk/blob/master/pkg/analysis_server/doc/api.html#request_search.findTopLevelDeclarations The way it works is that, for "streaming" requests, the server answers immediately with an ID that identifies the search, and the results then arrive as notifications. Each notification usually contains the full set of items (so newer ones overwrite older ones), but in select cases (workspaceSymbol being one of them) the results from all notifications are concatenated.
Please see also #786 |
Closing the issue. Corresponding support has been specced in the dbaeumer/3.15 branch.
@dbaeumer I'm looking in the dbaeumer/3.15 branch but I see nothing there about streaming - the only mention of "stream" is for the LSIF exporter, and there seems to be no mention of JSON-Patch or progress or partialResult. Am I just looking in the wrong place? |
@ljw1004 Look at the specification.md. It is achieved by supporting partial results. |
Ah, thanks @KamasamaK - I'd simply been looking in the wrong place. |
@ljw1004 I published the upcoming 3.15 spec on the Web/UI here: https://microsoft.github.io/language-server-protocol/specifications/specification-3-15/ The progress relevant sections are: https://microsoft.github.io/language-server-protocol/specifications/specification-3-15/#progress |
Are there any language servers implementing this feature? |
Someone? Or at least is there a language server planning to implement it anytime soon? |
This is a proposal for adding "streaming" support to LSP, more specifically the use case of wanting to return parts of a result before the full result is finished (#141). This is especially desirable for methods that can return lots of results, like `workspace/symbol` and `textDocument/completion`, and are therefore often expensive, even when the UI can only show a subset of the full response anyway.

There are a couple of approaches to this:

- Streaming the response body itself is not really viable because every message needs a `Content-Length` header, so the sender needs to know the full content in advance.
- Polling results in unnecessary overhead. With the request the client expressed clear interest in the result, so the server should provide what it has as soon as possible.
Introducing a new method (`$/partialResult`) that the server calls on the client is therefore the best approach. This new method should be passed the ID of the request the server is returning partial results for and the partial data itself.

The data structure used to represent the partial result should work for all methods. In the simplest case, that means adding items to a result array. That should also work if the array is nested inside an object. For example, `textDocument/completion` returns an object with `isIncomplete` and `items`. The language server should be able to fill out `isIncomplete` as well as add new items to the `items` array with partial results. The language server should also have the ability to provide partial objects, then fill out missing properties later. For example, it should be able to provide `Location`s with just URIs initially, or `SymbolInformation`s with just the name, then fill out the `range` later because it might be expensive to compute, not desirable to keep in memory, and not needed to show the user initially. The client could then already make use of the partial information.

Just trying to "merge" multiple objects/arrays together will not suffice for this, as there would be no way to address specific array indices or to append to an array.
In addition, using a well-defined standard would have the benefit of having libraries available in many languages that ease implementation.
This is solved by JSON Patch. A JSON Patch is nothing but an array of operations which modify a value at a certain path. In particular, it is possible to add more items to arrays as well as add properties to objects no matter how nested the value is inside the result.
The client would just continuously apply the operations to a result and then, for example, rerender the UI. The UI might have to be modified so it handles undefined values, for example by showing "Loading..." where it would normally show certain properties, or by filtering out items that it cannot display because of incompleteness. The client might also cancel the request after it has received the information it was looking for.
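A minimal sketch of the client side, assuming "add" operations only (a real client would use a full RFC 6902 JSON Patch library rather than this toy applier):

```typescript
// Toy JSON Patch applier for "add" operations: appends to an array when
// the final path segment is "-", otherwise sets a property or index.
type Json = { [key: string]: any } | any[];

interface AddOp { op: "add"; path: string; value: any }

function applyAdd(doc: Json, op: AddOp): void {
  const segments = op.path.split("/").slice(1); // drop leading ""
  const last = segments.pop()!;
  let target: any = doc;
  for (const seg of segments) target = target[seg];
  if (Array.isArray(target) && last === "-") {
    target.push(op.value);    // append to array
  } else {
    target[last] = op.value;  // set property or array index
  }
}

// The client starts with an empty result and applies each incoming patch,
// rerendering the UI after every application. Here a symbol arrives with
// just its name, and its location is filled in by a later operation.
const result: { name: string; location?: { uri: string } }[] = [];
applyAdd(result, { op: "add", path: "/-", value: { name: "parseConfig" } });
applyAdd(result, {
  op: "add",
  path: "/0/location",
  value: { uri: "file:///src/config.ts" },
});

console.log(JSON.stringify(result));
// [{"name":"parseConfig","location":{"uri":"file:///src/config.ts"}}]
```

The second operation demonstrates the key advantage over naive merging: a JSON Pointer can address a specific existing array index (`/0/location`) to complete a partial object later.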
The final response indicates the result is complete. If the receiver opted into streaming through the `Server`/`ClientCapability` and the server sent at least one partial result notification, then the response result can just be `null` and is ignored by the client. Applying all patches sent through `$/partialResult` should yield the exact same result as a non-streamed response.

JSON Patch has libraries for almost every language, including JS, Python, PHP, Ruby, Perl, C, Java, C#, Go, Haskell and Erlang. This should make implementation easy for clients and servers.
Simple example:
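A minimal sketch of what a streamed `workspace/symbol` exchange could look like under this proposal (the message shapes and values are illustrative, not taken from a spec):

```typescript
// 1. Client sends an ordinary request.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "workspace/symbol",
  params: { query: "parse" },
};

// 2. Server streams partial results: each $/partialResult notification
//    carries JSON Patch operations against the result of request 1.
const partial1 = {
  jsonrpc: "2.0",
  method: "$/partialResult",
  params: {
    id: 1,
    patch: [{ op: "add", path: "/-", value: { name: "parseConfig" } }],
  },
};
const partial2 = {
  jsonrpc: "2.0",
  method: "$/partialResult",
  params: {
    id: 1,
    // Fill in a property that was initially omitted because it was
    // expensive to compute.
    patch: [
      { op: "add", path: "/0/location", value: { uri: "file:///src/config.ts" } },
    ],
  },
};

// 3. Final response: the result is null because the complete result was
//    already delivered through the partial result notifications.
const response = { jsonrpc: "2.0", id: 1, result: null };

console.log(partial1.params.patch.length + partial2.params.patch.length); // 2
```

Applying the two patches in order yields the same array a non-streamed response would have returned, which is the invariant the proposal requires.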
We experimented with this proposal at Sourcegraph in two language servers and are working on a VS Code client implementation.