How to support streaming responses? #161
Another option that I am now favoring is not to add a new method, but instead to modify the return type to return a wrapper object that contains both the response promise and the request. The caller could then read the assigned request id off that wrapper.
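Something like this, roughly (the `RequestHandle` name and shape are just an illustration, not the actual vscode-jsonrpc API):

```ts
import { RequestMessage } from 'vscode-jsonrpc';

// Hypothetical wrapper returned by sendRequest instead of a bare Thenable.
interface RequestHandle<R> {
    // The request message as written to the wire, including the id assigned
    // by the jsonrpc layer, so application code can correlate later messages.
    request: RequestMessage;
    // Resolves with the final response, as before.
    response: Thenable<R>;
}

// Hypothetical changed signature:
// sendRequest<R, E>(method: string, params: any, token?: CancellationToken): RequestHandle<R>;
```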
Not backwards compatible, but seems nice in that it keeps ownership of the id at the jsonrpc layer (avoids duplicates).
@nicksnyder I am not sure I understand exactly what you want to do. IMO there can only be exactly one response message for a request message. If we want streaming / paging then I think this must be implemented on top of request, notification and response. Can you give a more concrete example of what you would use this for?
Felix just made a proposal for LSP here. Based on that proposal, the requests would look roughly like the following.

Request to language server. ID=4 is part of the JSONRPC envelope but hidden by vscode-jsonrpc.
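Something like this (the exact payload is illustrative, not copied from the proposal):

```ts
// Hypothetical textDocument/references request; the id is assigned by vscode-jsonrpc.
const referencesRequest = {
    jsonrpc: '2.0',
    id: 4,
    method: 'textDocument/references',
    params: {
        textDocument: { uri: 'file:///project/src/example.ts' },
        position: { line: 10, character: 4 },
        context: { includeDeclaration: true }
    }
};
```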
Partial results sent by language server. The language server sends id=4 as an application parameter for these requests so the client can associate these with the above request.
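These could look something like the following ($/partialResult is the method name used elsewhere in this thread; the `patch` param shape is an assumption):

```ts
// Hypothetical $/partialResult message; params.id echoes the original request id (4),
// and the payload is a JSON PATCH appending the newly found locations.
const partialResult = {
    jsonrpc: '2.0',
    method: '$/partialResult',
    params: {
        id: 4,
        patch: [
            {
                op: 'add',
                path: '/-',
                value: {
                    uri: 'file:///project/src/other.ts',
                    range: { start: { line: 3, character: 0 }, end: { line: 3, character: 8 } }
                }
            }
        ]
    }
};
```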
Final result sent by the language server.
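And something like this for the final response, which is the single response message paired with id=4 (shape assumed):

```ts
// Hypothetical final response closing out the textDocument/references request.
const finalResult = {
    jsonrpc: '2.0',
    id: 4,
    result: [
        {
            uri: 'file:///project/src/other.ts',
            range: { start: { line: 3, character: 0 }, end: { line: 3, character: 8 } }
        }
        // ...the complete set of locations (or possibly empty, if the partials carried everything)
    ]
};
```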
In short, to handle the incoming $/partialResult requests, the client (VS Code) needs to know which JSONRPC id was assigned to the outbound textDocument/references request. This is not the only path forward: the original textDocument/references request could be modified to include its own id param (in addition to the id assigned by the JSONRPC envelope). I am hoping to have a PR later today that might clarify things.
If we want to support something like this it should be handled by the JSON RPC library and should be 'hidden' from clients (e.g. not exposing how this works on the protocol level). So if someone receives a request on the server side there could be something like a progress token which can be used to report partial results. Same on the client side when sending a request. I would like to make this similar to cancellation tokens, which hide the request ID from clients and servers as well.
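By analogy with cancellation tokens, that could look roughly like this (all names here are assumptions, not an existing vscode-jsonrpc API):

```ts
// Hypothetical progress token handed to request handlers on the receiving side.
interface ProgressToken<P> {
    // Report a partial result; the library would translate this into the wire
    // format (e.g. a $/partialResult message carrying the hidden request id).
    report(partial: P): void;
}

// Hypothetical server-side handler signature, mirroring how CancellationToken is passed today:
// connection.onRequest('textDocument/references', (params, cancelToken, progress: ProgressToken<Location[]>) => { ... });

// Hypothetical client side: pass a progress callback and never see the request id.
// connection.sendRequest('textDocument/references', params, cancelToken, {
//     onProgress: (partial: Location[]) => renderPartialResults(partial)
// });
```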
@dbaeumer Ah, thanks for the feedback. I wasn't sure if this logic belonged in vscode-jsonrpc or the client. Sounds like the former. I will work on this. Followup question: since the proposed streaming protocol is JSON PATCH, would you be ok with me adding a dependency like fast-json-patch to vscode-jsonrpc?
Another point to discuss is how the progress data should be propagated.
(z) is proposed to be JSON PATCH. Should (x) be a diff or an updated partial result on each tick?

**Option 1**

If (x) is a partial result, meaning that each progress update contains the cumulative data of all progress updates, then vscode-jsonrpc can apply the JSON PATCH and just forward the partial result from there. This is simple and doesn't require any endpoint-specific logic in vscode-jsonrpc or vscode-languageclient. The main thread might need to decide if the partial is worth re-rendering or not. The downside is that the intermediate results will end up serializing redundant data over (x) (i.e. the main thread would get ["a"], then ["a", "b"], then ["a", "b", "c"], etc.). Not sure how bad this would be.

**Option 2**

Assume that (x) is a diff that only contains the newest data since the last progress update.

**Option 2a**

This diff could be represented as a JSON PATCH that is just forwarded all the way from vscode-jsonrpc to the main thread. Simple, but it also exposes the wire format, which seems like an implementation detail that the main thread shouldn't have to care about.

**Option 2b**

This diff could be represented as a data model (e.g. Location[] for reference requests). This decouples the main thread from JSON PATCH, but also requires endpoint-specific logic to support streaming. It is worth noting that endpoint-specific logic already exists (i.e. translating between 0- and 1-based indexes for locations).

@dbaeumer Thoughts?
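Concretely, the difference between Option 1 and Option 2a might look like this for a growing references result (shapes assumed for illustration):

```ts
// Option 1: each update over (x) carries the cumulative result so far.
const update1 = ['a'];
const update2 = ['a', 'b'];
const update3 = ['a', 'b', 'c']; // redundant data grows with every tick

// Option 2a: each update over (x) is a JSON PATCH diff with only the new data,
// e.g. appending to the result array. The main thread has to apply the patches
// itself, which exposes the wire format.
const patchUpdate2 = [{ op: 'add', path: '/-', value: 'b' }];
const patchUpdate3 = [{ op: 'add', path: '/-', value: 'c' }];
```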
I think it's best to discuss this using a concrete example; otherwise we implement something technical which might be cool but useless at the application layer. The first question for me is: why would we need streaming? The only answer I can come up with right now is that the result set is huge and partial results can already be presented to the user. If this is the use case we need to answer a couple of questions:
If we go with paging then IMO there might not be anything to do on the json-rpc level; this would be pure application layer. Streaming, on the other hand, requires changes on the json-rpc level. A good example to see how this could work is workspace symbols or find all references.
The first use case I am thinking of is streaming "find all references." Here are related issues/PRs:
This is more about latency than size (although the latter impacts the former). It may take a while for a language server to scan a large repo, but it is useful to show the user results as they are found.
In my proposed PR, any time the client receives more data, it re-sorts and re-renders while maintaining the user's state.
Streaming. A paging solution would require that the server know about relevance so that the first page would be the "most relevant". Plus, there may be multiple reference providers, so sorting has to be done on the client anyway. In my proposal, language servers just stream references as they find them and the client is responsible for sorting/rendering.
Let's continue general discussion of this on microsoft/vscode#20010. I will open a PR specifically for vscode-languageserver-node, hopefully by tomorrow.
For the record, #164 is the PR associated with this issue. Let's still continue the discussion on microsoft/vscode#20010 since this change impacts multiple components.
Right now, the jsonrpc library (specifically MessageConnection.sendRequest) completely hides the fact that an id is assigned to each request (so that it can be paired with the response).
To support streaming responses, it will be necessary for application-level logic to be able to associate inbound messages with a pending outbound message. The JSONRPC message id serves exactly this purpose, so it would be nice to expose it. Otherwise, the application will need to send its own (basically duplicate) id as a param.
One backwards compatible way to do this would be to add a new method to MessageConnection called `sendRequestMessage: <R, E>(requestMessage: RequestMessage, token?: CancellationToken): Thenable<R>`. The only downside would be potential message id conflicts between usages of `sendRequestMessage` and `sendRequest`.

Thoughts?
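If such a method existed, a caller might use it roughly like this (the id handling shown is illustrative, not an actual API):

```ts
import { RequestMessage } from 'vscode-jsonrpc';

// Hypothetical usage: the application builds the RequestMessage itself, so it knows
// the id up front and can correlate later $/partialResult messages with it.
const requestMessage: RequestMessage = {
    jsonrpc: '2.0',
    id: 4,
    method: 'textDocument/references',
    params: { /* textDocument, position, context */ }
};

// pendingByRequestId.set(requestMessage.id, requestMessage);
// const result = await connection.sendRequestMessage<Location[], void>(requestMessage);
```

This is also where the noted downside shows up: ids chosen by the application could collide with ids the library assigns for plain `sendRequest` calls.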