
Check partial answers for ongoing requests #141

Closed
vladdu opened this issue Dec 8, 2016 · 10 comments
Labels
*duplicate: Issue identified as a duplicate of another issue(s)
feature-request: Request for new features or functionality

Comments

@vladdu
Contributor

vladdu commented Dec 8, 2016

For long-running requests, it makes sense to show partial results in the UI (for example when searching for all references in a workspace, but even for completion). Streaming them from the server is not a very good idea, but the client could send progress requests whenever it wants and receive the current results, updating the UI. Kind of like cancelRequest but without cancelling.
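A minimal sketch of what such a request could look like, in the TypeScript notation the spec uses; the method and both interfaces below are hypothetical, not part of the protocol:

```typescript
// Hypothetical "poll for partial results" request, modeled on $/cancelRequest:
// the client may send it at any time while the request identified by `id` is
// still in flight, and the server answers with whatever it has so far.
interface PartialResultsParams {
    /** The id of the ongoing request to inspect. */
    id: number | string;
}

// Hypothetical response shape: the results computed so far, plus a flag
// telling the client whether the original request has already finished.
interface PartialResultsResult {
    done: boolean;
    results: unknown[];
}
```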

@felixfbecker

I have made a proposal for progress in #70

@vladdu
Contributor Author

vladdu commented Dec 27, 2016

This is about more than progress reporting. I'd like to be able to get a list of partial results for an ongoing request. This should be something the client requests.

@kdvolder

kdvolder commented Dec 28, 2016

> Streaming them from the server is not a very good idea,

Why not? Personally, I very much like the idea of having a fully async and incremental mechanism for sending completions (for example).

Having this allows the server to deliver results as soon as it has them, rather than having to buffer them and make an awkward implementation decision about how much time is 'reasonable' to spend computing as many results as possible before returning an incomplete list. With the buffering approach you often end up returning 0 completions even though the user was willing to wait a bit (or had just stopped typing for a moment to reflect).
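As a sketch of that model (not an actual LSP API; the lookup functions and the `publishPartial` callback are hypothetical stand-ins for real server logic), a server could flush each batch of completions as soon as it is computed:

```typescript
// Hypothetical incremental completion producer: cheap results are emitted
// immediately, expensive ones follow as soon as they become available.
declare function lookupLocalSymbols(prefix: string): string[];
declare function lookupWorkspaceSymbols(prefix: string): Promise<string[]>;

async function* computeCompletions(prefix: string): AsyncGenerator<string[]> {
    yield lookupLocalSymbols(prefix);           // available immediately
    yield await lookupWorkspaceSymbols(prefix); // slower, streamed when ready
}

async function serveCompletions(
    prefix: string,
    publishPartial: (items: string[]) => void
): Promise<void> {
    for await (const batch of computeCompletions(prefix)) {
        publishPartial(batch); // deliver each batch without buffering the rest
    }
}
```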

@vladdu
Contributor Author

vladdu commented Dec 28, 2016

I also think a fully async streaming API would be neat. Its downside is that it offers no flow control. Also, I believe that clients might have trouble handling it properly, making it less likely to get implemented. I started with a less ambitious suggestion, hoping it would be easier to get accepted.

I have absolutely no objection to a fully async protocol when it is supported by both the client and the server. There will have to be a fallback solution for participants that need flow control or can't handle async streams of events.

@dbaeumer added the feature-request label Dec 29, 2016
@dbaeumer
Member

With the new capability flag support in 3.0, I can imagine a flag 'supportsResultStreaming' on the completion capability to indicate that the client accepts streamed results. Then the server can stream results.
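In the spec's TypeScript notation such a flag might look like the sketch below; `supportsResultStreaming` is the name suggested above, and the surrounding interface is illustrative rather than the actual 3.0 capability structure:

```typescript
// Hypothetical client capability, advertised during initialize: the client
// declares that it can handle completion results arriving in installments.
interface CompletionClientCapabilities {
    /** The client accepts streamed (partial) completion results. */
    supportsResultStreaming?: boolean;
}
```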

@felixfbecker

felixfbecker commented Dec 29, 2016

> With the new capability flag support in 3.0, I can imagine a flag 'supportsResultStreaming' on the completion capability to indicate that the client accepts streamed results. Then the server can stream results.

How exactly would that work? Streaming depends entirely on the client parsing the response as a JSON stream, for example, if the result is an array, emitting each array item as soon as it is parsed instead of waiting for the whole response to be parsed.

EDIT: Okay, it doesn't work, because the server needs to send a Content-Length beforehand.
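For context, every message in the base protocol is framed by a Content-Length header covering the entire JSON body, so the body has to be fully serialized before the header can be written; streaming therefore has to happen as separate messages rather than inside one response:

```
Content-Length: 51\r\n
\r\n
{"jsonrpc":"2.0","id":1,"result":["item1","item2"]}
```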

@vladdu
Contributor Author

vladdu commented Dec 29, 2016

Not necessarily, I think. The server could also send multiple answers to a request, with a flag saying whether it's done or not. There would have to be a merging policy for the non-array values. This way it works fine even if a response were to contain two arrays that are computed in parallel (no such response exists now, but there could be one).
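A sketch of that idea (the envelope below is hypothetical, not part of the protocol): the server answers the same request id several times, and a flag marks the final installment:

```typescript
// Hypothetical partial-response envelope: the server may send several of
// these for one request id; `done: true` marks the last installment. The
// client merges installments according to an agreed merging policy.
interface PartialResponse<T> {
    id: number | string; // id of the original request
    done: boolean;       // false while more installments will follow
    result: T;           // partial result to merge into what was received
}
```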

@dbaeumer
Member

Agreed with @vladdu. This can be achieved with paging support. That would even allow the client to request all remaining results if the user picks an already-sent item, in the code completion example.
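A minimal paging sketch (all names hypothetical): the server returns one page plus an opaque cursor, and the client fetches the rest only when it needs it, for example when the user picks an item from an incomplete completion list:

```typescript
// Hypothetical paged result: `nextCursor` is an opaque token the client can
// send back to request the following page; it is absent on the last page.
interface PagedResult<T> {
    items: T[];
    nextCursor?: string;
}

// Hypothetical follow-up request asking for the next page of an earlier result.
interface NextPageParams {
    /** Cursor taken from a previous PagedResult. */
    cursor: string;
}
```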

@felixfbecker

Partial results are possible with our streaming implementation: #182

@dbaeumer
Member

Marking as dup of #110

@dbaeumer added the *duplicate label Nov 24, 2017
@vscodebot bot locked and limited conversation to collaborators Jan 8, 2018