[proxytest] Wait all requests to finish before closing the server #5950
Conversation
```
@@ -176,6 +178,9 @@ func New(t *testing.T, optns ...Option) *Proxy {
	p.Server = httptest.NewUnstartedServer(
		http.HandlerFunc(func(ww http.ResponseWriter, r *http.Request) {
			p.requestsWG.Add(1)
```
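For context, a minimal sketch of the pattern under review; only `requestsWG` and the handler wiring mirror the diff, the constructor is simplified from the real `New(t *testing.T, optns ...Option)` and everything else is assumed:

```go
package proxytest_sketch

import (
	"net/http"
	"net/http/httptest"
	"sync"
)

// Minimal sketch of the change under review; field and method names
// (requestsWG, Close) are taken from the diff, the rest is simplified.
type Proxy struct {
	Server     *httptest.Server
	requestsWG sync.WaitGroup
}

func New() *Proxy {
	p := &Proxy{}
	p.Server = httptest.NewUnstartedServer(
		http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			p.requestsWG.Add(1)       // count the request as it enters the handler
			defer p.requestsWG.Done() // release it when the handler returns
			// ... the actual proxying logic lives here ...
		}))
	return p
}

// Close waits for the counted in-flight requests, then shuts the server down.
func (p *Proxy) Close() {
	p.requestsWG.Wait()
	p.Server.Close()
}
```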
Just a maybe-stupid question, but my understanding is that we increment the wait group on each request. Don't we know how many requests we are going to perform in this test, so we could add them beforehand and leave only `requestsWG.Done()` inside the handler func?
No, we don't. The proxy is used in tests and each test might vary; it may not even be possible to know for sure how many requests will go through the proxy.
Just thinking out loud here 🙂 How do we cover the scenario where there are still incoming requests, but for some reason one of them manages to call Done() on the wait group and causes the Wait() in Close() to exit while other incoming requests haven't executed their Add() on the wait group yet?
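A simplified, hypothetical illustration of the race being described (none of this is the actual proxytest code): because Add only happens inside the handler, Wait can return between one request finishing and the next one reaching Add:

```go
package main

import (
	"net/http"
	"net/http/httptest"
	"sync"
	"time"
)

func main() {
	var wg sync.WaitGroup

	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		wg.Add(1)        // a request that has not reached this line yet is invisible to Wait
		defer wg.Done()
		time.Sleep(10 * time.Millisecond) // simulate proxy work
	}))

	go func() { _, _ = http.Get(srv.URL) }() // request A: likely already inside the handler
	go func() { _, _ = http.Get(srv.URL) }() // request B: may still be on the wire

	time.Sleep(5 * time.Millisecond) // let request A get in
	wg.Wait()                        // may return while request B has not called Add yet
	srv.Close()                      // ...so closing here can still overlap with request B
}
```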
I updated it to close the server first: https://github.com/elastic/elastic-agent/pull/5950/files#diff-a0f5baf4d9cd4570b51a51302d316d3628ef5833bae89a0819aa1abfce8ce2deR254
Moving the close before the wait is effectively the same code as before though, right? I mean, from the description of the issue, "Makes the proxytest wait for all requests to finish before closing the underlying HTTP server", I understand that the reason behind the issue is that we call p.Server.Close() before all requests have finished, right?
No, the point is that p.Server.Close() should wait for all requests to finish, then return.
The last request log should happen before the server closes but, as we see in the failing test, somehow that isn't the case.
That's why I added another barrier to try to force the request log to be logged before the test ends.
I still don't see how p.Server.Close() waits for all requests to finish, especially now that it comes before p.requestsWG.Wait(); p.Close() will probably wait for some requests to finish, until a requestsWG.Done() inside the HandlerFunc lowers the wait group enough to let p.requestsWG.Wait() exit. 🙂
That's right, the wait has to come before.
The main idea here is to delay the end of the test a bit, just enough for the last request log to be logged before the test ends.
That's why it isn't a huge problem if a request slips past the wait group.
I am far from an expert in this particular piece of code, but the case we have here reminds me a lot of other rare cases where we need to close a channel on the reader's end without knowing whether the writers are done writing to it. Usually we treat such cases by first shutting down the writers (here, the components that make requests to the proxy); then, if we have no shutdown confirmation from the writers, we wait for as long as we think the writers should take to shut down (not ideal), and then we close the channel (the proxy server, in this case). Would a similar approach work here? Could each test "shut down" the components that make requests to the proxy first and then close the proxy server?
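A rough sketch of the pattern described above, with hypothetical names; the writers map to the components making requests through the proxy, and the channel maps to the proxy server:

```go
package shutdownsketch

import "time"

// Hypothetical sketch of "shut down writers first, then close" — not proxytest code.
func shutdown(stopWriters func(), writersDone <-chan struct{}, ch chan int) {
	stopWriters() // ask the writers (the proxy's clients, in the analogy) to stop first

	select {
	case <-writersDone: // writers confirmed they are finished
	case <-time.After(5 * time.Second): // no confirmation: wait as long as we think they need (not ideal)
	}

	close(ch) // only now close (the proxy server, in the analogy)
}
```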
Pinging @elastic/elastic-agent-control-plane (Team:Elastic-Agent-Control-Plane)
Since we are only interested in making sure that the server completes any outstanding requests before we complete the test, I can see multiple options:

- create a `serverch chan struct{}` and close it after `server.Close()` has completed, while blocking with a `<-serverch`, so that we know the shutdown has completed (see the sketch after this list)
- uninstall the agent in each test case and then close the server: in that case the only client of the server has disappeared, so closing it should be safe (even if some synchronization would be better)
- close the server in the test (blocking), set it to nil, and also try to close it in a `defer`/`Cleanup()` if not nil (only for tests that will panic/not complete for some reason)
- have each test case start and stop its own proxy server(s) in a synchronized manner, to reduce the lifetime of the servers and let each test decide when to stop it
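A minimal sketch of the first option, under assumed test wiring (only the `serverch` name comes from the comment):

```go
package proxytest_sketch

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

// Sketch of option 1: signal on a channel once server shutdown has completed.
func TestProxyShutdownSignal(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {}))

	serverch := make(chan struct{})
	t.Cleanup(func() {
		go func() {
			server.Close()  // blocks until in-flight requests on this server complete
			close(serverch) // signal that the shutdown has completed
		}()
		<-serverch // do not let the test finish before the shutdown has completed
	})

	// ... the test body making requests through the server would go here ...
	if _, err := http.Get(server.URL); err != nil {
		t.Fatalf("request failed: %v", err)
	}
}
```

In this reduced form the channel is equivalent to calling `server.Close()` synchronously; it becomes useful when the close is initiated somewhere other than where the test waits for it.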
Thanks for chiming in @pchila 🙂 @AndersonQ do you want to consider whether any of the proposals here is easily applicable without causing heavy changes? Or, if the current wait-group approach makes the CI green again, maybe you want to merge this one for now and investigate a different approach as a follow-up? 🙂
As this is a flaky test, I'd rather merge it and see if it fixes the issue. If it doesn't, we might move to another approach or even remove the request log altogether.
So far CI has been pretty good at triggering it, so we should know soon whether this fixes the issue or not.
I saw that the unit tests in CI are back to green with this PR, so 👍 That said, in the future we should try to implement a different solution to deliberately prevent the same hiccup from happening 🙂
(cherry picked from commit e423d73)

(cherry picked from commit e423d73)
Conflicts: testing/proxytest/proxytest.go

… closing the server (#5970)
* (backport #5878) proxytest: fix log after test finished (#5878) (cherry picked from commit 02fb75e)
* (backport #5950) [proxytest] Await requests before server shutdown #5970 (cherry picked from commit e423d73)
Co-authored-by: Anderson Queiroz <anderson.queiroz@elastic.co>
What does this PR do?
Makes proxytest wait for all requests to finish before closing the underlying HTTP server.
Why is it important?
The request log sometimes happens after the test has finished, which causes the test to panic.
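A hypothetical reduction of that failure mode (not the actual failing test): logging through `*testing.T` from a goroutine that outlives the test makes the test binary panic with a "Log in goroutine after ... has completed" error.

```go
package proxytest_sketch

import (
	"net/http"
	"net/http/httptest"
	"testing"
	"time"
)

// Hypothetical reduction: a request log emitted after the test returns panics.
func TestLateRequestLog(t *testing.T) {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		url := r.URL.String() // capture before the handler returns
		go func() {
			time.Sleep(50 * time.Millisecond)
			t.Logf("proxied %s", url) // if this fires after the test returns, the binary panics
		}()
	}))
	defer srv.Close()

	if _, err := http.Get(srv.URL); err != nil {
		t.Fatalf("request failed: %v", err)
	}
	// the test returns here; the delayed t.Logf above races with its completion
}
```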
Checklist
- [ ] I have made corresponding changes to the documentation
- [ ] I have made corresponding changes to the default configuration files
- [ ] I have added tests that prove my fix is effective or that my feature works
- [ ] I have added an entry in `./changelog/fragments` using the changelog tool
- [ ] I have added an integration test or an E2E test

Disruptive User Impact
How to test this PR locally
Related issues
Questions to ask yourself