
[proxytest] Wait all requests to finish before closing the server #5950

Merged
AndersonQ merged 4 commits into elastic:main on Nov 7, 2024

Conversation

AndersonQ (Member)

What does this PR do?

Makes proxytest wait for all requests to finish before closing the underlying HTTP server.

Why is it important?

The request log sometimes happens after the test has finished, which causes the test to panic.

Checklist

  • [x] My code follows the style guidelines of this project
  • [x] I have commented my code, particularly in hard-to-understand areas
  • [ ] I have made corresponding changes to the documentation
  • [ ] I have made corresponding changes to the default configuration files
  • [ ] I have added tests that prove my fix is effective or that my feature works
  • [ ] I have added an entry in ./changelog/fragments using the changelog tool
  • [ ] I have added an integration test or an E2E test

Disruptive User Impact

How to test this PR locally

go test -count 15000 -timeout 0 -run TestHTTPSProxy ./testing/proxytest/

Related issues

Questions to ask yourself

  • How are we going to support this in production?
  • How are we going to measure its adoption?
  • How are we going to debug this?
  • What are the metrics I should take care of?
  • ...

@AndersonQ AndersonQ added the skip-changelog, backport-8.x (Automated backport to the 8.x branch with mergify) and backport-8.16 (Automated backport with mergify) labels on Nov 6, 2024
@AndersonQ AndersonQ self-assigned this Nov 6, 2024
@AndersonQ AndersonQ requested a review from a team as a code owner November 6, 2024 07:33
@@ -176,6 +178,9 @@ func New(t *testing.T, optns ...Option) *Proxy {

p.Server = httptest.NewUnstartedServer(
http.HandlerFunc(func(ww http.ResponseWriter, r *http.Request) {
p.requestsWG.Add(1)
Contributor:

just a maybe stupid question, but my understanding is that we increment the waitgroup on each request. Don't we know how many requests we are going to perform in this test, so we could add them beforehand and leave only requestsWG.Done() inside the handler func?

Member Author:

no, we don't. The proxy is used in tests and each test might vary; it might not even be possible to know for sure how many requests will go through the proxy.

Contributor:

just thinking out loud here 🙂 how do we cover the scenario where there are still incoming requests, but for some reason one of them manages to call Done() on the WaitGroup and cause the Wait() in Close() to exit, while other incoming requests haven't executed Add() on the WaitGroup yet?!


Contributor:

moving the close before the wait is effectively the same code as before, though, right?! I mean, from the description of the issue ("Makes proxytest wait for all requests to finish before closing the underlying HTTP server") I understand that the root cause is that we call p.Server.Close() before all requests have finished, right?!

Member Author:

no, the point is that p.Server.Close() should wait for all requests to finish, then return.
The last request log should happen before the server closes, but, as we see in the failing test, somehow that isn't the case.
That's why I added another barrier to try to force the request log to be written before the test ends.

Contributor:

I still don't see how p.Server.Close() waits for all requests to finish, especially now that it comes before p.requestsWG.Wait(); p.Close() will probably only wait until a Done() inside the HandlerFunc lowers the WaitGroup counter enough to let p.requestsWG.Wait() return. 🙂

Member Author:

That's right, the wait has to come first.
The main idea here is to delay the end of the test just enough for the last request log to be written before the test ends.

That's why it isn't a huge problem if a request slips past the wait group.

Contributor (@pkoutsovasilis), Nov 6, 2024:

I am far from an expert in this particular piece of code, but the case we have here reminds me a lot of other rare cases where we need to close a channel on the reader's end without knowing whether the writers are done writing to it. Usually we treat such cases by first shutting down the writers (here, the components that make requests to the proxy); then, if we have no shutdown confirmation from the writers, we wait as long as we think the writers should take to shut down (not ideal), and then we close the channel (the proxy server in this case). Would a similar approach work here?! Could each test shut down the components that make requests to the proxy first and then close the proxy server?

@pierrehilbert pierrehilbert added the Team:Elastic-Agent-Control-Plane (Label for the Agent Control Plane team) label on Nov 6, 2024
@elasticmachine (Contributor):

Pinging @elastic/elastic-agent-control-plane (Team:Elastic-Agent-Control-Plane)

@michalpristas michalpristas added the bug (Something isn't working) label on Nov 6, 2024
@AndersonQ AndersonQ added the flaky-test (Unstable or unreliable test cases) label and removed the bug (Something isn't working) label on Nov 6, 2024
@pchila (Member) left a comment:

Since we are only interested in making sure that the server completes any outstanding requests before we complete the test, I can see multiple options:

  • create a serverch chan struct{} and close it after server.Close() has completed, while blocking with a <-serverch so that we know that the shutdown has completed
  • uninstall the agent in each testcase and then close the server: in that case the only client of the server has disappeared, so closing it should be safe (even if some synchronization would be better)
  • close the server in the test (blocking) and set it to nil, and also try to close it in a defer/Cleanup() if not nil (only for tests that panic/do not complete for some reason)
  • have each testcase start and stop its own proxy server(s) in a synchronized manner to reduce the lifetime of the servers, and let each test decide when to stop it

@pkoutsovasilis (Contributor) commented Nov 6, 2024:

Thanks for chiming in @pchila 🙂 @AndersonQ do you want to consider whether any of the proposals here is easily applicable without causing heavy changes? Or, if the current WaitGroup approach makes the CI green again, do you want to merge this one for now and investigate a different approach as a follow-up? 🙂

@AndersonQ (Member Author):

As this is a flaky test, I'd rather merge it and see if it fixes the issue. If it doesn't, we might move to another approach or even remove the request log altogether.

@AndersonQ (Member Author):

so far CI has been pretty good at triggering it, so we should know soon if this fixes the issue or not

Quality Gate passed

Issues
0 New issues
0 Fixed issues
0 Accepted issues

Measures
0 Security Hotspots
No data about Coverage
No data about Duplication

See analysis details on SonarQube

@pkoutsovasilis (Contributor) left a comment:

I saw that the unit tests in CI are back to green with this PR, so 👍 That said, in the future we should try to implement a different solution that deliberately prevents the same hiccup from happening 🙂

@AndersonQ AndersonQ merged commit e423d73 into elastic:main Nov 7, 2024
14 checks passed
@AndersonQ AndersonQ deleted the proxytest-fix-race-with-test-end branch November 7, 2024 14:20
mergify bot pushed a commit that referenced this pull request Nov 7, 2024
mergify bot pushed a commit that referenced this pull request Nov 7, 2024 (cherry picked from commit e423d73; conflicts: testing/proxytest/proxytest.go)
AndersonQ added a commit that referenced this pull request Nov 7, 2024 (#5971) (cherry picked from commit e423d73)
AndersonQ added a commit to AndersonQ/elastic-agent that referenced this pull request Nov 22, 2024
AndersonQ added a commit that referenced this pull request Nov 22, 2024 (cherry picked from commit e423d73)
AndersonQ added a commit that referenced this pull request Dec 2, 2024: "… closing the server" (#5970), combining the backport of #5878 (proxytest: fix log after test finished, cherry picked from commit 02fb75e) and the backport of #5950 ([proxytest] Await requests before server shutdown, cherry picked from commit e423d73)
Co-authored-by: Anderson Queiroz <anderson.queiroz@elastic.co>
Labels
backport-8.x (Automated backport to the 8.x branch with mergify), backport-8.16 (Automated backport with mergify), flaky-test (Unstable or unreliable test cases), skip-changelog, Team:Elastic-Agent-Control-Plane (Label for the Agent Control Plane team)
Projects
None yet
Development

Successfully merging this pull request may close these issues.

[Flaky Test]: TestHTTPSProxy – data race
6 participants