
Fix: Cluster deletion is skipped if timeout happens in WaitUntilClusterIsReady #227

Merged: valaparthvi merged 1 commit into main from fix-cluster-deletion on Dec 18, 2024

Conversation

valaparthvi (Collaborator)

What does this PR do?

Whenever a test times out in WaitUntilClusterIsReady, the helper returns a nil cluster, so the cleanup step sees no cluster to act on and skips deletion, leaking the downstream cluster. This PR fixes that.

------------------------------
• [FAILED] [1800.626 seconds]
P0Provisioning when a cluster is created [BeforeEach] should successfully provision the regional cluster & add, delete, scale nodepool
  [BeforeEach] /home/gh-runner/actions-runner/_work/hosted-providers-e2e/hosted-providers-e2e/hosted/gke/p0/p0_provisioning_test.go:68
  [It] /home/gh-runner/actions-runner/_work/hosted-providers-e2e/hosted-providers-e2e/hosted/gke/p0/p0_provisioning_test.go:96

  Captured StdOut/StdErr Output >>
  Skipping downstream cluster deletion:  auto-gke-hp-ci-hzloh
  << Captured StdOut/StdErr Output

  Timeline >>
  "level"=0 "msg"="Using K8s version 1.31.3-gke.1162000 for cluster auto-gke-hp-ci-hzloh"
  [FAILED] in [BeforeEach] - /home/gh-runner/actions-runner/_work/hosted-providers-e2e/hosted-providers-e2e/hosted/gke/p0/p0_provisioning_test.go:85 @ 12/16/24 09:24:19.455
  << Timeline

  [FAILED] Expected
      <*errors.errorString | 0xc0006a60d0>: 
      timeout waiting on condition
      {
          s: "timeout waiting on condition",
      }
  to be nil
  In [BeforeEach] at: /home/gh-runner/actions-runner/_work/hosted-providers-e2e/hosted-providers-e2e/hosted/gke/p0/p0_provisioning_test.go:85 @ 12/16/24 09:24:19.455
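
To make the failure mode concrete, here is a minimal Ginkgo sketch of the pattern described above. The Cluster type and every helper in it (provisionCluster, waitUntilClusterIsReady, deleteCluster) are hypothetical stand-ins for illustration, not the actual helpers in this repo:

```go
package p0_test

import (
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

// Cluster and the helpers below are hypothetical stand-ins for the
// provider types and helpers used by hosted-providers-e2e.
type Cluster struct{ Name string }

func provisionCluster(name string) (*Cluster, error)       { return &Cluster{Name: name}, nil }
func waitUntilClusterIsReady(c *Cluster) (*Cluster, error) { return c, nil }
func deleteCluster(c *Cluster) error                       { return nil }

var cluster *Cluster

var _ = Describe("P0Provisioning", func() {
	BeforeEach(func() {
		var err error
		cluster, err = provisionCluster("auto-gke-hp-ci")
		Expect(err).To(BeNil())

		// Buggy pattern: on timeout the wait helper returns (nil, err),
		// so this assignment clobbers the shared cluster reference with
		// nil before the Expect below fails the spec.
		cluster, err = waitUntilClusterIsReady(cluster)
		Expect(err).To(BeNil())
	})

	AfterEach(func() {
		// With cluster set to nil by the failed wait, cleanup bails out
		// here, which is the "Skipping downstream cluster deletion"
		// message seen in the captured output above.
		if cluster == nil {
			GinkgoWriter.Println("Skipping downstream cluster deletion")
			return
		}
		Expect(deleteCluster(cluster)).To(BeNil())
	})
})
```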

Which issue(s) this PR fixes (optional, in fixes #<issue_number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):

Fixes #

Checklist:

Special notes for your reviewer:

Fix: Cluster deletion is skipped if timeout happens in WaitUntilClusterIsReady

Signed-off-by: Parthvi Vala <parthvi.vala@suse.com>
valaparthvi marked this pull request as ready for review December 17, 2024 06:24
valaparthvi (Collaborator, Author)

Excerpt from https://github.com/rancher/hosted-providers-e2e/actions/runs/12350788310/job/34464410095.

The test case timed out at WaitUntilClusterIsReady, but this time the cluster deletion was not skipped.

------------------------------
• [FAILED] [1800.842 seconds]
P0Provisioning when a cluster is created [BeforeEach] should be able to upgrade k8s version of the zonal provisioned cluster
  [BeforeEach] /home/gh-runner/actions-runner/_work/hosted-providers-e2e/hosted-providers-e2e/hosted/gke/p0/p0_provisioning_test.go:68
  [It] /home/gh-runner/actions-runner/_work/hosted-providers-e2e/hosted-providers-e2e/hosted/gke/p0/p0_provisioning_test.go:96

  Timeline >>
  "level"=0 "msg"="Using K8s version 1.30.7-gke.1136000 for cluster auto-gke-hp-ci-chmuw"
  [FAILED] in [BeforeEach] - /home/gh-runner/actions-runner/_work/hosted-providers-e2e/hosted-providers-e2e/hosted/gke/p0/p0_provisioning_test.go:85 @ 12/16/24 11:11:33.938
  << Timeline

  [FAILED] Expected
      <*errors.errorString | 0xc000aa8070>: 
      timeout waiting on condition
      {
          s: "timeout waiting on condition",
      }
  to be nil
  In [BeforeEach] at: /home/gh-runner/actions-runner/_work/hosted-providers-e2e/hosted-providers-e2e/hosted/gke/p0/p0_provisioning_test.go:85 @ 12/16/24 11:11:33.938
------------------------------

valaparthvi requested a review from thehejik December 17, 2024 06:25
thehejik (Collaborator) commented Dec 17, 2024

I thought this was somehow related to the existing GKE cluster, wasn't it?

If so, I guess it won't delete the cluster anyway, but the change should not harm anything.

thehejik (Collaborator) left a review comment:

LGTM

valaparthvi (Collaborator, Author)

If so, I guess it won't delete the cluster anyway, but the change should not harm anything.

It will delete the cluster when it fails or times out at WaitUntilClusterIsReady, which is exactly the issue we are facing.
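
In other words, the fix amounts to not letting a timed-out wait overwrite the shared cluster reference (or, equivalently, running the deletion regardless). One plausible shape of the corrected BeforeEach, reusing the hypothetical helpers from the sketch above:

```go
BeforeEach(func() {
	var err error
	cluster, err = provisionCluster("auto-gke-hp-ci")
	Expect(err).To(BeNil())

	// Capture the wait result in a temporary so a timeout cannot nil
	// out the shared reference; AfterEach can then still delete the
	// half-provisioned cluster.
	readyCluster, err := waitUntilClusterIsReady(cluster)
	Expect(err).To(BeNil())
	cluster = readyCluster
})
```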

valaparthvi reopened this Dec 18, 2024
valaparthvi merged commit df9e5e2 into main Dec 18, 2024
10 of 11 checks passed
valaparthvi deleted the fix-cluster-deletion branch December 18, 2024 08:03