
Pods marked as Not Ready and with Insufficient Storage after scaling #386

Closed · koalalorenzo opened this issue May 24, 2018 · 4 comments

@koalalorenzo

After scaling up an AKS cluster (version 1.9.6, created on the 11th of April 2018) we found the following problems:

  • The new nodes are marked as "Not Ready", seemingly due to something related to the CIDR
  • The pods on the existing nodes get restarted every ~1 hour
  • The existing nodes get marked as Insufficient Storage.
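
The node conditions can still be inspected from the master while it is reachable. A minimal check, assuming kubectl is configured against the affected cluster and <node-name> stands in for one of the new nodes:

```shell
# List all nodes and their Ready/NotReady status
kubectl get nodes

# Print the pod CIDR assigned to each node (empty if allocation failed)
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'

# Inspect the conditions (Ready, DiskPressure, ...) and recorded events
# for one affected node; <node-name> is a placeholder
kubectl describe node <node-name>
```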

Our VM size is Standard_D8s_v3. This cluster was initially created with Terraform, but the scaling was triggered via the az aks scale command.
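
For reference, the scale operation looked roughly like the one below; the resource group and cluster names are placeholders, not the real ones from our setup:

```shell
# Scale the AKS cluster to the desired node count; resource group and
# cluster name are placeholders
az aks scale --resource-group my-resource-group --name my-aks-cluster --node-count 5
```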

After a day this cluster got stuck and none of the deployments were running anymore. We kept the cluster in case we need to debug what happened more deeply. The nodes are now unreachable, but the master is still working.
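
Since the master still responds, recent cluster events can at least be pulled for debugging. A sketch, assuming kubectl still holds valid credentials for the cluster:

```shell
# Dump events across all namespaces, oldest first, to look for CIDR
# allocation or disk-space failures around the time the nodes went NotReady
kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp
```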

@koalalorenzo (Author)

Probably related to #274 and #102

@sauryadas (Contributor)

@JunSun17 any idea?

@JunSun17

@koalalorenzo @sauryadas I think this "Node not ready" state is a known issue. Can you create a support ticket so the on-call engineers can take a look?

@jnoller (Contributor) commented Apr 3, 2019

Closing as stale.

@jnoller closed this as completed Apr 3, 2019
@ghost locked as resolved and limited conversation to collaborators Aug 7, 2020