After scaling up an AKS cluster (version 1.9.6, created on 11 April 2018) we found the following problems:

- The new nodes are marked "NotReady"; it appears to be related to pod CIDR allocation.
- The pods on the existing nodes get restarted every ~1 hour.
- The existing nodes get marked with an "Insufficient Storage" condition.
Our VM size is Standard_D8s_V3. The cluster was initially created with Terraform, but the scaling was performed via the `az aks scale` command.
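For reference, this is roughly how the scale-up was invoked and how we checked the result afterwards (resource group, cluster name, and node count below are placeholders, not our actual values):

```shell
# Scale the node pool; --resource-group / --name are hypothetical examples.
az aks scale \
  --resource-group my-rg \
  --name my-aks-cluster \
  --node-count 5

# After scaling, verify that the new nodes register and become Ready.
kubectl get nodes -o wide
```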
After a day the cluster got stuck and none of the deployments were running anymore. We kept the cluster in case we need to debug what happened more deeply. The nodes are now unreachable, but the master is still working.
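To help narrow down the CIDR-related "NotReady" symptom, a sketch of the diagnostics we can still run against the master (`<node-name>` is a placeholder):

```shell
# Show the podCIDR assigned to each node; an empty value suggests
# CIDR allocation failed, which would keep the node NotReady.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'

# Inspect node conditions (Ready, OutOfDisk, etc.) and recent events
# for one of the affected nodes.
kubectl describe node <node-name>
```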