Update eviction strategy to include priority (#6949)
dashpole authored and k8s-ci-robot committed Jan 25, 2018
1 parent e1dbc1a commit de7b93a
Showing 1 changed file with 16 additions and 19 deletions: docs/tasks/administer-cluster/out-of-resource.md
```diff
@@ -196,25 +196,22 @@ If `nodefs` filesystem has met eviction thresholds, `kubelet` frees up disk space
 
 If the `kubelet` is unable to reclaim sufficient resource on the node, `kubelet` begins evicting Pods.
 
-The `kubelet` ranks Pods for eviction first by their quality of service, and then by the consumption
-of the starved compute resource relative to the Pods' scheduling requests.
-
-As a result, `kubectl` ranks and evicts Pods in the following order:
-
-* `BestEffort` Pods consume the most of the starved resource are failed first.
-Local disk is a `BestEffort` resource.
-* `Burstable` Pods consume the greatest amount of the starved resource
-relative to their request for that resource are killed first. If no Pod
-has exceeded its request, the strategy targets the largest consumer of the
-starved resource.
-* `Guaranteed` Pods are guaranteed only when requests and limits are specified
-for all the containers and they are equal. A `Guaranteed` Pod is guaranteed to
-never be evicted because of another Pod's resource consumption. If a system
-daemon (such as `kubelet`, `docker`, and `journald`) is consuming more resources
-than were reserved via `system-reserved` or `kube-reserved` allocations, and the
-node only has `Guaranteed` Pods remaining, then the node must choose to evict a
-`Guaranteed` Pod in order to preserve node stability and to limit the impact
-of the unexpected consumption to other `Guaranteed` Pods.
+The `kubelet` ranks Pods for eviction first by whether or not their usage of the starved resource exceeds requests,
+then by [Priority](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/), and then by the consumption of the starved compute resource relative to the Pods' scheduling requests.
+
+As a result, `kubelet` ranks and evicts Pods in the following order:
+
+* `BestEffort` or `Burstable` Pods whose usage of a starved resource exceeds its request.
+Such pods are ranked by Priority, and then usage above request.
+* `Guaranteed` pods and `Burstable` pods whose usage is beneath requests are evicted last.
+`Guaranteed` Pods are guaranteed only when requests and limits are specified for all
+the containers and they are equal. Such pods are guaranteed to never be evicted because
+of another Pod's resource consumption. If a system daemon (such as `kubelet`, `docker`,
+and `journald`) is consuming more resources than were reserved via `system-reserved` or
+`kube-reserved` allocations, and the node only has `Guaranteed` or `Burstable` Pods using
+less than requests remaining, then the node must choose to evict such a Pod in order to
+preserve node stability and to limit the impact of the unexpected consumption to other Pods.
+In this case, it will choose to evict pods of Lowest Priority first.
 
 If necessary, `kubelet` evicts Pods one at a time to reclaim disk when `DiskPressure`
 is encountered. If the `kubelet` is responding to `inode` starvation, it reclaims
```
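The new three-step ranking introduced by this commit (usage over request first, then Priority, then usage above request) can be sketched as a sort comparator. This is an illustrative sketch only, not the kubelet's actual implementation: `podStats` and `rankForEviction` are hypothetical names, and the resource accounting is reduced to plain integers.

```go
package main

import (
	"fmt"
	"sort"
)

// podStats is a hypothetical, simplified stand-in for the kubelet's
// per-Pod accounting of a single starved resource.
type podStats struct {
	Name     string
	Priority int32 // Pod Priority; lower-priority Pods are evicted first
	Usage    int64 // current consumption of the starved resource
	Request  int64 // the Pod's scheduling request for that resource
}

// rankForEviction orders pods so the first element is evicted first,
// mirroring the strategy in the diff above:
//  1. Pods whose usage exceeds their request come before those beneath it.
//  2. Among those, lower Priority evicts first.
//  3. Ties break on the largest usage above request.
func rankForEviction(pods []podStats) {
	sort.SliceStable(pods, func(i, j int) bool {
		a, b := pods[i], pods[j]
		aOver, bOver := a.Usage > a.Request, b.Usage > b.Request
		if aOver != bOver {
			return aOver // exceeding the request ranks ahead
		}
		if a.Priority != b.Priority {
			return a.Priority < b.Priority // lower Priority evicts first
		}
		return a.Usage-a.Request > b.Usage-b.Request // largest overage first
	})
}

func main() {
	pods := []podStats{
		{"guaranteed", 100, 500, 500},     // usage == request: evicted last
		{"burstable-over", 100, 800, 500}, // over request, higher Priority
		{"besteffort", 0, 300, 0},         // over request, lowest Priority
	}
	rankForEviction(pods)
	for _, p := range pods {
		fmt.Println(p.Name)
	}
	// Prints: besteffort, burstable-over, guaranteed
}
```

Note that under this strategy a `Guaranteed` Pod (usage never above its request under normal operation) always sorts behind any Pod exceeding its request, regardless of Priority, which matches the prose in the added lines.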
