Update eviction strategy to include priority #6949

Merged 1 commit on Jan 25, 2018
35 changes: 16 additions & 19 deletions docs/tasks/administer-cluster/out-of-resource.md
@@ -196,25 +196,22 @@ If `nodefs` filesystem has met eviction thresholds, `kubelet` frees up disk space

If the `kubelet` is unable to reclaim sufficient resource on the node, `kubelet` begins evicting Pods.

The `kubelet` ranks Pods for eviction first by their quality of service, and then by the consumption
of the starved compute resource relative to the Pods' scheduling requests.

As a result, `kubelet` ranks and evicts Pods in the following order:

* `BestEffort` Pods that consume the most of the starved resource are failed first.
Local disk is a `BestEffort` resource.
* `Burstable` Pods that consume the greatest amount of the starved resource
relative to their request for that resource are killed first. If no Pod
has exceeded its request, the strategy targets the largest consumer of the
starved resource.
* `Guaranteed` Pods are guaranteed only when requests and limits are specified
for all the containers and they are equal. A `Guaranteed` Pod is guaranteed to
never be evicted because of another Pod's resource consumption. If a system
daemon (such as `kubelet`, `docker`, or `journald`) is consuming more resources
than were reserved via `system-reserved` or `kube-reserved` allocations, and the
node only has `Guaranteed` Pods remaining, then the node must choose to evict a
`Guaranteed` Pod in order to preserve node stability and to limit the impact
of the unexpected consumption on other `Guaranteed` Pods.
The `kubelet` ranks Pods for eviction first by whether or not their usage of the starved resource exceeds requests,
then by [Priority](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/), and then by the consumption of the starved compute resource relative to the Pods' scheduling requests.

As a result, `kubelet` ranks and evicts Pods in the following order:

* `BestEffort` or `Burstable` Pods whose usage of the starved resource exceeds its request are evicted first.
Such Pods are ranked by Priority, and then by the amount by which their usage exceeds the request.
* `Guaranteed` Pods and `Burstable` Pods whose usage is below requests are evicted last.
`Guaranteed` Pods are guaranteed only when requests and limits are specified for all
the containers and they are equal. Such Pods are guaranteed never to be evicted because
of another Pod's resource consumption. If a system daemon (such as `kubelet`, `docker`,
or `journald`) is consuming more resources than were reserved via `system-reserved` or
`kube-reserved` allocations, and only `Guaranteed` or `Burstable` Pods using less than
their requests remain on the node, then the node must choose to evict such a Pod in order
to preserve node stability and to limit the impact of the unexpected consumption on other
Pods. In this case, it chooses to evict Pods of lowest Priority first.
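
For illustration, here is a minimal Go sketch of the ranking described above. It is not the kubelet's actual eviction code; the `podInfo` type, the `rankForEviction` function, and the sample values are made up for this example, which only models the three criteria: usage above request, then Priority, then the amount of usage above request.

```go
package main

import (
	"fmt"
	"sort"
)

// podInfo is a simplified, hypothetical stand-in for the data the kubelet
// considers when ranking Pods for eviction; it is not a real Kubernetes type.
type podInfo struct {
	name     string
	priority int32 // Pod Priority (higher = more important)
	usage    int64 // usage of the starved resource, e.g. bytes of local disk
	request  int64 // the Pod's scheduling request for that resource
}

// exceedsRequest reports whether the Pod's usage of the starved resource
// is above what it requested. A BestEffort Pod has a zero request, so any
// usage counts as exceeding it.
func exceedsRequest(p podInfo) bool { return p.usage > p.request }

// rankForEviction orders Pods so that the first element is evicted first:
//  1. Pods whose usage exceeds their request come before Pods within request.
//  2. Among those, lower Priority comes first.
//  3. Ties are broken by the amount of usage above request (largest first).
func rankForEviction(pods []podInfo) {
	sort.SliceStable(pods, func(i, j int) bool {
		pi, pj := pods[i], pods[j]
		if exceedsRequest(pi) != exceedsRequest(pj) {
			return exceedsRequest(pi) // over-request Pods are evicted first
		}
		if pi.priority != pj.priority {
			return pi.priority < pj.priority // lower Priority evicted first
		}
		return pi.usage-pi.request > pj.usage-pj.request // biggest overage first
	})
}

func main() {
	pods := []podInfo{
		{name: "besteffort-high-prio", priority: 1000, usage: 500, request: 0},
		{name: "burstable-over", priority: 0, usage: 800, request: 300},
		{name: "guaranteed", priority: 0, usage: 200, request: 400},
	}
	rankForEviction(pods)
	for _, p := range pods {
		fmt.Println(p.name)
	}
}
```

Running the sketch prints `burstable-over`, `besteffort-high-prio`, `guaranteed`: the over-request Pod with the lowest Priority is evicted first, and the Pod staying within its request is evicted last.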

If necessary, `kubelet` evicts Pods one at a time to reclaim disk when `DiskPressure`
is encountered. If the `kubelet` is responding to `inode` starvation, it reclaims