
Bug 1850057: Use bfq scheduler on control plane, idle I/O for rpm-ostreed #1957

Merged
merged 2 commits into openshift:master from rpm-ostree-nice
Aug 26, 2020

Conversation

cgwalters
Member

@cgwalters cgwalters commented Jul 29, 2020

Part of solving #1897
A lot more details in https://hackmd.io/WeqiDWMAQP2sNtuPRul9QA

The TL;DR is that the bfq I/O scheduler better respects I/O priorities,
and also does a better job of handling latency-sensitive processes
like etcd versus bulk/background I/O.
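For readers who want the mechanics, here is a minimal sketch of the two knobs being discussed; it is not the PR's actual templates, and the device path, drop-in file name, and Nice value are illustrative assumptions.

```sh
# Switch a block device's I/O scheduler to bfq at runtime
# (the PR does this via MCO-managed config on control plane nodes; sda is a placeholder):
echo bfq > /sys/block/sda/queue/scheduler

# Run rpm-ostreed with background-friendly CPU and I/O priorities via a systemd drop-in.
# Nice= and IOSchedulingClass= are real systemd directives; the values are illustrative.
mkdir -p /etc/systemd/system/rpm-ostreed.service.d
cat > /etc/systemd/system/rpm-ostreed.service.d/10-io-priority.conf <<'EOF'
[Service]
Nice=10
IOSchedulingClass=idle
EOF
systemctl daemon-reload
```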

@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jul 29, 2020
@cgwalters
Member Author

xref coreos/rpm-ostree#2164

@cgwalters cgwalters changed the title templates: Nice+IOSchedulingPriority for rpm-ostree on control plane Use bfq scheduler on control plane, idle I/O for rpm-ostreed Jul 30, 2020
@cgwalters cgwalters changed the title Use bfq scheduler on control plane, idle I/O for rpm-ostreed Bug 1852047: Use bfq scheduler on control plane, idle I/O for rpm-ostreed Aug 3, 2020
@openshift-ci-robot openshift-ci-robot added the bugzilla/severity-high Referenced Bugzilla bug's severity is high for the branch this PR is targeting. label Aug 3, 2020
@openshift-ci-robot
Contributor

@cgwalters: This pull request references Bugzilla bug 1852047, which is valid. The bug has been updated to refer to the pull request using the external bug tracker.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target release (4.6.0) matches configured target release for branch (4.6.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST, POST)

In response to this:

Bug 1852047: Use bfq scheduler on control plane, idle I/O for rpm-ostreed

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci-robot openshift-ci-robot added the bugzilla/valid-bug Indicates that a referenced Bugzilla bug is valid for the branch this PR is targeting. label Aug 3, 2020
@sdodson
Member

sdodson commented Aug 4, 2020

/cc @fabianofranz @hexfusion
FYI, I know you'd done extensive investigation into I/O schedulers and their effects on etcd performance

@cgwalters
Member Author

I rolled in #1962 to this PR because I really want to be able to see the effect of the change by looking at events.

@cgwalters cgwalters force-pushed the rpm-ostree-nice branch 2 times, most recently from 566f5bc to d6e210c on August 7, 2020 at 18:17
@cgwalters
Member Author

Rebased 🏄‍♂️

@cgwalters
Member Author

OK, looking at some Prometheus queries mentioned in #1897, this does seem to help. I see lower p99 and p999 spikes in etcd fsync latency, as expected. Staging an update took maybe 10s before and now takes 30s or more, but that's obviously totally fine.

Before / After (screenshots)
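(For anyone who wants to reproduce this comparison, a hedged sketch of querying the in-cluster Prometheus for p99 etcd WAL fsync latency; the exact queries used for the screenshots aren't recorded here, so this is an illustrative equivalent.)

```sh
# Illustrative: query the cluster's Prometheus for p99 etcd WAL fsync latency.
PROM=$(oc -n openshift-monitoring get route prometheus-k8s -o jsonpath='{.spec.host}')
TOKEN=$(oc whoami -t)
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  "https://${PROM}/api/v1/query" \
  --data-urlencode 'query=histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket[5m]))'
```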

I've mostly done my investigation with the larger PR that also prefers etcd followers, but I haven't dug very far into how much of a difference that part makes. I lean towards getting this one in relatively soon and getting more data from all of the PR jobs and periodics.

@cgwalters
Member Author

cgwalters commented Aug 7, 2020

Though, most test runs won't produce interesting data around this until we change the default e2e-gcp-upgrade test to synthesize a nontrivial OS update.

@hexfusion
Contributor

I did a very basic test of bfq that was inconclusive. One thing about ionice: I was under the impression it was only honored by the CFQ scheduler, per the manpage. But maybe I am missing something.

cc @cgwalters

NOTES
Linux supports I/O scheduling priorities and classes since 2.6.13 with the CFQ I/O
scheduler.

@ironcladlou
Contributor

@cgwalters's testing seems to be making the claim that with --per-object-fsync (ostreedev/ostree#2152) the priorities are being respected with bfq — is that an accurate statement? It doesn't make sense to me why that would be true, and I'd like to understand.

@cgwalters
Member Author

> I did a very basic test of bfq that was inconclusive. One thing about ionice: I was under the impression it was only honored by the CFQ scheduler, per the manpage. But maybe I am missing something.

The man page is out of date; see the BFQ docs, which talk about using this, and see also the actual code: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/block/bfq-iosched.c#n4988
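(A quick, illustrative way to confirm which scheduler is active and to exercise I/O priorities under bfq; the device and file paths are placeholders, not something from this PR.)

```sh
# The bracketed entry is the active scheduler for the device:
cat /sys/block/nvme0n1/queue/scheduler   # e.g. "mq-deadline [bfq] none"

# Start a bulk writer in the idle I/O class, then measure fsync latency alongside it:
ionice -c 3 dd if=/dev/zero of=/var/tmp/bulk.img bs=1M count=2048 oflag=direct &
fio --name=fsync-probe --rw=write --bs=4k --size=64M --fsync=1 --filename=/var/tmp/probe.img
```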

@cgwalters
Member Author

Previously I'd been trying to add information directly in the PRs, but I found I really wanted somewhere to write scripts and track nontrivial data.

I ended up creating a "workboard" repo which has a subdirectory for this: https://github.com/cgwalters/workboard/tree/master/openshift/bz1850057-etcd-osupdate

I'm increasingly confident that this is a good fix: comparing some metrics before and after shows that we are seeing much better p999 latency, for example. Please see the bottom half of this file: https://github.com/cgwalters/workboard/blob/master/openshift/bz1850057-etcd-osupdate/bz185007.md
for links to release images that you can use for your own tests, as well as Prow jobs that have already run which you can inspect.

@runcom
Member

runcom commented Aug 25, 2020

@sinnykumari @yuqi-zhang @kikisdeliveryservice @ericavonb PTAL (and the linked internal Slack thread)

Part of solving openshift#1897
A lot more details in https://hackmd.io/WeqiDWMAQP2sNtuPRul9QA

The TL;DR is that the `bfq` I/O scheduler better respects I/O priorities,
and also does a better job of handling latency-sensitive processes
like `etcd` versus bulk/background I/O.
We switched rpm-ostree to do this when applying updates, but
it also makes sense to do so when extracting the oscontainer.

Part of: openshift#1897, which is about staging OS updates more nicely when etcd is running.
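(The second commit message mentions extracting the oscontainer under the same policy; here is an illustrative sketch of that idea, wrapping a bulk image pull in the idle I/O class and a positive nice level. The image reference and authfile path are placeholders, not what the MCD actually runs.)

```sh
# Illustrative only: run the bulk oscontainer extraction with background priorities
# so it doesn't compete with etcd for I/O.
ionice -c 3 nice -n 10 \
  podman pull --authfile /var/lib/kubelet/config.json "${OSCONTAINER_IMAGE}"
```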
@cgwalters
Member Author

OK rebased this, dropping out #1962 since it's not strictly necessary even though I'd really like to get that one in too.

I think we have good enough results to ship this!

@cgwalters cgwalters changed the title Bug 1852047: Use bfq scheduler on control plane, idle I/O for rpm-ostreed Bug 1850057: Use bfq scheduler on control plane, idle I/O for rpm-ostreed Aug 25, 2020
@openshift-ci-robot
Contributor

@cgwalters: This pull request references Bugzilla bug 1850057, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target release (4.6.0) matches configured target release for branch (4.6.0)
  • bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST, POST)

In response to this:

Bug 1850057: Use bfq scheduler on control plane, idle I/O for rpm-ostreed

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@cgwalters
Member Author

We seem to have accidentally used a BZ clone originally; switching to the main one.

@cgwalters
Member Author

https://github.com/cgwalters/workboard/tree/master/openshift/bz1850057-etcd-osupdate is now updated with some screenshots as well as links to more Prow jobs you can use for your own inspection (and to launch clusters). For example:

@cgwalters
Member Author

What's interesting is that the time between OSUpdateStarted and OSUpdateStaged is still about 30s in both the before and after cases; we're not making things slower. So the problem seems to be more about the pattern of how we were invoking fsync before, which caused a latency spike.

(That said I do think the effect here could be much more pronounced in clusters that are running a real workload too)
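(If you want to check this timing yourself, here is an illustrative way to pull the OS update events from a cluster; the OSUpdateStarted/OSUpdateStaged reasons come from the companion events change discussed above, and the exact fields may differ.)

```sh
# Illustrative: list OS update events across the cluster, oldest first.
oc get events -A --field-selector reason=OSUpdateStarted --sort-by=.lastTimestamp
oc get events -A --field-selector reason=OSUpdateStaged --sort-by=.lastTimestamp
```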

@ashcrow
Member

ashcrow commented Aug 25, 2020

/test e2e-gcp-upgrade

@sdodson
Member

sdodson commented Aug 25, 2020

> (That said I do think the effect here could be much more pronounced in clusters that are running a real workload too)

If we feel that's critical we could work with the perf-scale team to replicate their 250-node, 4000-namespace, $bignum-pod workload upgrade tests, where we could readily reproduce this problem. Level of effort for that is measured in days though.

// updateOstreeObjectSync enables "per-object-fsync" which helps avoid
// latency spikes for etcd; see https://github.com/ostreedev/ostree/pull/2152
func updateOstreeObjectSync() error {
	if err := exec.Command("ostree", "--repo=/sysroot/ostree/repo", "config", "set", "core.per-object-fsync", "true").Run(); err != nil {
		return err
	}
	return nil
}
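(The review excerpt was truncated mid-function, so the closing lines above are a minimal reconstruction. For reference, the helper simply shells out to ostree; the equivalent manual invocation, taken directly from the snippet, is:)

```sh
ostree --repo=/sysroot/ostree/repo config set core.per-object-fsync true
```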
Contributor


question: do you think ionice or the above has the most dramatic effect on reducing latency?

Member Author


If you think it's important enough I can try to make new release images with just this change and not the bfq default. But to answer your question, both do matter, see:

https://github.com/cgwalters/workboard/blob/master/openshift/bz1850057-etcd-osupdate/ionice.md#concurrent-updates-with-none-scheduler

Contributor


I don't think it's important enough to change anything; I am just curious. You have done enough work :)

@cgwalters
Member Author

> If we feel that's critical we could work with the perf-scale team to replicate their 250-node, 4000-namespace, $bignum-pod workload upgrade tests, where we could readily reproduce this problem. Level of effort for that is measured in days though.

I don't think we actually need 250 real nodes, just a simulated workload hitting the apiserver/etcd, right?

@hexfusion
Contributor

hexfusion commented Aug 25, 2020

this is amazing work, thank you very much for the high level of effort and professionalism in seeing this through for 4.6.

LGTM!

@cgwalters
Member Author

All green here on the core jobs, just needs an lgtm.

// and other processes. See
// https://github.com/openshift/machine-config-operator/issues/1897
// Note this is the current systemd default in Fedora, but not RHEL8,
// except for NVMe devices.
Contributor


So why wouldn't we do this for all root devices on all nodes in the cluster?

Member Author


Mostly to limit the "blast radius" of this PR: changing only the control plane minimizes risk. But it does also mean more divergence.

It probably does make sense to do an across-the-board switch though...maybe as a followup to this? Or would you rather do it in one go?

Member Author

@cgwalters cgwalters Aug 25, 2020


The other aspect is that the OS update vs. etcd contention currently happens because we're updating the OS while etcd is still running, and that only matters on the control plane. For regular worker nodes that don't have static pods, everything will have been drained before we start the update, so there's inherently less need for bfq.

(Long term I think we really want cgroups v2 + a filesystem that honors I/O priorities; that's the only way to get real balance. bfq is mostly heuristics plus support for baseline I/O priorities.)


@zvonkok zvonkok Aug 26, 2020


@sinnykumari @runcom @cgwalters @smarterclayton I would suggest syncing up with the perf dept; there were some concerns regarding BFQ as the default scheduler in RHEL. There are several workloads where BFQ did not perform best and mq-deadline did better. There are probably more workloads that would need to be exercised.

Even though BFQ is great on the low end, past experience shows it doesn't do as well on the high end. It would need to be thoroughly tested if RHCOS is considering changing the default. See http://post-office.corp.redhat.com/archives/kscale-list/2019-September/msg00010.html


BFQ may work great for the control plane, where we're only running "one" workload (etcd), but on the worker nodes we have a variety of workloads that may or may not benefit from BFQ. It would also be interesting to see how the infra pods react to such a change.

Contributor


Thank you for raising the concern. This PR uses BFQ only for control plane nodes. We will keep this in mind and make sure to talk to the perf team if we plan to use BFQ for worker/custom pools.

Contributor

@sinnykumari sinnykumari left a comment


This is really amazing work; thanks, Colin, for doing such a deep analysis.
It looks good to me, and considering all the other positive feedback, let's get this in!

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Aug 26, 2020
@openshift-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: cgwalters, sinnykumari

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [cgwalters,sinnykumari]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.

2 similar comments
@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.

@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.

@openshift-ci-robot
Contributor

openshift-ci-robot commented Aug 26, 2020

@cgwalters: The following tests failed, say /retest to rerun all failed tests:

| Test name | Commit | Details | Rerun command |
| --- | --- | --- | --- |
| ci/prow/e2e-aws-scaleup-rhel7 | d6e210c0714a390fe22eee57a9361b7c38413acf | link | /test e2e-aws-scaleup-rhel7 |
| ci/prow/e2e-ovn-step-registry | 45b599e | link | /test e2e-ovn-step-registry |
| ci/prow/okd-e2e-aws | 45b599e | link | /test okd-e2e-aws |

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.

@openshift-merge-robot openshift-merge-robot merged commit 55110c2 into openshift:master Aug 26, 2020
@openshift-ci-robot
Contributor

@cgwalters: Some pull requests linked via external trackers have merged:

The following pull requests linked via external trackers have not merged:

These pull requests must merge or be unlinked from the Bugzilla bug in order for it to move to the next state.

Bugzilla bug 1850057 has not been moved to the MODIFIED state.

In response to this:

Bug 1850057: Use bfq scheduler on control plane, idle I/O for rpm-ostreed

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@fabianofranz
Member

+1 this is a big win, thanks @cgwalters!
