fix(bandwidth_scheduler) - slightly increase base_bandwidth #12719

Merged
merged 1 commit into master from jan_basejump
Jan 11, 2025

Conversation

jancionear
Contributor

@jancionear jancionear commented Jan 10, 2025

Bandwidth scheduler always grants base_bandwidth on all links. base_bandwidth has to be low enough that, after it is granted on every link, the scheduler can still grant enough bandwidth to send a max size receipt on one link without going over the max_shard_bandwidth limit.
Previously I described this requirement as:

num_shards * base_bandwidth + max_receipt_size <= max_shard_bandwidth

But this is actually incorrect. The inequality assumed that base_bandwidth is granted on every link, and then max_receipt_size of bandwidth is granted on top of that. This is not what actually happens; the bandwidth scheduler is smarter than that. It'll notice that one link already has base_bandwidth granted on it and increase the grant by max_receipt_size - base_bandwidth, not by the full max_receipt_size.
This means that the proper inequality is:

(num_shards - 1) * base_bandwidth + max_receipt_size <= max_shard_bandwidth

An example might be helpful.
Let's say that there are 3 shards and shard 0 wants to send out a max size receipt.
At first there are no grants on any of the links coming out of shard 0:

0->0: 0
0->1: 0
0->2: 0

Then the bandwidth scheduler grants base_bandwidth on all links:

0->0: base_bandwidth
0->1: base_bandwidth
0->2: base_bandwidth

Now the bandwidth scheduler processes a bandwidth request to send a max size receipt on link 0->0. The previous inequality assumed that the final grants would be:

0->0: base_bandwidth + max_receipt_size
0->1: base_bandwidth
0->2: base_bandwidth

And shard 0 can't send out more than max_shard_bandwidth, so it would have to hold that 3 * base_bandwidth + max_receipt_size <= max_shard_bandwidth.

But in reality the final grants look like this:

0->0: max_receipt_size
0->1: base_bandwidth
0->2: base_bandwidth

So it only needs to hold that 2 * base_bandwidth + max_receipt_size <= max_shard_bandwidth.
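
Here is a minimal Rust sketch of the three-shard example above. The constants are illustrative assumptions, not the real protocol parameters, and this is not the actual scheduler code:

```rust
// Sketch of the three-shard example; constants are illustrative assumptions,
// not the real nearcore protocol parameters.
fn main() {
    let num_shards: usize = 3;
    let max_shard_bandwidth: u64 = 4_500_000;
    let max_receipt_size: u64 = 4_194_304;
    // base_bandwidth chosen according to the corrected inequality
    // (assumes num_shards > 1).
    let base_bandwidth = (max_shard_bandwidth - max_receipt_size) / (num_shards as u64 - 1);

    // Step 1: grant base_bandwidth on every link going out of shard 0.
    let mut grants = vec![base_bandwidth; num_shards];

    // Step 2: process the request to send a max size receipt on link 0->0.
    // The existing base grant is topped up by (max_receipt_size - base_bandwidth),
    // so the final grant on that link is exactly max_receipt_size.
    grants[0] += max_receipt_size - base_bandwidth;
    assert_eq!(grants[0], max_receipt_size);

    // Total outgoing bandwidth of shard 0:
    // (num_shards - 1) * base_bandwidth + max_receipt_size <= max_shard_bandwidth
    let total: u64 = grants.iter().sum();
    assert!(total <= max_shard_bandwidth);
    println!("grants: {grants:?}, total: {total}");
}
```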

This means that we can set base_bandwidth to (max_shard_bandwidth - max_receipt_size) / (num_shards - 1), which gives a slightly larger value than before, allowing more receipts to be sent without having to make a bandwidth request.
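
As a rough sketch of the change, the old and new formulas can be contrasted like this (hypothetical helper functions with illustrative values, not the actual nearcore code):

```rust
// Hypothetical helpers contrasting the old and new base_bandwidth formulas;
// values below are illustrative, not the real protocol constants.

/// Old, overly conservative: num_shards * base + max_receipt_size <= max_shard_bandwidth
fn base_bandwidth_old(max_shard_bandwidth: u64, max_receipt_size: u64, num_shards: u64) -> u64 {
    (max_shard_bandwidth - max_receipt_size) / num_shards
}

/// New: (num_shards - 1) * base + max_receipt_size <= max_shard_bandwidth
/// (assumes num_shards > 1)
fn base_bandwidth_new(max_shard_bandwidth: u64, max_receipt_size: u64, num_shards: u64) -> u64 {
    (max_shard_bandwidth - max_receipt_size) / (num_shards - 1)
}

fn main() {
    let (max_shard_bandwidth, max_receipt_size, num_shards) = (4_500_000u64, 4_194_304u64, 6u64);
    let old = base_bandwidth_old(max_shard_bandwidth, max_receipt_size, num_shards);
    let new = base_bandwidth_new(max_shard_bandwidth, max_receipt_size, num_shards);
    assert!(new > old); // the new formula yields a slightly larger base_bandwidth
    println!("old: {old}, new: {new}");
}
```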

@jancionear jancionear requested a review from a team as a code owner January 10, 2025 21:34

codecov bot commented Jan 10, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 70.65%. Comparing base (1885afa) to head (6dbcde3).
Report is 4 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master   #12719      +/-   ##
==========================================
- Coverage   70.65%   70.65%   -0.01%     
==========================================
  Files         848      848              
  Lines      173788   173788              
  Branches   173788   173788              
==========================================
- Hits       122791   122789       -2     
- Misses      45869    45871       +2     
  Partials     5128     5128              
Flag                      Coverage Δ
backward-compatibility    0.16% <0.00%> (ø)
db-migration              0.16% <0.00%> (ø)
genesis-check             1.36% <0.00%> (ø)
linux                     69.21% <100.00%> (-0.01%) ⬇️
linux-nightly             70.26% <100.00%> (+<0.01%) ⬆️
pytests                   1.66% <0.00%> (ø)
sanity-checks             1.47% <0.00%> (ø)
unittests                 70.48% <100.00%> (-0.01%) ⬇️
upgradability             0.20% <0.00%> (ø)

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

@jancionear jancionear added this pull request to the merge queue Jan 11, 2025
Merged via the queue into master with commit 1dd1b0a Jan 11, 2025
28 checks passed
@jancionear jancionear deleted the jan_basejump branch January 11, 2025 07:22