pod annotation bandwidth doesn't match actual transfer limit #2280

Closed
ceastman-r7 opened this issue Feb 22, 2023 · 3 comments

@ceastman-r7

What happened:
I set the following pod annotation:
kubernetes.io/ingress-bandwidth: 1M
kubernetes.io/egress-bandwidth: 1M

but when I run wget to fetch a remote file, the transfer rate is not limited to 1Mbps; it appears to be roughly a tenth of that, ~117KB/s.

If I change the annotation to 10M, the transfer rate appears to be ~1.14MB/s.
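
For reference, a minimal pod manifest carrying these annotations might look like the sketch below (pod, container, and image names here are illustrative, not taken from the actual deployment):

apiVersion: v1
kind: Pod
metadata:
  name: bandwidth-test                     # illustrative name
  annotations:
    kubernetes.io/ingress-bandwidth: 1M    # per the maintainer reply below, units are bits/s
    kubernetes.io/egress-bandwidth: 1M
spec:
  containers:
    - name: main                           # illustrative
      image: ubuntu:20.04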

Attach logs
laptop:
Wed Feb 22 12:15:21 CST 2023 - 1.31G 10.6MB/s in 2m 18s
Wed Feb 22 12:21:56 CST 2023 - 1.31G 9.79MB/s in 2m 35s
Wed Feb 22 12:27:21 CST 2023 - 1.31G 10.4MB/s in 2m 17s

istio-maintenance-deployment - no limit
Wed Feb 22 18:17:57 UTC 2023 - 1.31G 34.8MB/s in 40s

istio-maintenance-deployment - 1M limit
Wed Feb 22 18:21:55 UTC 2023 - 49.74M 117KB/s eta 1h 58m

istio-maintenance-deployment - 10M limit
Wed Feb 22 18:27:26 UTC 2023 - 1.31G 1.14MB/s in 17m 5s

What you expected to happen:
I expect that if the bandwidth is set to 1M, the transfer rate should be close to 1Mbps.

How to reproduce it (as minimally and precisely as possible):

date ; wget https://releases.ubuntu.com/20.04.5/ubuntu-20.04.5-live-server-amd64.iso
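
One way to double-check the limit the bandwidth plugin actually programmed could be to inspect the traffic-shaping qdisc from the node; the interface name below is a hypothetical placeholder (it is the pod's host-side veth and varies per pod):

tc qdisc show dev <vethXXXX>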

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.17", GitCommit:"a7736eaf34d823d7652415337ac0ad06db9167fc", GitTreeState:"clean", BuildDate:"2022-12-08T11:41:04Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"darwin/amd64"}

Server Version: version.Info{Major:"1", Minor:"24+", GitVersion:"v1.24.8-eks-ffeb93d", GitCommit:"abb98ec0631dfe573ec5eae40dc48fd8f2017424", GitTreeState:"clean", BuildDate:"2022-11-29T18:45:03Z", GoVersion:"go1.18.8", Compiler:"gc", Platform:"linux/amd64"}

  • CNI Version: Amazon VPC CNI v1.12.0-eksbuild.1

  • OS (e.g. cat /etc/os-release):
    VERSION="20.04.5 LTS (Focal Fossa)"

  • Kernel (e.g. uname -a):
    Linux istio-maintenance-deployment-75c678f565-9fqgr 5.4.226-129.415.amzn2.x86_64 #1 SMP Fri Dec 9 12:54:21 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

@jdn5126
Contributor

jdn5126 commented Feb 23, 2023

@ceastman-r7 as I understand it, the ingress and egress bandwidth units are in "bits per second", not "bytes per second". So if you multiply the values you pasted by 8, they are pretty close to the enforced limits. The goal for the CNI bandwidth plugin is for the average ingress/egress bandwidth to be around the specified limit and to prevent bursts much greater than the set limit.
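
Doing that conversion on the numbers pasted above:

117 KB/s × 8 ≈ 0.94 Mbit/s (vs. the 1M annotation)
1.14 MB/s × 8 ≈ 9.12 Mbit/s (vs. the 10M annotation)

Both observed rates sit close to the configured limits once the values are read as bits.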

@jdn5126 added the question label and removed the bug label Feb 23, 2023
@ceastman-r7
Author

Ah, little b vs. big B. That would explain the discrepancy.

@github-actions

⚠️COMMENT VISIBILITY WARNING⚠️

Comments on closed issues are hard for our team to see.
If you need more assistance, please open a new issue that references this one.
If you wish to keep having a conversation with other community members under this issue, feel free to do so.
