[Dynamic buffer calc] Bug fix: Remove PGs from an administratively down port. #1652
Conversation
Bug: The buffermgrd can keep appending a suffix to the buffer pool reference if the buffer pool isn't found when it is referenced. In most cases this is caused by incorrect configuration. Cause: In handleBufferProfileTable, the value of each field is designated by an (lvalue) reference to the object in the tuple, which means the object in the tuple is changed whenever the value is changed in the function. Fix: Pass the value of each field by value instead of by reference. Signed-off-by: Stephen Sun <stephens@nvidia.com>
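A minimal sketch of the reference-vs-value bug described above. The names (`FieldValueTuple`, `resolveReference*`) are illustrative only, not the actual sonic-swss code: the point is that binding a field's value by lvalue reference and then appending a suffix mutates the stored tuple, so on every retry the suffix is appended again.

```cpp
#include <string>
#include <utility>

// Hypothetical stand-in for a (field, value) pair stored in a table entry.
using FieldValueTuple = std::pair<std::string, std::string>;

// Buggy pattern: the reference points into the tuple, so the append
// leaks back into the stored field value and accumulates across calls.
void resolveReferenceBuggy(FieldValueTuple &fv)
{
    std::string &value = fv.second; // lvalue reference into the tuple
    value += "_suffix";             // mutates the stored field!
}

// Fixed pattern: copy the value first, leaving the tuple untouched.
std::string resolveReferenceFixed(const FieldValueTuple &fv)
{
    std::string value = fv.second;  // pass/copy by value
    value += "_suffix";
    return value;
}
```

Calling the buggy version twice yields `..._suffix_suffix` in the stored tuple, while the fixed version never modifies it.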
- Introduce a new state, `PORT_ADMIN_DOWN`, which indicates the port is administratively down.
- Remove all PGs when the port is shut down and re-add all configured PGs when the port is started up.
- Only record the new value but don't touch `BUFFER_PG_TABLE` if the following events occur while a port is administratively down:
  - a port's MTU, speed, or cable length is updated
  - a new PG is added to a port or an existing PG is removed from a port
- Optimize the port event handling flow, since `refreshPriorityGroupsForPort` should be called only once when more than one field is updated.
- Optimize the Lua plugin which calculates the buffer pool size accordingly.

Signed-off-by: Stephen Sun <stephens@nvidia.com>
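The admin-state handling above can be sketched as a small state machine. This is a hedged illustration under assumed names (`PortState`, `PortContext`, the `on*` handlers are hypothetical, not buffermgrd's actual API); it shows the two key behaviors: field updates are recorded but deferred while the port is admin down, and PGs are removed/re-added on shutdown/startup.

```cpp
// Illustrative sketch of the PORT_ADMIN_DOWN handling; all names are
// hypothetical, not the actual buffermgrd code.
enum class PortState { INITIALIZING, READY, ADMIN_DOWN };

struct PortContext {
    PortState state = PortState::INITIALIZING;
    bool pgsApplied = false;     // whether PGs currently exist in BUFFER_PG_TABLE
    bool refreshPending = false; // coalesces multiple field updates into one refresh
};

// MTU/speed/cable-length changed: record only; refresh just once, and
// only when the port is not administratively down.
void onPortFieldUpdate(PortContext &port)
{
    if (port.state == PortState::ADMIN_DOWN)
        return;                  // new value recorded elsewhere; no table update
    port.refreshPending = true;  // refreshPriorityGroupsForPort runs once later
}

void onAdminDown(PortContext &port)
{
    port.state = PortState::ADMIN_DOWN;
    port.pgsApplied = false;     // remove all PGs from BUFFER_PG_TABLE
}

void onAdminUp(PortContext &port)
{
    port.state = PortState::READY;
    port.pgsApplied = true;      // re-add all configured PGs
    port.refreshPending = true;  // recompute once with the latest recorded values
}
```

The `refreshPending` flag is what lets several field updates in one batch trigger only a single refresh, matching the "called only once" optimization in the description.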
Force-pushed from f5bb63c to 6566369.
Identify the case that the referenced profile doesn't exist and exit the plugin gracefully. Signed-off-by: Stephen Sun <stephens@nvidia.com>
/AzurePipelines run
Azure Pipelines successfully started running 1 pipeline(s).
The VS test failed on the following cases. Can anyone help to check them?
/AzurePipelines run
Azure Pipelines successfully started running 1 pipeline(s).
@neethajohn - Can you please approve and merge? It is also required for 202012. Thanks.
@neethajohn - Kind reminder. Thanks.
Signed-off-by: Stephen Sun <stephens@nvidia.com>
/AzurePipelines run
Commenter does not have sufficient privileges for PR 1652 in repo Azure/sonic-swss
Signed-off-by: Stephen Sun <stephens@nvidia.com>
@daall - Can you please merge to 202012? Thanks.
We are waiting for #1630 to be merged first.
…wn port. (#1652) Remove PGs from an administratively down port.
- Introduce a new state, PORT_ADMIN_DOWN, which indicates the port is administratively down.
- Remove all PGs when the port is shut down and re-add all configured PGs when the port is started up.
- Only record the new value but don't touch BUFFER_PG_TABLE if the following events occur while a port is administratively down: a port's MTU, speed, or cable length is updated; a new PG is added to a port or an existing PG is removed from a port.
- Optimize the port event handling flow, since refreshPriorityGroupsForPort should be called only once when more than one field is updated.
- Optimize the Lua plugin which calculates the buffer pool size accordingly.

Signed-off-by: Stephen Sun stephens@nvidia.com
How I verified it: Run regression and vs test.
How come this bug fix is so large? Is this really a bug fix? @daall, let's not merge this into 202012 for now.
@stephenxs, I am reverting this merge. Please break this PR into multiple PRs of smaller commits: optimizations, port down handling, renaming func/log messages, etc.
OK. Will do. |
…ively down port. (sonic-net#1652)" (sonic-net#1676) This reverts commit 908e0c6.
What I did
Bug fixes: Remove PGs from an administratively down port.
Signed-off-by: Stephen Sun stephens@nvidia.com
Why I did it
To fix the bugs described above.
How I verified it
Run regression and vs test
Which release branch to backport (provide reason below if selected)
Details if related
- `PORT_ADMIN_DOWN`, which indicates the port is administratively down.
- Don't touch `BUFFER_PG_TABLE` if the following events occur while a port is administratively down.
- `refreshPriorityGroupsForPort` should be called only once when more than one field is updated.