
Add VS test cases for dynamic port cfg #2047

Conversation

tomer-israel
Contributor

Signed-off-by: tomeri tomeri@nvidia.com

DEPENDS ON: Dynamic port configuration - add port buffer cfg to the port ref counter #2022

What I did
Added a VS test that checks the functionality of adding/removing ports dynamically at run time.
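The shape of the checks described above can be sketched with a minimal, self-contained stand-in. The real test drives the `dvs` virtual-switch fixture (its `StubDvs`, port names, and helper methods below are illustrative stand-ins, not the actual test's identifiers):

```python
# Hypothetical stub standing in for the dvs virtual-switch fixture,
# so the remove-then-recreate check pattern is runnable on its own.
class StubDvs:
    def __init__(self, ports):
        self.ports = set(ports)

    def remove_port(self, port):
        self.ports.discard(port)

    def create_port(self, port):
        self.ports.add(port)


all_ports = ["Ethernet0", "Ethernet4", "Ethernet8", "Ethernet12"]
dvs = StubDvs(all_ports)
num_of_ports = len(all_ports)

# remove a subset of ports, then verify the port count dropped accordingly
removed = ["Ethernet0", "Ethernet4"]
for p in removed:
    dvs.remove_port(p)
assert len(dvs.ports) == num_of_ports - len(removed)

# re-create the same ports and verify the count is back to the original
for p in removed:
    dvs.create_port(p)
assert len(dvs.ports) == num_of_ports
```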

Why I did it
To test the dynamic port add/remove functionality.

How I verified it
The test passed.

Details if related

Signed-off-by: tomeri <tomeri@nvidia.com>
@tomer-israel
Contributor Author

/azpw run

@mssonicbld
Collaborator

/AzurePipelines run

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@zhenggen-xu
Collaborator

Are you following the VS tests section in this doc: https://github.com/Azure/SONiC/pull/900/files ? If so, it seems we should add more test cases.

@tomer-israel
Contributor Author

Are you following the VS tests section in this doc: https://github.com/Azure/SONiC/pull/900/files ? If so, it seems we should add more test cases.

I will update this section in the HLD.
Not all the cases I mentioned in the HLD are implemented; some cases were problematic to add, such as the reboot test or error cases (it is hard to detect error messages in the log).

@dprital
Collaborator

dprital commented Dec 12, 2021

@zhenggen-xu - The comments on this PR were addressed. Can you please approve this PR?

assert len(num) == num_of_ports-len(ports)

# add port
time.sleep(i%3)
Collaborator

what is this waiting for?

Contributor Author

I added different timeouts between delete/create to catch potential race conditions that can lead to a system crash.
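The varied-delay idea described here can be sketched as a small helper. This is a hedged illustration of the pattern (the `port_flap_delays` name is hypothetical): by deriving the waits from the loop index (`i % 3` between delete and create, `(i % 2) + 1` after create, matching the quoted fragments), each iteration flaps the port with different timing, which can surface race conditions that a single fixed delay would always miss:

```python
def port_flap_delays(iterations):
    """Return the (post_delete_wait, post_create_wait) pairs, in seconds,
    that the test loop would pass to time.sleep() between removing and
    re-adding a port. Varying the waits shifts the delete/create timing
    on every iteration to probe for race conditions."""
    return [(i % 3, (i % 2) + 1) for i in range(iterations)]


schedule = port_flap_delays(4)
```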

Collaborator

Can you add this as a comment in the code?

Contributor Author

It is already explained on lines 9-10,
but maybe it's better to put it next to the time.sleep call.

Contributor Author

I added those comments before the sleep.

num_of_ports)
assert len(num) == num_of_ports

time.sleep((i%2)+1)
Collaborator

What is the wait here for, and why does it vary?

Contributor Author

tomer-israel commented Dec 14, 2021

Same as above:
different timeouts between delete/create can catch potential race conditions.

rc = dvs.servers[17].runcmd("ping -c 1 6.6.6.6")
assert rc == 0

dvs.remove_vlan_member("6", "Ethernet68")
Collaborator

Change the PORT status (like admin etc) to whatever before the test case?

Contributor Author

Changing the ports back to admin down was added.

Collaborator

Actually, we don't know whether the port was admin down or not; I think we should save the initial admin status and then restore it to whatever it was before the test case.

Contributor Author

I noticed that by default it is down.
I agree that the best thing to do is to save the initial admin status and change it back to that status at the end.
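The save-and-restore pattern agreed on here can be sketched as follows. This is a minimal stand-in, assuming hypothetical `get_admin_status`/`set_admin_status` helpers over an in-memory dict in place of the real dvs database accessors:

```python
# In-memory stand-in for the config DB; the real test reads/writes Redis
# through the dvs fixture. The default admin status is "down", as noted above.
config_db = {"Ethernet68": {"admin_status": "down"}}


def get_admin_status(port):
    return config_db[port]["admin_status"]


def set_admin_status(port, status):
    config_db[port]["admin_status"] = status


def run_test_with_restored_admin_status(port):
    saved = get_admin_status(port)       # save whatever the status was before
    try:
        set_admin_status(port, "up")     # the test body needs the port up
        # ... exercise the port here (ping check, VLAN membership, etc.) ...
    finally:
        set_admin_status(port, saved)    # restore the pre-test status


run_test_with_restored_admin_status("Ethernet68")
```

The `try/finally` ensures the initial status is restored even if an assertion inside the test body fails.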

Signed-off-by: tomeri <tomeri@nvidia.com>
@dprital
Collaborator

dprital commented Dec 17, 2021

@zhenggen-xu - Can you please review and approve?

@liat-grozovik
Collaborator

@zhenggen-xu could you please help to review?

assert len(num) == num_of_ports-len(ports)

# add port
time.sleep(i%3)
Collaborator

Can you add this as a comment in the code?

dvs.setup_db()
dvs.create_vlan("6")
dvs.create_vlan_member("6", PORT)
dvs.create_vlan_member("6", "Ethernet68")
Collaborator

Can we make Ethernet68 a defined constant at the beginning too?

Contributor Author

done
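The requested change can be sketched as hoisting the repeated literal into a module-level constant. The names and the `StubDvs` helper below are illustrative, not the actual test's identifiers:

```python
# Constants defined once at the top of the test module instead of
# repeating the string literals at every call site (hypothetical names).
VLAN_ID = "6"
SECOND_PORT = "Ethernet68"  # previously a repeated "Ethernet68" literal


class StubDvs:
    """Tiny stand-in for the dvs fixture's VLAN-member helpers."""

    def __init__(self):
        self.vlan_members = set()

    def create_vlan_member(self, vlan, port):
        self.vlan_members.add((vlan, port))

    def remove_vlan_member(self, vlan, port):
        self.vlan_members.discard((vlan, port))


dvs = StubDvs()
dvs.create_vlan_member(VLAN_ID, SECOND_PORT)
assert (VLAN_ID, SECOND_PORT) in dvs.vlan_members
dvs.remove_vlan_member(VLAN_ID, SECOND_PORT)
```

With the constant in place, renaming the peer port later touches one line rather than every call site.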

rc = dvs.servers[17].runcmd("ping -c 1 6.6.6.6")
assert rc == 0

dvs.remove_vlan_member("6", "Ethernet68")
Collaborator

Actually, we don't know whether the port was admin down or not; I think we should save the initial admin status and then restore it to whatever it was before the test case.

- add comments before the sleep
- set a define for Ethernet68
@liat-grozovik
Collaborator

@zhenggen-xu kind reminder: please review the recent changes following your feedback.

@liat-grozovik
Collaborator

/azp run

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@zhenggen-xu
Collaborator

@zhenggen-xu kind reminder: please review the recent changes following your feedback.

This is not addressed:
#2047 (comment)
