Update scaling test TE-14.1 #3556

Merged 9 commits on Jan 6, 2025. Changes from 1 commit.
feature/gribi/otg_tests/gribi_scaling/README.md: 39 changes (17 additions, 22 deletions)
@@ -13,29 +13,24 @@ Validate gRIBI scaling requirements.
* On DUT, create a policy-based forwarding rule to redirect all traffic received from DUT port-1 into VRF-1 (based on source-IP match criteria).
* Establish a gRIBI client connection with the DUT, negotiating FIBACK as the requested ack_type, and make the client become the leader.
* TODO: Using the gRIBI Modify RPC, install the following IPv4Entry sets and validate the specified behaviours (a fluent-client sketch of the connect-and-install pattern follows the removed block below):
* <Default VRF> IPv4Entries -> NHG -> Multiple NH.
  * Inject IPv4Entries (IPBlockDefaultVRF: 198.18.196.1/22) in the default VRF.
  * Install 64 L3 sub-interface IPs to a NextHopGroup containing one NextHop specified to ATE port-2.
  * Validate that the entries are installed as FIB_PROGRAMMED.
* <VRF1> IPv4Entries -> Multiple NHG -> Multiple NH.
  * Inject IPv4Entries (IPBlock1: "198.18.0.1/18") in VRF1.
  * Install 1000 IPs from IPBlockDefaultVRF to 10 NextHopGroups containing 100 NextHops each.
  * Validate that the entries are installed as FIB_PROGRAMMED.
* <VRF2> IPv4Entries -> Multiple NHG -> Multiple NH.
  * Inject IPv4Entries (IPBlock2: "198.18.64.1/18") in VRF2.
  * Install (*repeat*) 17.5K NHs from 1K /32s from IPBlockDefaultVRF to 35 NextHopGroups containing 45 NextHops each.
  * Validate that the entries are installed as FIB_PROGRAMMED.
* <VRF3> IPv4Entries -> Multiple NHG -> Multiple NH.
  * Inject IPv4Entries (IPBlock3: "198.18.128.1/18") in VRF3.
  * Install IPinIP decap-then-encap for the first 500 /32s from <IPBlockVRF1> to 500 NextHopGroups containing 1 NextHop each.
  * Validate that the entries are installed as FIB_PROGRAMMED.
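
A minimal sketch of the connect-and-install pattern described above, using the gribigo fluent API that featureprofiles tests are written against. The target address, network-instance names, indexes, and prefix are illustrative assumptions, and `installChain` is a hypothetical helper, not part of the test:

```go
package gribiscaling_test

import (
	"context"
	"testing"

	"github.com/openconfig/gribigo/fluent"
)

// installChain sketches the pattern used throughout this test: connect with
// FIBACK negotiated, become the elected leader, then install a
// NextHop -> NextHopGroup -> IPv4Entry chain via the Modify RPC.
func installChain(ctx context.Context, t *testing.T, target string) {
	c := fluent.NewClient()
	// Negotiate RIB_AND_FIB_ACK and take leadership via the election ID.
	c.Connection().
		WithTarget(target).
		WithRedundancyMode(fluent.ElectedPrimaryClient).
		WithInitialElectionID(1, 0). // (low, high) words of the election ID
		WithPersistence().
		WithFIBACK()
	c.Start(ctx, t)
	defer c.Stop(t)
	c.StartSending(ctx, t)

	// One NH -> NHG -> IPv4Entry chain in the default network instance.
	c.Modify().AddEntry(t,
		fluent.NextHopEntry().WithNetworkInstance("DEFAULT").
			WithIndex(1).WithIPAddress("192.0.2.2"),
		fluent.NextHopGroupEntry().WithNetworkInstance("DEFAULT").
			WithID(1).AddNextHop(1, 1),
		fluent.IPv4Entry().WithNetworkInstance("DEFAULT").
			WithPrefix("198.18.196.0/22").WithNextHopGroup(1),
	)
	// Await blocks until every pending operation has been ACKed by the DUT.
	if err := c.Await(ctx, t); err != nil {
		t.Fatalf("gRIBI entries were not ACKed: %v", err)
	}
}
```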
* <Default VRF>
Collaborator: The bullet format/indent seems messed up (see preview).

  * A) Install 400 NextHops, egressing out different interfaces.
  * B) Install 200 NextHopGroups. Each points at 8 NextHops from A).
  * C) Install 200 IPv4Entries, each pointing at a unique NHG (1:1) from B).
  * D) Install 200 NextHops. Each will redirect to an IP from C).
  * E) Install 100 NextHopGroups. Each will contain 2 NextHops from D). The backup next_hop_group will redirect to VRF2.
  * F) Install 100 NextHopGroups. Each will contain 2 NextHops from D). The backup next_hop_group will decap and redirect to the DEFAULT VRF (the E/F backup pattern is sketched in code after the review thread below).
Contributor: If we are using the same final NHs (in this case they will resolve to A), then how are we planning to switch to the backup path in VRF1 and validate traffic over the backup path?

Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I think for now this is fine; we test backup path actuation, this is to ensure the scale is proper. If we need to later, we can split B) into two groups.
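
A sketch of the E)/F) backup next_hop_group pattern and a G)-style decap + re-encap NextHop, again via the gribigo fluent API. All indexes, IDs, and addresses here are illustrative assumptions (NextHops 401/402 stand in for entries from D), and `addBackupAndEncapEntries` is a hypothetical helper):

```go
package gribiscaling_test

import (
	"testing"

	"github.com/openconfig/gribigo/fluent"
)

// addBackupAndEncapEntries sketches steps E/F and G. It assumes NextHops
// 401 and 402 (from step D) are already installed.
func addBackupAndEncapEntries(t *testing.T, c *fluent.GRIBIClient) {
	c.Modify().AddEntry(t,
		// Backup path used by F): decapsulate, then re-resolve the inner
		// destination in the DEFAULT network instance.
		fluent.NextHopEntry().WithNetworkInstance("DEFAULT").
			WithIndex(2000).
			WithDecapsulateHeader(fluent.IPinIP).
			WithNextHopNetworkInstance("DEFAULT"),
		fluent.NextHopGroupEntry().WithNetworkInstance("DEFAULT").
			WithID(2000).AddNextHop(2000, 1),

		// F): a primary NHG with two NHs from D) and the backup NHG attached.
		fluent.NextHopGroupEntry().WithNetworkInstance("DEFAULT").
			WithID(100).
			AddNextHop(401, 1).
			AddNextHop(402, 1).
			WithBackupNHG(2000),

		// G): decap the outer header, re-encap IPinIP, resolve in VRF2.
		fluent.NextHopEntry().WithNetworkInstance("DEFAULT").
			WithIndex(3000).
			WithDecapsulateHeader(fluent.IPinIP).
			WithEncapsulateHeader(fluent.IPinIP).
			WithIPinIP("203.0.113.1", "198.51.100.1").
			WithNextHopNetworkInstance("VRF2"),
	)
}
```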

  * G) Install 700 NextHops. Each will decap + re-encap to an IP in VRF2.
  * H) Install 700 NextHopGroups. Each will point to a NextHop from G) and have a backup next_hop_group to decap and redirect to the DEFAULT VRF.
* <VRF1>
  * Install 9000 IPv4Entries. Each points to a NextHopGroup from E).
* <VRF2>
  * Install 9000 IPv4Entries (same IP addresses as VRF1). Each points to a NextHopGroup from F).
Contributor: If we are using the same IP addresses as VRF1, does that mean we are not planning to use the same destination IP as received in the encapped packet?

Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Updated the VRF procedure - there should be different bheaviour based on the SRC ip. Let me know if this is unclear.

Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

sure that is correct if we directly hit VRF2 through PBF match . but if we want to validate VRF1 backup path and make sure that really dst_ip and src_ip has been updated as per encap then we need to have different prefixes in VRF2.

Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

In this test I don't think we want to validate the repaired path; we have more-specific tests for that.

* <VRF3>
  * Install 9000 IPv4Entries (same IP addresses as VRF1). Each points to a NextHopGroup from H).
* Validate that each entry above is installed as FIB_PROGRAMMED (see the validation sketch after this list).
* TODO: Add flows destined to the IPBlocks and ensure ATE port-2 receives them with no loss.
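
A sketch of the FIB_PROGRAMMED validation, assuming gribigo's chk and fluent result helpers; `checkFIBProgrammed` is a hypothetical name, not the test's actual function:

```go
package gribiscaling_test

import (
	"testing"

	"github.com/openconfig/gribigo/chk"
	"github.com/openconfig/gribigo/constants"
	"github.com/openconfig/gribigo/fluent"
)

// checkFIBProgrammed asserts that the Add operation for the given prefix was
// ACKed by the DUT as installed in the FIB (i.e. FIB_PROGRAMMED).
func checkFIBProgrammed(t *testing.T, c *fluent.GRIBIClient, prefix string) {
	chk.HasResult(t, c.Results(t),
		fluent.OperationResult().
			WithIPv4Operation(prefix).
			WithOperationType(constants.Add).
			WithProgrammingResult(fluent.InstalledInFIB).
			AsResult(),
		chk.IgnoreOperationID(),
	)
}
```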
