
IF: Disaster_recovery scenario 2 test #72

Merged: 16 commits, May 1, 2024
Changes from 1 commit
2 changes: 2 additions & 0 deletions tests/disaster_recovery.py
@@ -9,6 +9,8 @@
 ###############################################################
 # disaster_recovery - Scenario 1
 #
+# Verify that if one node in the network has locked blocks then consensus can continue.
+#
 # Integration test with 4 finalizers (A, B, C, and D).
 #
 # The 4 nodes are cleanly shutdown in the following state:
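For context on what Scenario 1 exercises, here is a minimal sketch of the recovery assertion, using only harness calls that already appear in this diff (`getIrreversibleBlockNum`, `waitForLibToAdvance`); the `node0` handle and the timeout value are illustrative assumptions, not the test's actual code:

```python
# Hypothetical sketch: after restarting a network in which only one node kept
# its locked (voted-on but not yet final) blocks, finality should resume.
lib_before = node0.getIrreversibleBlockNum()

# LIB advancing past its pre-restart value shows the finalizers reached
# quorum again, i.e. consensus continued. The timeout is an assumption.
assert node0.waitForLibToAdvance(timeout=30), "LIB did not advance; consensus stalled"
assert node0.getIrreversibleBlockNum() > lib_before
```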
11 changes: 7 additions & 4 deletions tests/disaster_recovery_2.py
@@ -9,6 +9,9 @@
 ###############################################################
 # disaster_recovery - Scenario 2
 #

Review comment (Member): I like the detailed description here. It would be even better for each scenario to have one sentence describing the purpose of the test. I think our new tests should follow this test's pattern.

+# Verify that if finalizers are only locked on LIB blocks then all reversible blocks in the network can be lost
+# and consensus can continue.
+#
 # Integration test with 5 nodes (A, B, C, D, and P). Nodes A, B, C, and D each have one finalizer but no proposers.
 # Node P has a proposer but no finalizers. The finalizer policy consists of the four finalizers with a threshold of 3.
 # The proposer policy involves just the single proposer P.
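A toy model of the finality quorum this topology configures may help: four finalizers with a threshold of 3, so any three votes finalize a block. This is purely illustrative Python, not harness code; the names and the helper are assumptions:

```python
# Illustrative only: the quorum rule for a 4-finalizer, threshold-3 policy.
finalizers = {"A", "B", "C", "D"}
threshold = 3

def quorum_reached(votes):
    # Only votes from configured finalizers count toward the threshold.
    return len(set(votes) & finalizers) >= threshold

assert quorum_reached({"A", "B", "C"})   # any 3 of 4 suffice
assert not quorum_reached({"A", "B"})    # 2 of 4 do not
```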
@@ -84,24 +87,24 @@
Print(f"Snapshot head block number {ret_head_block_num}")

Print("Wait for snapshot node lib to advance")
node0.waitForBlock(ret_head_block_num+1, blockType=BlockType.lib)
assert node0.waitForBlock(ret_head_block_num+1, blockType=BlockType.lib), "Node0 did not advance to make snapshot block LIB"
assert node1.waitForLibToAdvance(), "Ndoe1 did not advance LIB after snapshot of Node0"

assert node0.waitForLibToAdvance(), "Node0 did not advance LIB after snapshot"
linh2931 marked this conversation as resolved.
Show resolved Hide resolved

Print("Pause production on Node0")
lib = node0.getIrreversibleBlockNum()
ret_json = node0.processUrllibRequest("producer", "pause")
node0.processUrllibRequest("producer", "pause")
# wait for lib because waitForBlock uses > not >=
assert node0.waitForBlock(lib, blockType=BlockType.lib), "Node0 did not advance LIB after pause"
time.sleep(1)

Print("Disconnect the producing node (Node0) from peer Node1")
ret_json = node0.processUrllibRequest("net", "disconnect", "localhost:9877")
node0.processUrllibRequest("net", "disconnect", "localhost:9877")
assert not node0.waitForLibToAdvance(timeout=10), "Node0 LIB still advancing after disconnect"

Print("Resume production on Node0")
ret_json = node0.processUrllibRequest("producer", "resume")
node0.processUrllibRequest("producer", "resume")
assert node0.waitForHeadToAdvance(blocksToAdvance=2)
libN = node0.getIrreversibleBlockNum()

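The pause, disconnect, resume sequence in the last hunk is the core of the scenario. Below is a hedged sketch of that sequence as a reusable helper, built only from calls already visible in this diff; the helper name, the `BlockType` import path, and the default peer address are assumptions for illustration:

```python
from TestHarness.Node import BlockType  # assumed import path

def isolate_producer(node, peer_addr="localhost:9877"):
    """Hypothetical helper mirroring the test's pause/disconnect/resume steps."""
    # Capture LIB, pause production, then wait for LIB to move past the
    # captured value: waitForBlock compares with > (not >=), so this blocks
    # until in-flight blocks finish finalizing after the pause.
    lib = node.getIrreversibleBlockNum()
    node.processUrllibRequest("producer", "pause")
    assert node.waitForBlock(lib, blockType=BlockType.lib), "LIB did not settle after pause"

    # Cut the peer link; without that peer's finalizer votes, LIB must stall.
    node.processUrllibRequest("net", "disconnect", peer_addr)
    assert not node.waitForLibToAdvance(timeout=10), "LIB still advancing after disconnect"

    # Resume production while isolated: the head advances locally even
    # though finality is stalled.
    node.processUrllibRequest("producer", "resume")
    assert node.waitForHeadToAdvance(blocksToAdvance=2), "head did not advance after resume"
```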