Feat/refactor peers mbs #4024
Conversation
Codecov Report
@@            Coverage Diff             @@
##           development     #4024      +/-   ##
===============================================
+ Coverage        75.22%    75.30%   +0.08%
===============================================
  Files              613       618       +5
  Lines            81922     82430     +508
===============================================
+ Hits             61627     62076     +449
- Misses           15623     15663      +40
- Partials          4672      4691      +19
Continue to review full report at Codecov.
…validatorInfoInterceptorProcessor, validatorInfoResolver
…-feat-refactor-peers-mbs-2022.05.02
…tor-peers-mbs-2022.05.02 Merge dev into feat refactor peers mbs 2022.05.02
Interceptor + Resolver
…eceived validator info
…processor anymore
Resolver + interceptor integration
…tor-peers-mbs Unit tests for feat/refactor-peers-mbs
# Conflicts:
#	common/enablers/enableEpochsHandler.go
#	common/enablers/epochFlags.go
#	common/interface.go
#	process/errors.go
#	sharding/mock/enableEpochsHandlerMock.go
#	sharding/nodesCoordinator/indexHashedNodesCoordinator.go
#	testscommon/dataRetriever/poolFactory.go
#	testscommon/enableEpochsHandlerStub.go
…factor-peers-mbs Merge v1.4.0 into feat refactor peers mbs
…refactor-peers-mbs Fix linter issue in feat/refactor-peers-mbs
System test passed.
Description of the reasoning behind the pull request (what feature was missing / how the problem was manifesting itself / what was the motive behind the refactoring)
In the old implementation, at the end of each epoch the metachain creates a meta block called the "start of epoch meta block", which gathers all the epoch-related information: rewards, validators info and so on. This block handles PeersMiniBlocks differently from the rest of the system, and not in an optimal way. Everywhere else, mini blocks hold only transaction hashes, which keeps the block body small and makes it cheaper to transport over the network. The one exception is the PeersMiniBlocks contained in each start of epoch meta block: each peers mini block carries the full marshalled ValidatorInfo DTO instead of its hash. This inflates the size of every start of epoch meta block. Right now each start of epoch meta block carries roughly 3200 marshalled validator info DTOs inside its peers mini blocks, and as the number of validators grows over time, the block will only get larger.
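To make the size problem concrete, here is a minimal, self-contained Go sketch of the old layout. The ValidatorInfo and MiniBlock types below are simplified illustrations, not the repository's actual definitions, and the 96-byte key plus JSON marshalling are only placeholders for the real serialization.

package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative stand-in for the real ValidatorInfo DTO; the actual struct has more fields.
type ValidatorInfo struct {
	PublicKey  []byte
	ShardId    uint32
	TempRating uint32
}

// Illustrative mini block: normally TxHashes holds only transaction hashes.
type MiniBlock struct {
	TxHashes [][]byte
}

func main() {
	vi := ValidatorInfo{PublicKey: make([]byte, 96), ShardId: 0, TempRating: 50}
	marshalled, _ := json.Marshal(vi)

	// Old approach: each entry in the peers mini block is the full marshalled DTO,
	// so the start of epoch meta block grows with the DTO size, not with a fixed hash size.
	peersMiniBlock := MiniBlock{}
	numValidators := 3200
	for i := 0; i < numValidators; i++ {
		peersMiniBlock.TxHashes = append(peersMiniBlock.TxHashes, marshalled)
	}

	total := 0
	for _, entry := range peersMiniBlock.TxHashes {
		total += len(entry)
	}
	fmt.Printf("approximate peers mini block payload for %d validators: %d bytes\n", numValidators, total)
}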
Proposed Changes
The scope of this feature is to change the structure of the PeersMiniBlocks in each start of epoch meta block so that they work the same way as every other mini block in the system. Each peers mini block should keep only the hash of the marshalled validator info DTO; the nodes in the network will receive these validator info txs on a separate topic, or request them on a newly created dedicated topic using the hashes inside each PeersMiniBlock. Achieving this required a broad set of changes: dedicated interceptors, resolvers, storages and pools, as well as changes to the creation/processing flow itself, which is a breaking change. For this reason a new dedicated activation flag was introduced in order to preserve backward compatibility (import-db, etc.).
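A minimal sketch of the new approach, under the same simplified types as above: only the hash of the marshalled DTO goes into the peers mini block, gated by the activation flag, while the full DTO is kept in a pool from which other nodes can fetch it. The names (validatorInfoPool, addToPeersMiniBlock, refactorPeersMbsEnabled) and the use of sha256/JSON are hypothetical stand-ins for the project's real marshaller, hasher, pools and enable-epochs handler.

package main

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"
)

// Simplified stand-ins for the real structures; names and fields are illustrative only.
type ValidatorInfo struct {
	PublicKey  []byte
	ShardId    uint32
	TempRating uint32
}

type MiniBlock struct {
	TxHashes [][]byte
}

// validatorInfoPool stands in for the dedicated pool/storage from which other nodes
// can obtain the DTO (received on its own topic or requested by hash via the resolver).
var validatorInfoPool = map[string][]byte{}

func addToPeersMiniBlock(mb *MiniBlock, vi ValidatorInfo, refactorPeersMbsEnabled bool) error {
	marshalled, err := json.Marshal(vi)
	if err != nil {
		return err
	}

	if !refactorPeersMbsEnabled {
		// Legacy behaviour: embed the full marshalled DTO in the mini block.
		mb.TxHashes = append(mb.TxHashes, marshalled)
		return nil
	}

	// New behaviour: keep only the hash in the mini block; the DTO itself travels on a
	// dedicated topic and can be requested by hash when missing.
	hash := sha256.Sum256(marshalled)
	validatorInfoPool[string(hash[:])] = marshalled
	mb.TxHashes = append(mb.TxHashes, hash[:])
	return nil
}

func main() {
	mb := &MiniBlock{}
	_ = addToPeersMiniBlock(mb, ValidatorInfo{PublicKey: make([]byte, 96)}, true)
	fmt.Printf("entry size with the feature active: %d bytes (hash only)\n", len(mb.TxHashes[0]))
}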
Testing procedure
We should test this feature as usual: we need an #all-in test with UPGRADE, so we can check that everything works as expected and no problems occur before, during the transition epoch (from the old approach to the new one), and after the new feature is activated. We can try many restarts before/during/after the transition epoch, and we also need a successful import-db in order to check and validate the full backward compatibility of this new feature.