Currently, Blocksync uses a static request length of 10 (the number of blocks requested at once).
It would be better to make the request length dynamic: issue longer requests while they are succeeding, and shorter ones when they fail.
For example, Lotus starts with a window of 500 and shrinks it when a request fails.
Acceptance Criteria
Blocksync request lengths are dynamic, growing or shrinking based on what is feasible with the current peers and network conditions.
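For illustration, here is a minimal Rust sketch of the grow/shrink behavior described above. All names and constants (`BlocksyncWindow`, `MIN_WINDOW`, `MAX_WINDOW`) are hypothetical, not Forest's actual API, and halving/doubling is just one plausible adjustment policy:

```rust
/// Minimal sketch of an adaptive Blocksync window.
/// Names and constants are illustrative, not Forest's actual API.
const MIN_WINDOW: u64 = 1;
const MAX_WINDOW: u64 = 500; // Lotus's starting window size

pub struct BlocksyncWindow {
    size: u64,
}

impl BlocksyncWindow {
    pub fn new() -> Self {
        // Start optimistic, like Lotus, and back off on failure.
        Self { size: MAX_WINDOW }
    }

    pub fn size(&self) -> u64 {
        self.size
    }

    /// Halve the window after a failed request, flooring at MIN_WINDOW.
    pub fn on_failure(&mut self) {
        self.size = (self.size / 2).max(MIN_WINDOW);
    }

    /// Double the window after a successful request, capping at MAX_WINDOW.
    pub fn on_success(&mut self) {
        self.size = (self.size * 2).min(MAX_WINDOW);
    }
}

fn main() {
    let mut window = BlocksyncWindow::new();
    window.on_failure(); // e.g. a timed-out request
    assert_eq!(window.size(), 250);
    window.on_success();
    assert_eq!(window.size(), 500);
}
```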
To clarify, Lotus has a default window size of 500, and it sends the request to 16 peers before changing the window size (this used to be 5, so we should probably change our side as well, assuming there was a good reason for the increase).
Ideally we would benchmark this on our own client and make sure our libp2p channel sizes are large enough to handle receiving 500 tipset headers over the network (it is a lot).
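A rough sketch of the Lotus-style strategy from the comment above: retry the same window against several peers before shrinking. This builds on the hypothetical `BlocksyncWindow` from the earlier sketch; `PeerId`, `TipsetHeader`, and `fetch_tipsets` are stand-ins, not actual libp2p or Forest types:

```rust
// Stand-in types; in Forest these would come from libp2p / the blocks crate.
struct PeerId;
struct TipsetHeader;

// Hypothetical network call: request `count` tipset headers starting at
// `start` from one peer. Returns Err on timeout or a bad response.
fn fetch_tipsets(_peer: &PeerId, _start: u64, _count: u64) -> Result<Vec<TipsetHeader>, ()> {
    Err(()) // stubbed out for the sketch
}

/// Lotus's current threshold: try this many peers at the current window
/// size before shrinking the window.
const SHRINK_AFTER_PEERS: usize = 16;

fn request_with_fallback(
    peers: &[PeerId],
    window: &mut BlocksyncWindow,
    start: u64,
) -> Option<Vec<TipsetHeader>> {
    for peer in peers.iter().take(SHRINK_AFTER_PEERS) {
        if let Ok(headers) = fetch_tipsets(peer, start, window.size()) {
            window.on_success();
            return Some(headers);
        }
    }
    // All attempted peers failed at this window size: shrink and let the
    // caller retry the same range with the smaller window.
    window.on_failure();
    None
}
```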
After discussing with @austinabell: rather than implementing a dynamic window size as described above, I will implement a sync configuration that sits within the chainsyncer, letting users set the window size and the number of sync-related worker tasks from the command line.
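A minimal sketch of what such a sync configuration might look like, assuming hypothetical field names and illustrative defaults; `structopt` is used here for the command-line flags, though Forest's actual CLI wiring may differ:

```rust
use structopt::StructOpt;

/// Sync-related options, exposed as CLI flags (hypothetical names).
#[derive(Debug, StructOpt)]
pub struct SyncConfig {
    /// Number of tipsets requested per Blocksync request (illustrative default).
    #[structopt(long, default_value = "200")]
    pub req_window: u64,

    /// Number of concurrent sync worker tasks (illustrative default).
    #[structopt(long, default_value = "8")]
    pub worker_tasks: usize,
}

fn main() {
    // Parses e.g. `--req-window 100 --worker-tasks 4` from the command line.
    let config = SyncConfig::from_args();
    println!(
        "blocksync window = {}, sync workers = {}",
        config.req_window, config.worker_tasks
    );
}
```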