
Retry approval on availability failure if the check is still needed #6807

Merged: 10 commits merged into master from alexggh/approval_voting_retry on Jan 14, 2025

Conversation

@alexggh (Contributor) commented on Dec 9, 2024

Recovering the PoV can fail in situations where the node has just restarted and the DHT topology hasn't been fully discovered yet, so the current node can't connect to most of its peers. This is bad because gossiping the assignment only requires being connected to a few peers, so the assignment gets distributed, but because we can't recover the PoV we can't approve the candidate, and other nodes will see this as a no-show.

This becomes problematic in the scenario where a lot of nodes restart at the same time, because you end up with a lot of no-shows in the network that are never covered. In that case it makes sense for nodes to retry approving the candidate at a later point in time, and to retry several times if the block containing the candidate still hasn't been approved.
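As a rough sketch of the intended behaviour (the ApprovalRetry type and on_recovery_failure helper are illustrative names rather than the exact types added by this PR; only the attempts_remaining and backoff fields mirror the diff discussed later in the review):

```rust
use std::time::Duration;

/// Illustrative ticket describing a pending re-attempt for a candidate whose
/// PoV recovery failed (hypothetical type, not the PR's exact struct).
struct ApprovalRetry {
    candidate_hash: [u8; 32],
    attempts_remaining: u32,
    backoff: Duration,
}

/// On recovery failure, only queue another attempt if the block containing
/// the candidate still needs our approval and we have attempts left.
fn on_recovery_failure(retry: ApprovalRetry, still_needed: bool) -> Option<ApprovalRetry> {
    if !still_needed || retry.attempts_remaining == 0 {
        return None;
    }
    Some(ApprovalRetry { attempts_remaining: retry.attempts_remaining - 1, ..retry })
}
```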

TODO

  • Add a subsystem test.

Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
@alexggh alexggh requested review from sandreim, ordian, eskimor and AndreiEres and removed request for sandreim December 9, 2024 16:27
@Polkadot-Forum

This pull request has been mentioned on Polkadot Forum. There might be relevant details there:

https://forum.polkadot.network/t/2025-11-25-kusama-parachains-spammening-aftermath/11108/1

@burdges commented on Dec 11, 2024

Aside: JAM puts validators' IPs etc. on-chain. We could figure out whether on-chain is better. Ideally we'd want on-chain for equivocation defences anyway.

@AndreiEres (Contributor) left a comment

Can this somehow interfere with normal mode?

polkadot/node/core/approval-voting/src/lib.rs (review thread resolved)
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
@alexggh alexggh force-pushed the alexggh/approval_voting_retry branch from 5835565 to 1346e76 on December 20, 2024 15:22
@alindima (Contributor) left a comment

LGTM

@ordian (Member) commented on Dec 23, 2024

Alternatively, we could change the

to try to connect if we are disconnected. This doesn't have more granular retry functionality, but it's a much simpler change.

@alindima (Contributor) replied:

Alternatively, we could change the

to try to connect if we are disconnected. This doesn't have more granular retry functionality, but it's a much simpler change.

We could do this, but I doubt it will make a difference. If the strategy of fetching from backers fails, we fall back to chunk fetching, which uses IfDisconnected::TryConnect.

Alex is saying that:

the current node can't connect to most of its peers.

If it were just an issue of not trying to connect, then chunk recovery should have worked.

@ordian (Member) commented on Jan 6, 2025

Aside: JAM puts validators' IPs etc. on-chain. We could figure out whether on-chain is better. Ideally we'd want on-chain for equivocation defences anyway.

Agree, we should explore this. It would also solve the problem with restarts and an empty DHT cache.

@ordian (Member) left a comment

Approving it, but it feels like a workaround for an unreliable/slow authority discovery, which we should fix and then remove this workaround.

@alexggh (Contributor, Author) commented on Jan 6, 2025

Approving it, but it feels like a workaround for an unreliable/slow authority discovery, which we should fix and then remove this workaround.

I did discuss with @lexnv speeding up authority-discovery by saving the cache on disk so it is available on restart. However, even with speedy authority discovery, I still think this would be a good thing to have for robustness, because it is a fallback for when networking calls fail for various other reasons, and network calls are expected to fail every now and then.

@sandreim (Contributor) left a comment

LGTM! @ordian proposed an alternative solution which should live in AD, but I agree that this fix adds some robustness on top of AD.

Off the top of my head, a better place to implement PoV retry would be in the availability-recovery subsystem. This would be superior in terms of latency (it's push vs pull/poll), as this subsystem could keep track of all the PoVs that failed to be retrieved and retry fetching chunks immediately as soon as peers connect (or even use more complex strategies, like speculative availability).

The only change needed in approval voting would be to notify av recovery that it's no longer interested in some PoV.

WDYT?

polkadot/node/core/approval-voting/src/lib.rs (outdated review thread, resolved)
core_index,
session_index,
attempts_remaining: retry.attempts_remaining - 1,
backoff: retry.backoff,
A contributor commented on this diff:

As time passes, the chances of connecting to enough peers increase; wouldn't it make sense to decrease the back-off as the retry count increases? This would help approve candidates faster.

@alexggh (Contributor, Author) replied:

Reconnecting to peers is on the order of minutes, so we wouldn't gain much by reducing the backoff. Also, with backoffs you usually want to increase the delay as the number of attempts grows, because you don't want many failed attempts to start stampeding, which makes things worse. However, we can't increase it here because of this: #6807 (comment), so I think 1 min is an acceptable compromise.
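As an illustration of the fixed back-off being discussed, here is a minimal sketch; the helper name and structure are hypothetical, and only the roughly one-minute constant mirrors the comment above:

```rust
use std::time::Duration;

/// Fixed delay between attempts (illustrative constant; roughly the one
/// minute mentioned above, kept constant rather than exponential).
const RETRY_BACKOFF: Duration = Duration::from_secs(60);

/// Run `attempt` until it succeeds or the attempt budget is exhausted,
/// sleeping a constant interval between tries instead of growing the delay.
async fn retry_with_fixed_backoff<F, Fut>(attempts: u32, mut attempt: F) -> Result<(), ()>
where
    F: FnMut() -> Fut,
    Fut: std::future::Future<Output = Result<(), ()>>,
{
    for i in 0..attempts {
        if attempt().await.is_ok() {
            return Ok(());
        }
        // Don't sleep after the final failed attempt.
        if i + 1 < attempts {
            futures_timer::Delay::new(RETRY_BACKOFF).await;
        }
    }
    Err(())
}
```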

@alindima (Contributor) commented on Jan 7, 2025

Off the top of my head, a better place to implement PoV retry would be in the availability-recovery subsystem. This would be superior in terms of latency (it's push vs pull/poll), as this subsystem could keep track of all the PoVs that failed to be retrieved and retry fetching chunks immediately as soon as peers connect (or even use more complex strategies, like speculative availability).

The only change needed in approval voting would be to notify av recovery that it's no longer interested in some PoV.

I don't think this is particularly better. Latency of transmitting messages between subsystems should be negligible.
av-recovery is being used by multiple actors: dispute-coordinator, approval-voting and collator pov-recovery. I doubt that they all need the same number of retries or in the same scenarios. In addition, I'd like to keep the separation of concerns we have now. av-recovery is tasked with recovering the available data. If that's not possible, higher-level code decides what needs to be done.

@sandreim (Contributor) commented on Jan 7, 2025

I don't think this is particularly better. Latency of transmitting messages between subsystems should be negligible. av-recovery is being used by multiple actors: dispute-coordinator, approval-voting and collator pov-recovery.

Yes, but why is this a bad thing? If PoV recovery fails, the dispute-coordinator currently gives up, and it would be more robust to use a retry mechanism.

I doubt that they all need the same number of retries or in the same scenarios. In addition, I'd like to keep the separation of concerns we have now. av-recovery is tasked with recovering the available data.

My proposal doesn't change the concerns of av-recovery, but makes it more robust. It should be easy to pass the retry params in the RecoverAvailableData message (see the sketch below).
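For illustration, a rough sketch of what threading retry parameters through the request could look like; the RetryParams struct and the retry_params field are hypothetical additions, and the message variant shown here is heavily simplified compared to the real RecoverAvailableData:

```rust
use std::time::Duration;

/// Hypothetical retry knobs a requester could pass along with the request.
#[derive(Clone, Copy)]
struct RetryParams {
    attempts: u32,
    backoff: Duration,
}

/// Simplified stand-in for the availability-recovery message; the real
/// RecoverAvailableData variant carries the candidate receipt, session index,
/// backing group and a response channel instead of these placeholder fields.
enum AvailabilityRecoveryMessage {
    RecoverAvailableData {
        candidate_hash: [u8; 32],
        session_index: u32,
        retry_params: Option<RetryParams>,
    },
}
```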

Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
@alexggh (Contributor, Author) commented on Jan 7, 2025

Off the top of my head, a better place to implement PoV retry would be in the availability-recovery subsystem. This would be superior in terms of latency (it's push vs pull/poll), as this subsystem could keep track of all the PoVs that failed to be retrieved and retry fetching chunks immediately as soon as peers connect (or even use more complex strategies, like speculative availability).

The only change needed in approval voting would be to notify av recovery that it's no longer interested in some PoV.

I don't think this is particularly better. Latency of transmitting messages between subsystems should be negligible. av-recovery is being used by multiple actors: dispute-coordinator, approval-voting and collator pov-recovery. I doubt that they all need the same number of retries or in the same scenarios. In addition, I'd like to keep the separation of concerns we have now. av-recovery is tasked with recovering the available data. If that's not possible, higher-level code decides what needs to be done.

Alright, I looked a bit into this, and moving it into availability-recovery wouldn't be a straightforward task. The reason is that the implementers of RecoveryStrategy are stateful, are consumed, and can't be cloned, so you can't simply re-run the strategies from here:

while let Some(current_strategy) = self.strategies.pop_front() {

Instead, you would end up having to propagate the failure here:

output = state.ongoing_recoveries.select_next_some() => {

and re-run the logic for building the recovery strategies from there, which means you also have to carry along the parameters needed for rebuilding the strategies array.

I don't think that would be easier or less risky to understand and implement than the currently proposed approach, but it would have the benefit that disputes could also opt in to use it.

Given that the current PR improves the situation, I would be inclined to keep the current approach rather than invest in getting this knob into availability-recovery; let me know what you think!
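To illustrate the constraint, here is a heavily simplified, synchronous sketch of the consumed-strategies pattern; the real RecoveryStrategy trait in availability-recovery is async and richer, so treat this as an analogy only:

```rust
use std::collections::VecDeque;

/// Simplified stand-in for the real trait: strategies take `self` by value,
/// so running one consumes it.
trait RecoveryStrategy {
    fn run(self: Box<Self>) -> Result<Vec<u8>, ()>;
}

/// Mirrors the `while let Some(..) = self.strategies.pop_front()` loop quoted
/// above: once the queue is drained there is nothing left to retry, and the
/// boxed strategies can't be cloned to rebuild it.
fn run_strategies(mut strategies: VecDeque<Box<dyn RecoveryStrategy>>) -> Result<Vec<u8>, ()> {
    while let Some(strategy) = strategies.pop_front() {
        if let Ok(available_data) = strategy.run() {
            return Ok(available_data);
        }
    }
    // All strategies consumed and failed; a retry would require the caller to
    // rebuild the whole strategies array from the original parameters.
    Err(())
}
```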

@paritytech-workflow-stopper

All GitHub workflows were cancelled due to the failure of one of the required jobs.
Failed workflow url: https://github.com/paritytech/polkadot-sdk/actions/runs/12654933804
Failed job name: cargo-clippy

@sandreim (Contributor) commented on Jan 7, 2025

Alright, I looked a bit into this, and moving it into availability-recovery wouldn't be a straightforward task. The reason is that the implementers of RecoveryStrategy are stateful, are consumed, and can't be cloned, so you can't simply re-run the strategies from here:

while let Some(current_strategy) = self.strategies.pop_front() {

Instead, you would end up having to propagate the failure here:

output = state.ongoing_recoveries.select_next_some() => {

and re-run the logic for building the recovery strategies from there, which means you also have to carry along the parameters needed for rebuilding the strategies array.
I don't think that would be easier or less risky to understand and implement than the currently proposed approach, but it would have the benefit that disputes could also opt in to use it.

Given that the current PR improves the situation, I would be inclined to keep the current approach rather than invest in getting this knob into availability-recovery; let me know what you think!

The only major downside of the current approach is that for each retry we re-fetch the same chunks as in the previous attempt, so in the worst-case scenario we'd be fetching the PoV roughly 16 times, which increases the bandwidth cost.

@alindima (Contributor) commented on Jan 8, 2025

Off the top of my head, a better place to implement PoV retry would be in the availability-recovery subsystem. This would be superior in terms of latency (it's push vs pull/poll), as this subsystem could keep track of all the PoVs that failed to be retrieved and retry fetching chunks immediately as soon as peers connect (or even use more complex strategies, like speculative availability).

The only change needed in approval voting would be to notify av recovery that it's no longer interested in some PoV.

I don't think this is particularly better. Latency of transmitting messages between subsystems should be negligible. av-recovery is being used by multiple actors: dispute-coordinator, approval-voting and collator pov-recovery. I doubt that they all need the same number of retries or in the same scenarios. In addition, I'd like to keep the separation of concerns we have now. av-recovery is tasked with recovering the available data. If that's not possible, higher-level code decides what needs to be done.

Alright, I looked a bit into this, and moving it into availability-recovery wouldn't be a straightforward task. The reason is that the implementers of RecoveryStrategy are stateful, are consumed, and can't be cloned, so you can't simply re-run the strategies from here:

while let Some(current_strategy) = self.strategies.pop_front() {

Instead, you would end up having to propagate the failure here:

output = state.ongoing_recoveries.select_next_some() => {

and re-run the logic for building the recovery strategies from there, which means you also have to carry along the parameters needed for rebuilding the strategies array.
I don't think that would be easier or less risky to understand and implement than the currently proposed approach, but it would have the benefit that disputes could also opt in to use it.

Given that the current PR improves the situation, I would be inclined to keep the current approach rather than invest in getting this knob into availability-recovery; let me know what you think!

We could modify the RecoveryTask to retry the last strategy if it fails with an Unavailable error, up to a configurable number of retries. Or we could just chain those X extra strategies onto the strategies: VecDeque<Box<dyn RecoveryStrategy<Sender>>> that's part of the task (a rough sketch of this follows below).
The only param that needs to be recreated for the FetchChunks strategy is the number of validators.
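To make the "chain extra strategies" idea more concrete, here is a rough sketch that reuses the simplified trait from the earlier sketch; the factory closure and function name are hypothetical, and the real strategies are async and generic over a Sender:

```rust
use std::collections::VecDeque;

// Reuses the simplified RecoveryStrategy trait from the earlier sketch.
trait RecoveryStrategy {
    fn run(self: Box<Self>) -> Result<Vec<u8>, ()>;
}

/// Append `extra_retries` freshly built chunk-fetching strategies to the back
/// of the task's strategy queue, so the task falls back to another chunk
/// fetch instead of giving up after the first unavailable result.
fn chain_chunk_retries(
    strategies: &mut VecDeque<Box<dyn RecoveryStrategy>>,
    extra_retries: usize,
    make_fetch_chunks: impl Fn() -> Box<dyn RecoveryStrategy>,
) {
    for _ in 0..extra_retries {
        strategies.push_back(make_fetch_chunks());
    }
}
```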

I'm not sure, however, if this would end up being easier to implement and reason about. Btw, we're doing a similar thing in collators for pov-recovery:

if self.candidates_in_retry.insert(block_hash) {

We only retry once here, with no extra sleeps. It could indeed help us deduplicate this code and has the benefit Andrei mentioned (that already-fetched chunks wouldn't be re-requested), but I'm personally also fine with logging an issue about this and handling it as a refactor if we get the time and the code indeed turns out to be nice enough.

@alexggh (Contributor, Author) commented on Jan 8, 2025

Alright, I looked a bit into this, and moving it into availability-recovery wouldn't be a straightforward task. The reason is that the implementers of RecoveryStrategy are stateful, are consumed, and can't be cloned, so you can't simply re-run the strategies from here:

Another reason why we can't isolate this logic just in availability-recovery is this:

match work.timeout(APPROVAL_CHECKING_TIMEOUT).await {

which would make the future calling into availability-recovery time out after only 2 minutes, so further refactoring in approval-voting would also be needed to use that.
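For context, a simplified equivalent of that timeout wrapper, built from futures primitives rather than the timeout extension used in the codebase; the 120-second constant only reflects the "2 minutes" mentioned above:

```rust
use std::time::Duration;
use futures::{pin_mut, select, FutureExt};

/// Mirrors the two-minute bound mentioned above (illustrative constant).
const APPROVAL_CHECKING_TIMEOUT: Duration = Duration::from_secs(120);

/// The whole recovery-and-check future runs under one timeout, so any retries
/// hidden inside availability-recovery would also have to finish within it,
/// whereas retries driven from approval-voting are scheduled outside of it.
async fn with_approval_timeout<F>(work: F) -> Option<F::Output>
where
    F: std::future::Future,
{
    let work = work.fuse();
    let timeout = futures_timer::Delay::new(APPROVAL_CHECKING_TIMEOUT).fuse();
    pin_mut!(work, timeout);
    select! {
        res = work => Some(res),
        _ = timeout => None,
    }
}
```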

It could indeed help us deduplicate this code and has the benefit Andrei mentioned (that already-fetched chunks wouldn't be re-requested), but I'm personally also fine with logging an issue about this and handling it as a refactor if we get the time and the code indeed turns out to be nice enough.

Given that I would want to have this retry in approval-voting sooner rather than later, I'm also in favour of logging an issue for this generic mechanism and merging the PR, considering the downsides are not that critical in my opinion.

Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
@sandreim (Contributor) commented on Jan 8, 2025

It could indeed help us deduplicate this code and has the benefit Andrei mentioned (that already-fetched chunks wouldn't be re-requested), but I'm personally also fine with logging an issue about this and handling it as a refactor if we get the time and the code indeed turns out to be nice enough.

Given that I would want to have this retry in approval-voting sooner rather than later, I'm also in favour of logging an issue for this generic mechanism and merging the PR, considering the downsides are not that critical in my opinion.

Sounds good, let's go 🚀

Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
@alexggh alexggh added the A4-needs-backport label (Pull request must be backported to all maintained releases) on Jan 13, 2025
@lexnv (Contributor) left a comment

LGTM! 👍

@alexggh alexggh added this pull request to the merge queue Jan 14, 2025
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Jan 14, 2025
@alexggh alexggh enabled auto-merge January 14, 2025 14:16
@alexggh alexggh added this pull request to the merge queue Jan 14, 2025
Merged via the queue into master with commit 6878ba1 Jan 14, 2025
200 of 205 checks passed
@alexggh alexggh deleted the alexggh/approval_voting_retry branch January 14, 2025 15:28
@paritytech-cmd-bot-polkadot-sdk

Created backport PR for stable2407:

Please cherry-pick the changes locally and resolve any conflicts.

git fetch origin backport-6807-to-stable2407
git worktree add --checkout .worktree/backport-6807-to-stable2407 backport-6807-to-stable2407
cd .worktree/backport-6807-to-stable2407
git reset --hard HEAD^
git cherry-pick -x 6878ba1f399b628cf456ad3abfe72f2553422e1f
git push --force-with-lease

@paritytech-cmd-bot-polkadot-sdk

Created backport PR for stable2409:

Please cherry-pick the changes locally and resolve any conflicts.

git fetch origin backport-6807-to-stable2409
git worktree add --checkout .worktree/backport-6807-to-stable2409 backport-6807-to-stable2409
cd .worktree/backport-6807-to-stable2409
git reset --hard HEAD^
git cherry-pick -x 6878ba1f399b628cf456ad3abfe72f2553422e1f
git push --force-with-lease

@paritytech-cmd-bot-polkadot-sdk

Successfully created backport PR for stable2412:

github-actions bot pushed a commit that referenced this pull request Jan 14, 2025
…6807)

Recovering the PoV can fail in situations where the node has just restarted and the DHT topology hasn't been fully discovered yet, so the current node can't connect to most of its peers. This is bad because gossiping the assignment only requires being connected to a few peers, so the assignment gets distributed, but because we can't recover the PoV we can't approve the candidate, and other nodes will see this as a no-show.

This becomes problematic in the scenario where a lot of nodes restart at the same time, because you end up with a lot of no-shows in the network that are never covered. In that case it makes sense for nodes to retry approving the candidate at a later point in time, and to retry several times if the block containing the candidate still hasn't been approved.

## TODO
- [x] Add a subsystem test.

---------

Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
(cherry picked from commit 6878ba1)
EgorPopelyaev pushed a commit that referenced this pull request Jan 15, 2025
Backport #6807 into `stable2409` from alexggh.

See the
[documentation](https://github.com/paritytech/polkadot-sdk/blob/master/docs/BACKPORT.md)
on how to use this bot.

<!--
  # To be used by other automation, do not modify:
  original-pr-number: #${pull_number}
-->

---------

Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Co-authored-by: Alexandru Gheorghe <49718502+alexggh@users.noreply.github.com>
Co-authored-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
EgorPopelyaev pushed a commit that referenced this pull request Jan 15, 2025
Backport #6807 into `stable2412` from alexggh.

See the
[documentation](https://github.com/paritytech/polkadot-sdk/blob/master/docs/BACKPORT.md)
on how to use this bot.

<!--
  # To be used by other automation, do not modify:
  original-pr-number: #${pull_number}
-->

Co-authored-by: Alexandru Gheorghe <49718502+alexggh@users.noreply.github.com>
EgorPopelyaev pushed a commit that referenced this pull request Jan 15, 2025
Backport #6807 into `stable2407` from alexggh.

See the
[documentation](https://github.com/paritytech/polkadot-sdk/blob/master/docs/BACKPORT.md)
on how to use this bot.

<!--
  # To be used by other automation, do not modify:
  original-pr-number: #${pull_number}
-->

---------

Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Co-authored-by: Alexandru Gheorghe <49718502+alexggh@users.noreply.github.com>
Co-authored-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
ordian added a commit that referenced this pull request Jan 16, 2025
* master: (33 commits)
  Implement `pallet-asset-rewards` (#3926)
  [pallet-revive] Add host function `to_account_id` (#7091)
  [pallet-revive] Remove revive events (#7164)
  [pallet-revive] Remove debug buffer (#7163)
  litep2p: Provide partial results to speedup GetRecord queries (#7099)
  [pallet-revive] Bump asset-hub westend spec version (#7176)
  Remove 0 as a special case in gas/storage meters (#6890)
  [pallet-revive] Fix `caller_is_root` return value (#7086)
  req-resp/litep2p: Reject inbound requests from banned peers (#7158)
  Add "run to block" tools (#7109)
  Fix reversed error message in DispatchInfo (#7170)
  approval-voting: Make importing of duplicate assignment idempotent (#6971)
  Parachains: Use relay chain slot for velocity measurement (#6825)
  PRDOC: Document `validate: false` (#7117)
  xcm: convert properly assets in xcmpayment apis (#7134)
  CI: Only format umbrella crate during umbrella check (#7139)
  approval-voting: Fix sending of assignments after restart (#6973)
  Retry approval on availability failure if the check is still needed (#6807)
  [pallet-revive-eth-rpc] persist eth transaction hash (#6836)
  litep2p: Sufix litep2p to the identify agent version for visibility (#7133)
  ...
Nathy-bajo pushed a commit to Nathy-bajo/polkadot-sdk that referenced this pull request Jan 21, 2025
…aritytech#6807)

Recovering the PoV can fail in situations where the node has just restarted and the DHT topology hasn't been fully discovered yet, so the current node can't connect to most of its peers. This is bad because gossiping the assignment only requires being connected to a few peers, so the assignment gets distributed, but because we can't recover the PoV we can't approve the candidate, and other nodes will see this as a no-show.

This becomes problematic in the scenario where a lot of nodes restart at the same time, because you end up with a lot of no-shows in the network that are never covered. In that case it makes sense for nodes to retry approving the candidate at a later point in time, and to retry several times if the block containing the candidate still hasn't been approved.

## TODO
- [x] Add a subsystem test.

---------

Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Labels
A4-needs-backport Pull request must be backported to all maintained releases.
8 participants