Fix erroneously double incremented and decremented consumers #2037
Comments
Option 2 will be useful, and we can easily convert it into a try_runtime invariant to ensure consistency in the future.
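For context, such an invariant could look roughly like this (a minimal sketch, assuming some mechanism to recompute the expected count per account; the function name and the `recount` parameter are hypothetical, not an existing API):

```rust
/// Sketch of a try-runtime style invariant: the consumer count stored for
/// every account must match a recomputed value. `recount` is a placeholder
/// for whatever recomputes the expected count (e.g. a per-pallet trait).
#[cfg(feature = "try-runtime")]
fn check_consumer_counts<T: frame_system::Config>(
	recount: impl Fn(&T::AccountId) -> u32,
) -> Result<(), sp_runtime::TryRuntimeError> {
	for (who, info) in frame_system::Account::<T>::iter() {
		frame_support::ensure!(
			info.consumers == recount(&who),
			"consumer reference count out of sync",
		);
	}
	Ok(())
}
```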
…ng consumers (paritytech#1976) Closes paritytech#1970 Follow up issue to tackle, once the erroneous double incrementing/decrementing has stopped: paritytech#2037
Has there been any progress/update on this? We have recently found that pools are failing to be destroyed because of this bug.
I think it would help to have #4398 deployed before we start writing migrations for the references...
Can I take this issue? @ggwpez @liamaharon
I think there is no clear way forward yet. I guess the …
I think the way forward is to scrape bad …
Could work, but we have this issue not just on Polkadot, but presumably on all chains...
Yes, write a script that replays transactions and checks after every lock/unlock whether there was an erroneous increment/decrement. Then it crafts a migration that can be inserted into the runtime to fix them.
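The kind of migration such a script could emit might look roughly like this (a minimal sketch, assuming the script produces a static list of over-counted accounts; `FixConsumerRefs` and `BadAccounts` are hypothetical names, and a real migration would need stronger safety checks than shown here):

```rust
use frame_support::{traits::{Get, OnRuntimeUpgrade}, weights::Weight};
use sp_std::{marker::PhantomData, vec::Vec};

/// Sketch of a one-off migration that drops a single spurious consumer
/// reference from a pre-computed list of accounts. `BadAccounts` would be a
/// `Get<Vec<AccountId>>` generated by the off-chain replay script; it is a
/// hypothetical type, not something that exists in the codebase.
pub struct FixConsumerRefs<T, BadAccounts>(PhantomData<(T, BadAccounts)>);

impl<T, BadAccounts> OnRuntimeUpgrade for FixConsumerRefs<T, BadAccounts>
where
	T: frame_system::Config,
	BadAccounts: Get<Vec<T::AccountId>>,
{
	fn on_runtime_upgrade() -> Weight {
		let accounts = BadAccounts::get();
		for who in &accounts {
			// Defensively skip accounts whose counter already hit zero; a real
			// migration would need stronger checks, e.g. comparing against a
			// recomputed expected count.
			if frame_system::Pallet::<T>::consumers(who) > 0 {
				frame_system::Pallet::<T>::dec_consumers(who);
			}
		}
		T::DbWeight::get().reads_writes(accounts.len() as u64, accounts.len() as u64)
	}
}
```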
…ce on the pool account (#4503) addresses #4440 (will close once we have this in prod runtimes). related: #2037. An extra consumer reference is preventing pools to be destroyed. When a pool is ready to be destroyed, we can safely clear the consumer references if any. Notably, I only check for one extra consumer reference since that is a known bug. Anything more indicates possibly another issue and we probably don't want to silently absorb those errors as well. After this change, pools with extra consumer reference should be able to destroy normally.
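Conceptually, that change amounts to something like the following when a pool is dissolved (a rough sketch only; the function name and the exact condition are illustrative, not the actual pallet-nomination-pools code):

```rust
/// Illustrative only: when a pool is being dissolved, drop at most one
/// spurious consumer reference from its bonded account so the account can be
/// reaped. Exactly one extra reference is the known bug; a larger count may
/// indicate a different problem and is deliberately left untouched.
fn clear_one_extra_consumer<T: frame_system::Config>(pool_account: &T::AccountId) {
	if frame_system::Pallet::<T>::consumers(pool_account) == 1 {
		frame_system::Pallet::<T>::dec_consumers(pool_account);
	}
}
```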
Is there any news about allowing currently 'destroying' pools to be destroyed?
#4503, I suppose. Should be in the SDK 1.13 release.
Pools can be destroyed now. I believe this issue can be closed. Example: 14E8ZGhchDwhB3igyEudWJ5j3gPGKwT5ki3yStUy3XbgdkwX
There was a fix for pools (#4503), but the general issue is not addressed by it.
I'm curious how progress on this issue is coming along. We've had several users reach out about being unable to fully empty their accounts, and I'd like to provide them with an update. Any information would be helpful!
The only thing remaining is the ED? There is no straightforward fix for this historic issue. Fixing this properly would require quite some effort, I think more than we currently have available. Maybe @kianenigma has some ideas.
Can you give us an idea of how many users are affected by this, and what the total amount of DOT is that will be "locked" while this is unfixed? You can probably do this by writing a script that scans all accounts and checks which ones cannot be killed now. This will help us prioritize this properly.
I'm more than happy to communicate the solution to affected users, but locating every affected account may not fall entirely within my scope. Let me know if I can help in any other way.
Needs to be implemented once #1976 is merged and released.
Two options have been floated: a `fn fix_consumers(origin, who: AccountId)` extrinsic in the System pallet, with an additional `frame_system::Config::CountConsumers` trait exposing `fn count_consumers(who: &AccountId) -> u32`, which could possibly be written on a per-pallet basis, composed in a tuple, and used by `AllPalletsWithSystem`.
Also see #1970.
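A rough sketch of how that could look (illustrative only; the trait shape, the tuple composition, and the reconciliation step are assumptions drawn from the description above, not an agreed design):

```rust
/// Hypothetical trait from the description above: each pallet reports how
/// many consumer references it currently holds on `who`.
pub trait CountConsumers<AccountId> {
	fn count_consumers(who: &AccountId) -> u32;
}

// Per-pallet implementations could be composed in a tuple by summing the
// counts, using the `impl-trait-for-tuples` crate that FRAME already
// depends on (the same pattern other `AllPalletsWithSystem` hooks use).
#[impl_trait_for_tuples::impl_for_tuples(30)]
impl<AccountId> CountConsumers<AccountId> for Tuple {
	fn count_consumers(who: &AccountId) -> u32 {
		let mut total = 0u32;
		for_tuples!( #( total = total.saturating_add(Tuple::count_consumers(who)); )* );
		total
	}
}
```

`fix_consumers(origin, who)` would then recompute the expected count via this trait, compare it with `frame_system::Pallet::<T>::consumers(&who)`, and adjust the stored value accordingly.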