
Enable maintenance pallets for the testnet #1336

Merged
merged 7 commits into main from bear-maintain-pallets on Dec 7, 2023

Conversation

boundless-forest
Member

@boundless-forest boundless-forest commented Dec 4, 2023

This PR introduces pallet-tx-pause, one of the two maintenance pallets in the upstream polkadot-sdk repo, to the Pangolin network. It allows pausing certain dispatch calls in case of emergency. Reviewers should pay particular attention to the two configured origins and make sure they are secure enough before this goes into production. As for why pallet-safe-mode is not added, there are two reasons:

  1. In safe-mode, the enter call is permissionless, which means anyone with enough funds above the limit can put the system into the safe-mode state. I don't think that is a secure enough solution for our system right now. If the max deposit value is not set appropriately, the system is at risk.
  2. Based on the maintenance experience we've had so far, tx-pause is enough IMO. The most common scenario is that some dispatch calls turn out to be vulnerable, and pausing those calls is sufficient.
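The semantics described above can be sketched with a minimal, self-contained model in plain Rust (this is an illustrative sketch, not the actual pallet-tx-pause API: the type `TxPauseModel` and its methods are hypothetical): a privileged origin pauses `(pallet, call)` names, and whitelisted calls can never be paused.

```rust
use std::collections::HashSet;

/// Hypothetical model of tx-pause semantics, not the real pallet:
/// paused calls are rejected, whitelisted calls are unpausable.
struct TxPauseModel {
    paused: HashSet<(String, String)>,
    whitelist: HashSet<(String, String)>,
}

impl TxPauseModel {
    fn new() -> Self {
        Self { paused: HashSet::new(), whitelist: HashSet::new() }
    }

    /// Pause a dispatch call by (pallet, call) name.
    /// Fails for whitelisted calls, mirroring the pallet's `WhitelistedCalls` filter.
    fn pause(&mut self, pallet: &str, call: &str) -> Result<(), &'static str> {
        let key = (pallet.to_string(), call.to_string());
        if self.whitelist.contains(&key) {
            return Err("cannot pause a whitelisted call");
        }
        self.paused.insert(key);
        Ok(())
    }

    /// Would this call currently be filtered out?
    fn is_paused(&self, pallet: &str, call: &str) -> bool {
        self.paused.contains(&(pallet.to_string(), call.to_string()))
    }
}
```

By contrast, a safe-mode model would let *any* account with a sufficient deposit flip a global flag, which is the risk described in point 1.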

@boundless-forest boundless-forest linked an issue Dec 6, 2023 that may be closed by this pull request
@boundless-forest boundless-forest marked this pull request as ready for review December 6, 2023 09:45

github-actions bot commented Dec 6, 2023

Check 712cb62 crab-dev

Check runtime version

RuntimeVersion {
    spec_name: "Crab2",
    impl_name: "DarwiniaOfficialRust",
    authoring_version: 0,
-   spec_version: 6501,
+   spec_version: 6510,
    impl_version: 0,
    transaction_version: 0,
    state_version: 0,
}

Check runtime storage

+ Pallet: "ConvictionVoting"
- Pallet: "PhragmenElection"
+ Pallet: "Referenda"
- Pallet: "TechnicalMembership"
+ Pallet: "Whitelist"


github-actions bot commented Dec 6, 2023

Check 712cb62 pangolin-dev

Check runtime version

RuntimeVersion {
    spec_name: "Pangolin2",
    impl_name: "DarwiniaOfficialRust",
    authoring_version: 0,
-   spec_version: 6502,
+   spec_version: 6510,
    impl_version: 0,
    transaction_version: 0,
    state_version: 0,
}

Check runtime storage

+ Pallet: "TxPause"


github-actions bot commented Dec 6, 2023

Check 712cb62 pangoro-dev

Check runtime version

RuntimeVersion {
    spec_name: "Pangoro2",
    impl_name: "DarwiniaOfficialRust",
    authoring_version: 0,
-   spec_version: 6405,
+   spec_version: 6510,
    impl_version: 0,
    transaction_version: 0,
    state_version: 0,
}

Check runtime storage

Pallet AccountMigration
+ Entry: StorageEntryMetadata { name: "Accounts", modifier: Optional, ty: Map { hashers: [Blake2_128Concat], key: UntrackedSymbol { id: 41, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 3, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0], docs: [" [`frame_system::Account`] data.", "", " <https://github.dev/paritytech/substrate/blob/19162e43be45817b44c7d48e50d03f074f60fbf4/frame/system/src/lib.rs#L545>"] }
- Entry: StorageEntryMetadata { name: "Accounts", modifier: Optional, ty: Map { hashers: [Blake2_128Concat], key: UntrackedSymbol { id: 41, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 3, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0], docs: [" [`frame_system::Account`] data.", "", " <https://github.dev/paritytech/substrate/blob/19162e43be45817b44c7d48e50d03f074f60fbf4/frame/system/src/lib.rs#L545>"] }
+ Entry: StorageEntryMetadata { name: "Ledgers", modifier: Optional, ty: Map { hashers: [Blake2_128Concat], key: UntrackedSymbol { id: 41, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 312, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0], docs: [" [`darwinia_staking::Ledgers`] data."] }
- Entry: StorageEntryMetadata { name: "Ledgers", modifier: Optional, ty: Map { hashers: [Blake2_128Concat], key: UntrackedSymbol { id: 41, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 301, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0], docs: [" [`darwinia_staking::Ledgers`] data."] }

Pallet AuraExt
+ Entry: StorageEntryMetadata { name: "Authorities", modifier: Default, ty: Plain(UntrackedSymbol { id: 346, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0], docs: [" Serves as cache for the authorities.", "", " The authorities in AuRa are overwritten in `on_initialize` when we switch to a new session,", " but we require the old authorities to verify the seal when validating a PoV. This will", " always be updated to the latest AuRa authorities in `on_finalize`."] }
- Entry: StorageEntryMetadata { name: "Authorities", modifier: Default, ty: Plain(UntrackedSymbol { id: 333, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0], docs: [" Serves as cache for the authorities.", "", " The authorities in AuRa are overwritten in `on_initialize` when we switch to a new session,", " but we require the old authorities to verify the seal when validating a PoV. This will always", " be updated to the latest AuRa authorities in `on_finalize`."] }
+ Entry: StorageEntryMetadata { name: "SlotInfo", modifier: Optional, ty: Plain(UntrackedSymbol { id: 349, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0], docs: [" Current slot paired with a number of authored blocks.", "", " Updated on each block initialization."] }

Pallet DarwiniaStaking
+ Entry: StorageEntryMetadata { name: "AuthoredBlocksCount", modifier: Default, ty: Plain(UntrackedSymbol { id: 330, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0, 0, 0, 0, 0], docs: [" Number of blocks authored by the collator within current session."] }
+ Entry: StorageEntryMetadata { name: "ExposureCache0", modifier: Optional, ty: Map { hashers: [Twox64Concat], key: UntrackedSymbol { id: 0, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 327, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0], docs: [" Exposure cache 0."] }
+ Entry: StorageEntryMetadata { name: "ExposureCache1", modifier: Optional, ty: Map { hashers: [Twox64Concat], key: UntrackedSymbol { id: 0, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 327, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0], docs: [" Exposure cache 1."] }
+ Entry: StorageEntryMetadata { name: "ExposureCache2", modifier: Optional, ty: Map { hashers: [Twox64Concat], key: UntrackedSymbol { id: 0, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 327, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0], docs: [" Exposure cache 2."] }
+ Entry: StorageEntryMetadata { name: "ExposureCacheStates", modifier: Default, ty: Plain(UntrackedSymbol { id: 325, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0, 1, 2], docs: [" Exposure cache states.", "", " To avoid extra DB RWs during new session, such as:", " ```nocompile", " previous = current;", " current = next;", " next = elect();", " ```", "", " Now, with data:", " ```nocompile", " cache1 == previous;", " cache2 == current;", " cache3 == next;", " ```", " Just need to shift the marker and write the storage map once:", " ```nocompile", " mark(cache3, current);", " mark(cache2, previous);", " mark(cache1, next);", " cache1 = elect();", " ```"] }
- Entry: StorageEntryMetadata { name: "Exposures", modifier: Optional, ty: Map { hashers: [Twox64Concat], key: UntrackedSymbol { id: 0, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 314, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0], docs: [" Current stakers' exposure."] }
+ Entry: StorageEntryMetadata { name: "Ledgers", modifier: Optional, ty: Map { hashers: [Blake2_128Concat], key: UntrackedSymbol { id: 0, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 312, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0], docs: [" All staking ledgers."] }
- Entry: StorageEntryMetadata { name: "Ledgers", modifier: Optional, ty: Map { hashers: [Blake2_128Concat], key: UntrackedSymbol { id: 0, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 301, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0], docs: [" All staking ledgers."] }
- Entry: StorageEntryMetadata { name: "NextExposures", modifier: Optional, ty: Map { hashers: [Twox64Concat], key: UntrackedSymbol { id: 0, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 314, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0], docs: [" Next stakers' exposure."] }
+ Entry: StorageEntryMetadata { name: "PendingRewards", modifier: Optional, ty: Map { hashers: [Twox64Concat], key: UntrackedSymbol { id: 0, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 6, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0], docs: [" All outstanding rewards since the last payment."] }
- Entry: StorageEntryMetadata { name: "RewardPoints", modifier: Default, ty: Plain(UntrackedSymbol { id: 317, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0, 0, 0, 0, 0], docs: [" Collator's reward points."] }
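The `ExposureCacheStates` docs above describe rotating three cache roles by shifting markers instead of rewriting the storage maps. A minimal, self-contained sketch of that rotation (plain Rust, using hypothetical names rather than the pallet's actual types):

```rust
/// Role each physical exposure cache currently plays (hypothetical model).
#[derive(Clone, Copy, PartialEq, Debug)]
enum CacheState {
    Previous,
    Current,
    Next,
}

/// Rotate the role markers on new session, as the docs describe:
/// the cache holding `Next` becomes `Current`, `Current` becomes
/// `Previous`, and `Previous` becomes `Next` (to be refilled by election).
/// No map data is copied; only three markers are rewritten.
fn rotate(states: &mut [CacheState; 3]) {
    for s in states.iter_mut() {
        *s = match *s {
            CacheState::Previous => CacheState::Next,
            CacheState::Current => CacheState::Previous,
            CacheState::Next => CacheState::Current,
        };
    }
}
```

This is why the default value `[0, 1, 2]` in the metadata above encodes the three role markers rather than any exposure data.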

Pallet EcdsaAuthority
+ Entry: StorageEntryMetadata { name: "MessageRootToSign", modifier: Optional, ty: Plain(UntrackedSymbol { id: 354, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0], docs: [" The incoming message root waiting for signing."] }
- Entry: StorageEntryMetadata { name: "MessageRootToSign", modifier: Optional, ty: Plain(UntrackedSymbol { id: 340, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0], docs: [" The incoming message root waiting for signing."] }

Pallet ParachainInfo
+ Entry: StorageEntryMetadata { name: "ParachainId", modifier: Default, ty: Plain(UntrackedSymbol { id: 84, marker: PhantomData<fn() -> core::any::TypeId> }), default: [100, 0, 0, 0], docs: [] }
- Entry: StorageEntryMetadata { name: "ParachainId", modifier: Default, ty: Plain(UntrackedSymbol { id: 84, marker: PhantomData<fn() -> core::any::TypeId> }), default: [100, 0, 0, 0], docs: [] }

Pallet ParachainSystem
+ Entry: StorageEntryMetadata { name: "AggregatedUnincludedSegment", modifier: Optional, ty: Plain(UntrackedSymbol { id: 207, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0], docs: [" Storage field that keeps track of bandwidth used by the unincluded segment along with the", " latest the latest HRMP watermark. Used for limiting the acceptance of new blocks with", " respect to relay chain constraints."] }
+ Entry: StorageEntryMetadata { name: "CustomValidationHeadData", modifier: Optional, ty: Plain(UntrackedSymbol { id: 14, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0], docs: [" A custom head data that should be returned as result of `validate_block`.", "", " See `Pallet::set_custom_validation_head_data` for more information."] }
- Entry: StorageEntryMetadata { name: "CustomValidationHeadData", modifier: Optional, ty: Plain(UntrackedSymbol { id: 14, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0], docs: [" A custom head data that should be returned as result of `validate_block`.", "", " See [`Pallet::set_custom_validation_head_data`] for more information."] }
+ Entry: StorageEntryMetadata { name: "HostConfiguration", modifier: Optional, ty: Plain(UntrackedSymbol { id: 219, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0], docs: [" The parachain host configuration that was obtained from the relay parent.", "", " This field is meant to be updated each block with the validation data inherent. Therefore,", " before processing of the inherent, e.g. in `on_initialize` this data may be stale.", "", " This data is also absent from the genesis."] }
- Entry: StorageEntryMetadata { name: "HostConfiguration", modifier: Optional, ty: Plain(UntrackedSymbol { id: 209, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0], docs: [" The parachain host configuration that was obtained from the relay parent.", "", " This field is meant to be updated each block with the validation data inherent. Therefore,", " before processing of the inherent, e.g. in `on_initialize` this data may be stale.", "", " This data is also absent from the genesis."] }
+ Entry: StorageEntryMetadata { name: "HrmpOutboundMessages", modifier: Default, ty: Plain(UntrackedSymbol { id: 225, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0], docs: [" HRMP messages that were sent in a block.", "", " This will be cleared in `on_initialize` of each new block."] }
- Entry: StorageEntryMetadata { name: "HrmpOutboundMessages", modifier: Default, ty: Plain(UntrackedSymbol { id: 214, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0], docs: [" HRMP messages that were sent in a block.", "", " This will be cleared in `on_initialize` of each new block."] }
+ Entry: StorageEntryMetadata { name: "LastHrmpMqcHeads", modifier: Default, ty: Plain(UntrackedSymbol { id: 222, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0], docs: [" The message queue chain heads we have observed per each channel incoming channel.", "", " This value is loaded before and saved after processing inbound downward messages carried", " by the system inherent."] }
- Entry: StorageEntryMetadata { name: "LastHrmpMqcHeads", modifier: Default, ty: Plain(UntrackedSymbol { id: 211, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0], docs: [" The message queue chain heads we have observed per each channel incoming channel.", "", " This value is loaded before and saved after processing inbound downward messages carried", " by the system inherent."] }
+ Entry: StorageEntryMetadata { name: "PendingValidationCode", modifier: Default, ty: Plain(UntrackedSymbol { id: 14, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0], docs: [" In case of a scheduled upgrade, this storage field contains the validation code to be", " applied.", "", " As soon as the relay chain gives us the go-ahead signal, we will overwrite the", " [`:code`][sp_core::storage::well_known_keys::CODE] which will result the next block process", " with the new validation code. This concludes the upgrade process."] }
- Entry: StorageEntryMetadata { name: "PendingValidationCode", modifier: Default, ty: Plain(UntrackedSymbol { id: 14, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0], docs: [" In case of a scheduled upgrade, this storage field contains the validation code to be applied.", "", " As soon as the relay chain gives us the go-ahead signal, we will overwrite the [`:code`][well_known_keys::CODE]", " which will result the next block process with the new validation code. This concludes the upgrade process.", "", " [well_known_keys::CODE]: sp_core::storage::well_known_keys::CODE"] }
+ Entry: StorageEntryMetadata { name: "RelevantMessagingState", modifier: Optional, ty: Plain(UntrackedSymbol { id: 214, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0], docs: [" The snapshot of some state related to messaging relevant to the current parachain as per", " the relay parent.", "", " This field is meant to be updated each block with the validation data inherent. Therefore,", " before processing of the inherent, e.g. in `on_initialize` this data may be stale.", "", " This data is also absent from the genesis."] }
- Entry: StorageEntryMetadata { name: "RelevantMessagingState", modifier: Optional, ty: Plain(UntrackedSymbol { id: 203, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0], docs: [" The snapshot of some state related to messaging relevant to the current parachain as per", " the relay parent.", "", " This field is meant to be updated each block with the validation data inherent. Therefore,", " before processing of the inherent, e.g. in `on_initialize` this data may be stale.", "", " This data is also absent from the genesis."] }
+ Entry: StorageEntryMetadata { name: "UnincludedSegment", modifier: Default, ty: Plain(UntrackedSymbol { id: 197, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0], docs: [" Latest included block descendants the runtime accepted. In other words, these are", " ancestors of the currently executing block which have not been included in the observed", " relay-chain state.", "", " The segment length is limited by the capacity returned from the [`ConsensusHook`] configured", " in the pallet."] }
+ Entry: StorageEntryMetadata { name: "UpgradeGoAhead", modifier: Default, ty: Plain(UntrackedSymbol { id: 205, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0], docs: [" Optional upgrade go-ahead signal from the relay-chain.", "", " This storage item is a mirror of the corresponding value for the current parachain from the", " relay-chain. This value is ephemeral which means it doesn't hit the storage. This value is", " set after the inherent."] }
+ Entry: StorageEntryMetadata { name: "UpgradeRestrictionSignal", modifier: Default, ty: Plain(UntrackedSymbol { id: 210, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0], docs: [" An option which indicates if the relay-chain restricts signalling a validation code upgrade.", " In other words, if this is `Some` and [`NewValidationCode`] is `Some` then the produced", " candidate will be invalid.", "", " This storage item is a mirror of the corresponding value for the current parachain from the", " relay-chain. This value is ephemeral which means it doesn't hit the storage. This value is", " set after the inherent."] }
- Entry: StorageEntryMetadata { name: "UpgradeRestrictionSignal", modifier: Default, ty: Plain(UntrackedSymbol { id: 199, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0], docs: [" An option which indicates if the relay-chain restricts signalling a validation code upgrade.", " In other words, if this is `Some` and [`NewValidationCode`] is `Some` then the produced", " candidate will be invalid.", "", " This storage item is a mirror of the corresponding value for the current parachain from the", " relay-chain. This value is ephemeral which means it doesn't hit the storage. This value is", " set after the inherent."] }
+ Entry: StorageEntryMetadata { name: "ValidationData", modifier: Optional, ty: Plain(UntrackedSymbol { id: 208, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0], docs: [" The [`PersistedValidationData`] set for this block.", " This value is expected to be set only once per block and it's never stored", " in the trie."] }
- Entry: StorageEntryMetadata { name: "ValidationData", modifier: Optional, ty: Plain(UntrackedSymbol { id: 197, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0], docs: [" The [`PersistedValidationData`] set for this block.", " This value is expected to be set only once per block and it's never stored", " in the trie."] }

Pallet PolkadotXcm
+ Entry: StorageEntryMetadata { name: "LockedFungibles", modifier: Optional, ty: Map { hashers: [Blake2_128Concat], key: UntrackedSymbol { id: 0, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 564, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0], docs: [" Fungible assets which we know are locked on this chain."] }
- Entry: StorageEntryMetadata { name: "LockedFungibles", modifier: Optional, ty: Map { hashers: [Blake2_128Concat], key: UntrackedSymbol { id: 0, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 551, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0], docs: [" Fungible assets which we know are locked on this chain."] }
+ Entry: StorageEntryMetadata { name: "RemoteLockedFungibles", modifier: Optional, ty: Map { hashers: [Twox64Concat, Blake2_128Concat, Blake2_128Concat], key: UntrackedSymbol { id: 558, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 560, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0], docs: [" Fungible assets which we know are locked on a remote chain."] }
- Entry: StorageEntryMetadata { name: "RemoteLockedFungibles", modifier: Optional, ty: Map { hashers: [Twox64Concat, Blake2_128Concat, Blake2_128Concat], key: UntrackedSymbol { id: 545, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 547, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0], docs: [" Fungible assets which we know are locked on a remote chain."] }
+ Entry: StorageEntryMetadata { name: "SupportedVersion", modifier: Optional, ty: Map { hashers: [Twox64Concat, Blake2_128Concat], key: UntrackedSymbol { id: 551, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 4, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0], docs: [" The Latest versions that we know various locations support."] }
- Entry: StorageEntryMetadata { name: "SupportedVersion", modifier: Optional, ty: Map { hashers: [Twox64Concat, Blake2_128Concat], key: UntrackedSymbol { id: 538, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 4, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0], docs: [" The Latest versions that we know various locations support."] }
+ Entry: StorageEntryMetadata { name: "VersionDiscoveryQueue", modifier: Default, ty: Plain(UntrackedSymbol { id: 553, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0], docs: [" Destinations whose latest XCM version we would like to know. Duplicates not allowed, and", " the `u32` counter is the number of times that a send to the destination has been attempted,", " which is used as a prioritization."] }
- Entry: StorageEntryMetadata { name: "VersionDiscoveryQueue", modifier: Default, ty: Plain(UntrackedSymbol { id: 540, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0], docs: [" Destinations whose latest XCM version we would like to know. Duplicates not allowed, and", " the `u32` counter is the number of times that a send to the destination has been attempted,", " which is used as a prioritization."] }
+ Entry: StorageEntryMetadata { name: "VersionNotifiers", modifier: Optional, ty: Map { hashers: [Twox64Concat, Blake2_128Concat], key: UntrackedSymbol { id: 551, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 11, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0], docs: [" All locations that we have requested version notifications from."] }
- Entry: StorageEntryMetadata { name: "VersionNotifiers", modifier: Optional, ty: Map { hashers: [Twox64Concat, Blake2_128Concat], key: UntrackedSymbol { id: 538, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 11, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0], docs: [" All locations that we have requested version notifications from."] }
+ Entry: StorageEntryMetadata { name: "VersionNotifyTargets", modifier: Optional, ty: Map { hashers: [Twox64Concat, Blake2_128Concat], key: UntrackedSymbol { id: 551, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 552, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0], docs: [" The target locations that are subscribed to our version changes, as well as the most recent", " of our versions we informed them of."] }
- Entry: StorageEntryMetadata { name: "VersionNotifyTargets", modifier: Optional, ty: Map { hashers: [Twox64Concat, Blake2_128Concat], key: UntrackedSymbol { id: 538, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 539, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0], docs: [" The target locations that are subscribed to our version changes, as well as the most recent", " of our versions we informed them of."] }

Pallet System
+ Entry: StorageEntryMetadata { name: "Account", modifier: Default, ty: Map { hashers: [Blake2_128Concat], key: UntrackedSymbol { id: 0, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 3, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 128], docs: [" The full account information for a particular account ID."] }
- Entry: StorageEntryMetadata { name: "Account", modifier: Default, ty: Map { hashers: [Blake2_128Concat], key: UntrackedSymbol { id: 0, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 3, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 128], docs: [" The full account information for a particular account ID."] }
+ Entry: StorageEntryMetadata { name: "EventTopics", modifier: Default, ty: Map { hashers: [Blake2_128Concat], key: UntrackedSymbol { id: 12, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 179, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0], docs: [" Mapping between a topic (represented by T::Hash) and a vector of indexes", " of events in the `<Events<T>>` list.", "", " All topic vectors have deterministic storage locations depending on the topic. This", " allows light-clients to leverage the changes trie storage tracking mechanism and", " in case of changes fetch the list of events of interest.", "", " The value has the type `(BlockNumberFor<T>, EventIndex)` because if we used only just", " the `EventIndex` then in case if the topic has the same contents on the next block", " no notification will be triggered thus the event might be lost."] }
- Entry: StorageEntryMetadata { name: "EventTopics", modifier: Default, ty: Map { hashers: [Blake2_128Concat], key: UntrackedSymbol { id: 12, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 179, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0], docs: [" Mapping between a topic (represented by T::Hash) and a vector of indexes", " of events in the `<Events<T>>` list.", "", " All topic vectors have deterministic storage locations depending on the topic. This", " allows light-clients to leverage the changes trie storage tracking mechanism and", " in case of changes fetch the list of events of interest.", "", " The value has the type `(T::BlockNumber, EventIndex)` because if we used only just", " the `EventIndex` then in case if the topic has the same contents on the next block", " no notification will be triggered thus the event might be lost."] }

Pallet XcmpQueue
+ Entry: StorageEntryMetadata { name: "InboundXcmpMessages", modifier: Default, ty: Map { hashers: [Blake2_128Concat, Twox64Concat], key: UntrackedSymbol { id: 538, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 14, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0], docs: [" Inbound aggregate XCMP messages. It can only be one per ParaId/block."] }
- Entry: StorageEntryMetadata { name: "InboundXcmpMessages", modifier: Default, ty: Map { hashers: [Blake2_128Concat, Twox64Concat], key: UntrackedSymbol { id: 525, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 14, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0], docs: [" Inbound aggregate XCMP messages. It can only be one per ParaId/block."] }
+ Entry: StorageEntryMetadata { name: "InboundXcmpStatus", modifier: Default, ty: Plain(UntrackedSymbol { id: 532, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0], docs: [" Status of the inbound XCMP channels."] }
- Entry: StorageEntryMetadata { name: "InboundXcmpStatus", modifier: Default, ty: Plain(UntrackedSymbol { id: 519, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0], docs: [" Status of the inbound XCMP channels."] }
+ Entry: StorageEntryMetadata { name: "OutboundXcmpMessages", modifier: Default, ty: Map { hashers: [Blake2_128Concat, Twox64Concat], key: UntrackedSymbol { id: 542, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 14, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0], docs: [" The messages outbound in a given XCMP channel."] }
- Entry: StorageEntryMetadata { name: "OutboundXcmpMessages", modifier: Default, ty: Map { hashers: [Blake2_128Concat, Twox64Concat], key: UntrackedSymbol { id: 529, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 14, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0], docs: [" The messages outbound in a given XCMP channel."] }
+ Entry: StorageEntryMetadata { name: "OutboundXcmpStatus", modifier: Default, ty: Plain(UntrackedSymbol { id: 539, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0], docs: [" The non-empty XCMP channels in order of becoming non-empty, and the index of the first", " and last outbound message. If the two indices are equal, then it indicates an empty", " queue and there must be a non-`Ok` `OutboundStatus`. We assume queues grow no greater", " than 65535 items. Queue indices for normal messages begin at one; zero is reserved in", " case of the need to send a high-priority signal message this block.", " The bool is true if there is a signal message waiting to be sent."] }
- Entry: StorageEntryMetadata { name: "OutboundXcmpStatus", modifier: Default, ty: Plain(UntrackedSymbol { id: 526, marker: PhantomData<fn() -> core::any::TypeId> }), default: [0], docs: [" The non-empty XCMP channels in order of becoming non-empty, and the index of the first", " and last outbound message. If the two indices are equal, then it indicates an empty", " queue and there must be a non-`Ok` `OutboundStatus`. We assume queues grow no greater", " than 65535 items. Queue indices for normal messages begin at one; zero is reserved in", " case of the need to send a high-priority signal message this block.", " The bool is true if there is a signal message waiting to be sent."] }
+ Entry: StorageEntryMetadata { name: "Overweight", modifier: Optional, ty: Map { hashers: [Twox64Concat], key: UntrackedSymbol { id: 11, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 544, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0], docs: [" The messages that exceeded max individual message weight budget.", "", " These message stay in this storage map until they are manually dispatched via", " `service_overweight`."] }
- Entry: StorageEntryMetadata { name: "Overweight", modifier: Optional, ty: Map { hashers: [Twox64Concat], key: UntrackedSymbol { id: 11, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 531, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0], docs: [" The messages that exceeded max individual message weight budget.", "", " These message stay in this storage map until they are manually dispatched via", " `service_overweight`."] }
+ Entry: StorageEntryMetadata { name: "SignalMessages", modifier: Default, ty: Map { hashers: [Blake2_128Concat], key: UntrackedSymbol { id: 84, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 14, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0], docs: [" Any signal messages waiting to be sent."] }
- Entry: StorageEntryMetadata { name: "SignalMessages", modifier: Default, ty: Map { hashers: [Blake2_128Concat], key: UntrackedSymbol { id: 84, marker: PhantomData<fn() -> core::any::TypeId> }, value: UntrackedSymbol { id: 14, marker: PhantomData<fn() -> core::any::TypeId> } }, default: [0], docs: [" Any signal messages waiting to be sent."] }


github-actions bot commented Dec 6, 2023

Check 712cb62 darwinia-dev

Check runtime version

RuntimeVersion {
    spec_name: "Darwinia2",
    impl_name: "DarwiniaOfficialRust",
    authoring_version: 0,
-   spec_version: 6501,
+   spec_version: 6510,
    impl_version: 0,
    transaction_version: 0,
    state_version: 0,
}

Check runtime storage

fn contains(full_name: &pallet_tx_pause::RuntimeCallNameOf<Runtime>) -> bool {
    match (full_name.0.as_slice(), full_name.1.as_slice()) {
        (b"System", b"remark_with_event") => true,
        _ => false,
    }
}
Member

Any specific reason?

Member Author

@boundless-forest boundless-forest Dec 7, 2023

It's used to test that the whitelist is working well. Check out the tx_pause_pause_calls_except_on_whitelist test case. This remark call doesn't affect chain security, so I think it's the perfect test example.

Member

@aurexav aurexav Dec 7, 2023

Yep, I know those tests. They are written in the common runtime, which means all runtimes need to be configured like this.

Member Author

Indeed. Do you think this only needs to be enabled on the testnet? I suggest leaving it as it is for now. If we later add some meaningful dispatch call to the whitelist, we can change the test case to adapt to that situation.

Member

@aurexav aurexav Dec 7, 2023

That could potentially become a problem in the future. Honestly, I'm not sure we should whitelist that call for the mainnet. Can we really assume some calls are of no concern to security?

@aurexav aurexav merged commit 245b850 into main Dec 7, 2023
19 checks passed
@aurexav aurexav deleted the bear-maintain-pallets branch December 7, 2023 06:42
Development

Successfully merging this pull request may close these issues.

Add maintenance pallets
2 participants