Think I found some other things that could be eliminated - in progress
I thought of maybe using the fact that extrinsics are executed in order => maybe replacing
The latest update of this PR - I'm adding results for an 'average' case, where every block:
With CT enabled, importing a block takes ~0.00399s. When digests are created every 512 blocks, importing the digest block takes ~0.35114s. When digests are created every 1024 blocks, it takes ~0.69611s. As a conclusion, I think that before enabling CT for the testnet:
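The figures above imply that the expensive digest block adds very little per-block cost on average. As a rough sanity check (a back-of-the-envelope sketch using the reported numbers, not code from this PR):

```rust
// Rough amortized-cost check based on the figures reported above (taken
// at face value; this is a back-of-the-envelope sketch, not PR code).
fn amortized_import_time(normal_s: f64, digest_s: f64, interval: u64) -> f64 {
    // Over one full interval: (interval - 1) ordinary blocks + 1 digest block.
    ((interval as f64 - 1.0) * normal_s + digest_s) / interval as f64
}

fn main() {
    // ~0.00399s per ordinary block; ~0.35114s for a digest block at
    // interval 512; ~0.69611s for a digest block at interval 1024.
    let every_512 = amortized_import_time(0.00399, 0.35114, 512);
    let every_1024 = amortized_import_time(0.00399, 0.69611, 1024);
    // Both work out to roughly 0.00467s/block, i.e. the digest block adds
    // well under a millisecond per block on average.
    println!("interval 512:  ~{:.5}s/block", every_512);
    println!("interval 1024: ~{:.5}s/block", every_1024);
}
```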
how would this work?
Something like that, but I have no details for this yet: (1) count the number of keys that have to be covered by the next top-level CT digest; (2) when this crosses a limit => create the digest CT at the next block and emit a signal (digest item) that we have created the top-level digest CT ahead of schedule; (3) reset the counter to zero and go to (1). Configuration changes should also be announced to light clients using a special digest item.
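Steps (1)-(3) could be sketched like this (a hypothetical illustration only; the names `DigestScheduler` and `Action` are invented and do not exist in the codebase):

```rust
// Hypothetical sketch of the adaptive scheme described above; all names
// here are invented for illustration.
#[derive(Debug, PartialEq)]
enum Action {
    // Not enough accumulated changes yet - keep waiting.
    Wait,
    // Build the top-level digest CT at the next block and emit a digest
    // item announcing that it was created ahead of schedule.
    BuildDigestAndAnnounce,
}

struct DigestScheduler {
    // (1) count of changes to be covered by the next top-level digest.
    pending: u64,
    // Configured limit that forces an early digest when crossed.
    limit: u64,
}

impl DigestScheduler {
    fn on_block(&mut self, new_changes: u64) -> Action {
        self.pending += new_changes;
        if self.pending >= self.limit {
            // (2) limit crossed => build digest + announce;
            // (3) reset the counter and go back to (1).
            self.pending = 0;
            Action::BuildDigestAndAnnounce
        } else {
            Action::Wait
        }
    }
}

fn main() {
    let mut scheduler = DigestScheduler { pending: 0, limit: 100 };
    assert_eq!(scheduler.on_block(40), Action::Wait);
    // 40 + 70 = 110 >= 100 => early digest is scheduled...
    assert_eq!(scheduler.on_block(70), Action::BuildDigestAndAnnounce);
    // ...and the counter is reset for the next round.
    assert_eq!(scheduler.pending, 0);
}
```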
Most probably "number of keys" -> "number of changes trie input pairs" that would have to be read to build the top-level digest block. Because it is impossible to have
ok - makes sense. Might it be possible to do incremental CT builds to spread the cost over all the blocks, rather than have a single block that takes ages?
I've considered this at the very beginning - i.e. you know that the next digest trie will be built at block #1024 => while the tries for the previous 1023 blocks are being built, you also update the trie for block #1024 => when you reach #1024 you already have an almost-complete digest trie. But then I thought about forks - i.e. when you have forked at #500 and then fork1 also forks at #700, you'll have 4 potential future digest tries that have to be updated on every block. That seems too heavy (performance-wise) and too complex (code-wise) to maintain, so I dropped the idea.
ok
This looks good, although I do not understand this code very well.
I'd like to enable changes tries in the default (staging) chain specification in the near future, so that our next testnet will have them enabled. But beforehand some statistics should be collected to choose appropriate defaults for `digest_interval` and `digest_levels`. I've tested several configurations already to estimate the maximal number of changes that could be covered by top-level digests without significant performance drops. Some rough measurements (made without the changes in this PR) are at the end of the description (fwiw).

What I've found during testing is that there are a few easy-to-implement optimizations. This PR cuts block-with-CT (changes trie) import time by ~30% by:

- `build.rs`;
- replacing `HashSet` with `BTreeSet` in `OverlayedValue::extrinsics`, because we had to sort this (by converting to `BTreeSet`) anyway;
- instead of computing `trie_root(input_pairs)` and then actually building the CT (to store in the DB), we now build the CT directly (the root is computed there anyway).

I have another optimization implemented - an in-memory cache that holds the data required to build digest tries (instead of reading lower-level CTs from the DB). But: (1) I'm not sure about the cases where it actually helps significantly - need to confirm that yet; (2) this should be a separate PR.
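The `BTreeSet` point can be illustrated in isolation (a minimal sketch; the real `OverlayedValue` stores more than this, and the field shape here is simplified for illustration):

```rust
use std::collections::BTreeSet;

// Simplified stand-in for OverlayedValue::extrinsics: the indices of
// extrinsics that changed a given storage key within the block.
struct OverlayedValue {
    extrinsics: BTreeSet<u32>,
}

fn main() {
    let mut value = OverlayedValue { extrinsics: BTreeSet::new() };
    // Indices may be recorded more than once...
    for idx in [3u32, 1, 2, 1] {
        value.extrinsics.insert(idx);
    }
    // ...but a BTreeSet iterates in sorted order with duplicates removed,
    // so the changes-trie input can be built without the extra
    // convert-HashSet-to-BTreeSet sorting pass.
    let sorted: Vec<u32> = value.extrinsics.iter().copied().collect();
    assert_eq!(sorted, vec![1, 2, 3]);
}
```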
==========================================================================
In a chain where every block changes 1000 different keys:
In a chain where every block changes 5000 different keys: