Optimize ReadCacheLookup, 6.7x speed-up #517

Conversation
Looks good. I'd be curious to know whether the `u8` -> `bool` change does anything, but it seems at worst harmless.

There are still some worst-case scenarios where this isn't fast enough to serialize a full block. I'm working on a more ambitious PR to speed this up further.
Looks good
This is a major hot-spot when serializing with back references. This patch speeds it up by:

- using `IdentityHash`, to avoid hashing the tree-hashes again (a minimal sketch follows this list)
- using `BitVec` instead of `Vec<u8>` to store paths (which only contain 0 and 1 anyway)
Compared to `main`, the relevant benchmarks indicate a time reduction of 80-90% when serializing CLVM with back references.

One concern is that not mixing any random state into the hasher opens it up to hash-collision attacks. I'm not sure whether it's worth addressing, or what the best way would be. Maybe add a random value to the `IdentityHash` object and mix it into the hash value.
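A hedged sketch of that suggestion, assuming a per-map salt drawn from std's `RandomState`; the `SaltedIdentityHash` / `SaltedIdentityState` names are made up for illustration:

```rust
use std::collections::hash_map::RandomState;
use std::hash::{BuildHasher, Hasher};

/// Sketch: identity hash with a per-map random salt mixed in, so bucket
/// placement no longer depends on the key bytes alone.
pub struct SaltedIdentityHash {
    salt: u64,
    state: u64,
}

impl Hasher for SaltedIdentityHash {
    fn finish(&self) -> u64 {
        self.state
    }

    fn write(&mut self, bytes: &[u8]) {
        let key = u64::from_le_bytes(bytes[..8].try_into().expect("expected a 32-byte key"));
        // XOR keeps the hasher essentially free while adding the random value.
        self.state = key ^ self.salt;
    }
}

pub struct SaltedIdentityState {
    salt: u64,
}

impl Default for SaltedIdentityState {
    fn default() -> Self {
        // Derive one random salt per map from std's randomly keyed SipHash.
        Self {
            salt: RandomState::new().build_hasher().finish(),
        }
    }
}

impl BuildHasher for SaltedIdentityState {
    type Hasher = SaltedIdentityHash;

    fn build_hasher(&self) -> SaltedIdentityHash {
        SaltedIdentityHash {
            salt: self.salt,
            state: 0,
        }
    }
}
```

Note that XOR-ing (or adding) a salt only scrambles which bucket a given prefix lands in; keys whose first 8 bytes already collide still collide, so this would mitigate accidental clustering more than deliberate grinding.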