staking: remove ad-hoc fixpoint arithmetic #3640
Conversation
AFK, but here's a hypothesis for why the smoke test is failing: the fixes to the reward amount calculation and to the block-based issuance of the distribution component mean that we now have an actually dynamic chain base rate, and therefore dynamic validator exchange rates. This means that delegation tokens are no longer minted at a 1:1 rate for a delegation, which would make the smoke test fail. This could also be a bug.
The smoke test absolutely shouldn't be testing the exact amounts; its purpose is to exercise the commands end to end, not to unit test the arithmetic. We should just delete those checks and add unit tests instead.
Previously, the distribution component would issue a budget that was _not_ denominated in the staking token's base unit. This meant that the staking component had to do an ad-hoc conversion in order to use it, which was confusing (in fact, it caused a bug). Instead of doing that, we now track the issuance budget and apply the staking token's denom unit amount directly.
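As a rough illustration of the intent (the constant and helper below are assumptions for illustration, not the crate's actual API), the denom unit amount is applied once when the budget is produced, so the staking component never needs its own conversion:

```rust
// Hypothetical sketch: the constant and helper are illustrative, not real API.
// Assumes the staking token has 1e6 base units per display unit.
const STAKING_TOKEN_DENOM_UNIT: u128 = 1_000_000;

/// Return the issuance budget already denominated in the staking token's base
/// unit, so downstream consumers can use it without further conversion.
fn issuance_budget_in_base_units(budget_in_staking_tokens: u128) -> u128 {
    budget_in_staking_tokens * STAKING_TOKEN_DENOM_UNIT
}
```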
This is part of an effort to systematize how we perform rate calculations, hopefully making things a little less hairy for the next person. First, we strip the scaling factors; this gives us rates between zero and one. Those rates are applied to `Amount`s of staking tokens or delegation pool tokens. If the calculation is a rate itself (e.g. `RateData::next_epoch`), we re-apply the scaling factor (`FP_SCALING_FACTOR`, i.e. 1e8) so that we can store the rate in an `Amount` in our public interfaces.
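A minimal sketch of that flow, using plain `u128` arithmetic in place of the actual fixed-point type, with hypothetical helper names:

```rust
/// Hypothetical sketch using plain u128s instead of `penumbra_fixnum::U128x128`;
/// the helper names are illustrative, not the crate's API.
const FP_SCALING_FACTOR: u128 = 100_000_000; // 1e8

/// Apply a rate stored with an implicit 1e8 scale to a token amount,
/// multiplying before dividing to limit truncation.
fn apply_rate(scaled_rate: u128, tokens: u128) -> u128 {
    (tokens * scaled_rate) / FP_SCALING_FACTOR
}

/// Compose two scaled rates (e.g. when computing `RateData::next_epoch`):
/// the product is re-scaled by 1e8 so the result can again be stored in an `Amount`.
fn compose_rates(scaled_rate_a: u128, scaled_rate_b: u128) -> u128 {
    (scaled_rate_a * scaled_rate_b) / FP_SCALING_FACTOR
}

fn main() {
    // An exchange rate of 1.05 is stored as 105_000_000 under the implicit 1e8 scale.
    let exchange_rate = 105_000_000u128;
    let delegation_tokens = 1_000_000u128;
    assert_eq!(apply_rate(exchange_rate, delegation_tokens), 1_050_000);
    // Composing 1.05 with a base rate of 1.02 yields 1.071, i.e. 107_100_000.
    assert_eq!(compose_rates(exchange_rate, 102_000_000), 107_100_000);
}
```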
I added many noisy debug statements as part of my testing phase; recording this here for future reference (possibly). Something that came up during testing is that a good way to sanity-check rate calculations is to spin up a testnet with a single validator with 100% commission and a single funding stream. Then, we tune the epoch duration and make sure that no bad conditions are hit (e.g. divide by zero), and that the amount of reward piped to the validator corresponds exactly to the issuance budget for that epoch. This hints at a testing structure we could adopt.
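Stated as an invariant check (the function and its inputs are placeholders for whatever chain-state queries a real harness would use):

```rust
/// Hypothetical invariant check; the arguments stand in for values a real test
/// would read from chain state.
fn check_epoch_issuance_invariant(
    issuance_budget_for_epoch: u128,
    reward_paid_to_sole_validator: u128,
) {
    // With a single validator at 100% commission and a single funding stream,
    // the entire epoch issuance should flow to that validator.
    assert_eq!(reward_paid_to_sole_validator, issuance_budget_for_epoch);
}
```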
This was a holdover from a previous time-based mechanism push. Previously, rewards were computed on the first block of a new epoch. This meant that to compute the rewards for the previous epoch, one had to use the corresponding old rate data. This is why this method was taking both the previous and current rate data as parameters. Moreover,
Ugh. That was it. Merging so as not to block work on funding streams and modularization.
In #3640, we added two helper methods:

- `increase_token_supply`
- `decrease_token_supply`

These methods are somewhat similar, and are objectively more verbose than a single `update_token_supply` method. However, it seems an improvement not to have to do any cognitive work to piece together whether a method call increases or decreases the token supply. Another advantage of this approach is that one can easily use rust-analyzer to filter for places that increase the token supply vs. places where it is decreased.
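A sketch of the shape of that API (the trait and types below are placeholders for illustration, not the crate's actual definitions):

```rust
/// Illustrative sketch only; the trait and types are placeholders, not real API.
trait SupplyWrite {
    /// Increase the recorded token supply by `amount`.
    fn increase_token_supply(&mut self, amount: u128) -> Result<(), String>;
    /// Decrease the recorded token supply by `amount`, failing on underflow.
    fn decrease_token_supply(&mut self, amount: u128) -> Result<(), String>;
}

/// A toy in-memory supply, showing why the split reads more clearly at call
/// sites than a single signed-delta `update_token_supply`.
struct Supply(u128);

impl SupplyWrite for Supply {
    fn increase_token_supply(&mut self, amount: u128) -> Result<(), String> {
        self.0 = self.0.checked_add(amount).ok_or("supply overflow")?;
        Ok(())
    }
    fn decrease_token_supply(&mut self, amount: u128) -> Result<(), String> {
        self.0 = self.0.checked_sub(amount).ok_or("supply underflow")?;
        Ok(())
    }
}
```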
Close #3453: this PR systematizes the rate calculation process, replacing `Amount`s carrying an implicit scaling factor with explicit `penumbra_fixnum::U128x128` values. In addition to that, this PR: