forked from tendermint/tendermint
set_readonly RPC is broken #3
Labels: bug
Comments
Investigating...

Linking tendermint#3341
pinkiebell added a commit that referenced this issue on Apr 13, 2019:
Consensus stalls if there are no other validators in the network and the validator is switched from readOnly=true back to readOnly=false. Fixes #3
This was referenced Apr 13, 2019
pinkiebell pushed a commit that referenced this issue on Jun 14, 2019:
…lices (tendermint#2611) (tendermint#3530)

(tendermint#2611) had suggested that an iterative version of SimpleHashFromByteSlice would be faster, presumably because we can envision some overhead accumulating from stack frames and function calls. Additionally, a recursive algorithm risks hitting the stack limit and causing a stack overflow should the tree be too large.

Provided here is an iterative alternative, a simple test to assert correctness, and a benchmark. On the performance side, there appears to be no overall difference:

```
BenchmarkSimpleHashAlternatives/recursive-4    20000    77677 ns/op
BenchmarkSimpleHashAlternatives/iterative-4    20000    76802 ns/op
```

On the surface it might seem that the additional overhead is due to the different allocation patterns of the implementations. The recursive version uses a single `[][]byte` slice which it then re-slices at each level of the tree. The iterative version copies the `[][]byte` once within the function and then rewrites sub-slices of that array at each level of the tree. Experimenting by modifying the code to simply calculate the hash and not store the result shows little to no difference in performance.

These preliminary results suggest:

1. The performance of the current implementation is pretty good.
2. Go has low overhead for recursive functions.
3. The performance of the SimpleHashFromByteSlice routine is dominated by the actual hashing of data.

Although this work is in no way exhaustive, point 3 suggests that optimizations of this routine would need to take an alternative approach to make significant improvements on the current performance. Finally, considering that the recursive implementation is easier to read, it might not be worthwhile to switch to a less intuitive implementation for so little benefit.

* re-add slice re-writing
* [crypto] Document SimpleHashFromByteSlicesIterative
To reproduce, start the node with:

```
--readonly --unsafeRpc
```

then flip the flag back via the RPC:

```
curl 'http://localhost:26659/set_readonly?readonly=false'
```
Side note: this flow was working before the latest upgrade