set_readonly RPC is broken #3

Closed
troggy opened this issue Apr 12, 2019 · 2 comments · Fixed by #4
Labels
bug Something isn't working

Comments


troggy commented Apr 12, 2019

To reproduce:

  • start the validator with --readonly --unsafeRpc
  • call curl 'http://localhost:26659/set_readonly?readonly=false'
  • the node should start producing blocks, but it does not

Side note: this flow was working before the latest upgrade.
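
A quick way to check whether the node actually resumes block production is to poll the standard Tendermint `/status` RPC and watch the latest block height. A minimal sketch in Go, assuming the RPC is served on the same port as the repro above and that the response carries `sync_info.latest_block_height` as in stock Tendermint:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// statusResp mirrors just the fields of Tendermint's /status response we
// need; latest_block_height is reported as a string.
type statusResp struct {
	Result struct {
		SyncInfo struct {
			LatestBlockHeight string `json:"latest_block_height"`
		} `json:"sync_info"`
	} `json:"result"`
}

func height(node string) (string, error) {
	resp, err := http.Get(node + "/status")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var s statusResp
	if err := json.NewDecoder(resp.Body).Decode(&s); err != nil {
		return "", err
	}
	return s.Result.SyncInfo.LatestBlockHeight, nil
}

func main() {
	const node = "http://localhost:26659" // same port as the repro above
	before, _ := height(node)
	time.Sleep(5 * time.Second) // give the node a few block intervals
	after, _ := height(node)
	fmt.Printf("height before=%s after=%s (unchanged => not producing)\n", before, after)
}
```

If the height is unchanged after a few block intervals, the node is still not producing.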

troggy added the bug label Apr 12, 2019
@pinkiebell

Investigating...

@pinkiebell

Linking tendermint#3341

pinkiebell added a commit that referenced this issue Apr 13, 2019
Consensus stalls if there are no other validators in the network and
the validator is switched from readOnly=true back to readOnly=false.

Fixes #3
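
The commit message points at the failure mode: with a single validator, no proposal will ever arrive from the network, so clearing the flag alone leaves consensus parked in its wait state. A purely hypothetical sketch of the shape such a toggle needs (names are illustrative only, not the actual patch in #4 or the tendermint API):

```go
package node

import "sync"

// consensusKicker stands in for whatever lets us re-arm proposing; the
// interface and method are hypothetical.
type consensusKicker interface {
	ScheduleProposal()
}

// Node holds the readonly flag toggled by the set_readonly RPC.
type Node struct {
	mtx       sync.Mutex
	readonly  bool
	consensus consensusKicker
}

func (n *Node) SetReadonly(readonly bool) {
	n.mtx.Lock()
	defer n.mtx.Unlock()

	wasReadonly := n.readonly
	n.readonly = readonly

	// With no other validators, simply flipping the flag leaves consensus
	// stalled; the toggle must explicitly wake the proposal machinery
	// (hypothetical call).
	if wasReadonly && !readonly {
		n.consensus.ScheduleProposal()
	}
}
```
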
troggy closed this as completed in #4 Apr 18, 2019
pinkiebell pushed a commit that referenced this issue Jun 14, 2019
…lices (tendermint#2611) (tendermint#3530)

(tendermint#2611) had suggested that an iterative version of
SimpleHashFromByteSlice would be faster, presumably because
we can envision some overhead accumulating from stack
frames and function calls. Additionally, a recursive algorithm risks
hitting the stack limit and causing a stack overflow should the tree
be too large.

Provided here is an iterative alternative, a simple test to assert
correctness and a benchmark. On the performance side, there appears to
be no overall difference:

```
BenchmarkSimpleHashAlternatives/recursive-4                20000 77677 ns/op
BenchmarkSimpleHashAlternatives/iterative-4                20000 76802 ns/op
```
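
For readers outside the tendermint codebase, the two shapes under discussion can be sketched as follows. This is a simplified illustration with plain SHA-256 and no leaf/inner domain separation, not the actual crypto/merkle code; the recursive version splits at the largest power of two below the length so that both functions build the same tree:

```go
package merkle

import "crypto/sha256"

func leafHash(item []byte) []byte {
	s := sha256.Sum256(item)
	return s[:]
}

func innerHash(left, right []byte) []byte {
	h := sha256.New()
	h.Write(left)
	h.Write(right)
	return h.Sum(nil)
}

// splitPoint returns the largest power of two strictly less than n,
// matching the tree shape the iterative version produces below.
func splitPoint(n int) int {
	k := 1
	for k*2 < n {
		k *= 2
	}
	return k
}

// SimpleHashFromByteSlices mirrors the recursive structure: split the
// items, recurse on each part, and hash the two sub-roots together.
func SimpleHashFromByteSlices(items [][]byte) []byte {
	switch len(items) {
	case 0:
		return nil
	case 1:
		return leafHash(items[0])
	default:
		k := splitPoint(len(items))
		return innerHash(
			SimpleHashFromByteSlices(items[:k]),
			SimpleHashFromByteSlices(items[k:]),
		)
	}
}

// SimpleHashFromByteSlicesIterative allocates one [][]byte of leaf hashes
// up front, then rewrites sub-slices of it level by level.
func SimpleHashFromByteSlicesIterative(items [][]byte) []byte {
	if len(items) == 0 {
		return nil
	}
	hashes := make([][]byte, len(items))
	for i, item := range items {
		hashes[i] = leafHash(item)
	}
	for n := len(hashes); n > 1; {
		next := 0
		for i := 0; i < n; i += 2 {
			if i+1 < n {
				hashes[next] = innerHash(hashes[i], hashes[i+1])
			} else {
				hashes[next] = hashes[i] // odd node carries up a level
			}
			next++
		}
		n = next
	}
	return hashes[0]
}
```

Both sketches compute the same root; the iterative one trades the call stack for a single up-front slice of leaf hashes, which matches the allocation discussion below.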

On the surface it might seem that any difference would come from the
different allocation patterns of the two implementations. The recursive
version uses a single `[][]byte` slice which it then re-slices at each level of the tree.
The iterative version allocates a `[][]byte` once within the function and
then rewrites sub-slices of that array at each level of the tree.

Experimenting by modifying the code to simply calculate the
hash and not store the result shows little to no difference in performance.
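
The figures above are standard Go sub-benchmark output; a harness along these lines would produce that comparison (sketched against the simplified functions above, not the actual tendermint test file):

```go
package merkle

import (
	"crypto/rand"
	"testing"
)

// BenchmarkSimpleHashAlternatives runs both sketches over the same input;
// the leaf count and size are arbitrary choices for illustration.
func BenchmarkSimpleHashAlternatives(b *testing.B) {
	items := make([][]byte, 100)
	for i := range items {
		items[i] = make([]byte, 32)
		_, _ = rand.Read(items[i])
	}
	b.Run("recursive", func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			_ = SimpleHashFromByteSlices(items)
		}
	})
	b.Run("iterative", func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			_ = SimpleHashFromByteSlicesIterative(items)
		}
	})
}
```
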

These preliminary results suggest:
1. The performance of the current implementation is pretty good
2. Go has low overhead for recursive functions
3. The performance of the SimpleHashFromByteSlice routine is dominated
by the actual hashing of data

Although this work is in no way exhaustive, point #3 suggests that
optimizations of this routine would need to take an alternative
approach to make significant improvements over the current performance.

Finally, considering that the recursive implementation is easier to
read, it might not be worthwhile to switch to a less intuitive
implementation for so little benefit.

* re-add slice re-writing
* [crypto] Document SimpleHashFromByteSlicesIterative