Deneb merge from unstable 20230911 #4719
Conversation
## Issue Addressed
Updates underlying dependencies and removes the ignored `RUSTSEC` advisories for `cargo audit`. Also switches `procinfo` to `procfs` in `eth2` to remove the `nom` warning; `procinfo` is unmaintained (see [here](danburkert/procinfo-rs#46)).
## Issue Addressed
Closes sigp#4473 (take 3)

## Proposed Changes
- Send a 202 status code by default for duplicate blocks, instead of 400. This conveys to the caller that the block was published, but makes no guarantees about its validity. Block relays can count this as a success or a failure as they wish.
- For users wanting finer-grained control over which status is returned for duplicates, a flag `--http-duplicate-block-status` can be used to adjust the behaviour. A 400 status can be supplied to restore the old (spec-compliant) behaviour, or a 200 status can be used to silence VCs that warn loudly for non-200 codes (e.g. Lighthouse prior to v4.4.0).
- Update the Lighthouse VC to gracefully handle success codes other than 200. The info message isn't the nicest thing to read, but it covers all bases and isn't a nasty `ERRO`/`CRIT` that will wake anyone up.

## Additional Info
I'm planning to raise a PR to `beacon-APIs` to specify that clients may return 202 for duplicate blocks. Really it would be nice to use some 2xx code that _isn't_ the same as the code for "published but invalid". I think unfortunately there aren't any suitable codes, and maybe the best fit is `409 CONFLICT`. Given that we need to fix this promptly for our release, I think using the 202 code temporarily with configuration strikes a nice compromise.
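For illustration, here is a minimal sketch of how a configurable duplicate-block status could be parsed and defaulted. This is not Lighthouse's actual implementation; the type name and the validation range are assumptions:

```rust
use std::str::FromStr;

/// Status code returned when a published block is already known.
/// Defaults to 202 Accepted; `--http-duplicate-block-status` overrides it.
#[derive(Clone, Copy, Debug)]
pub struct DuplicateBlockStatusCode(pub u16);

impl Default for DuplicateBlockStatusCode {
    fn default() -> Self {
        DuplicateBlockStatusCode(202)
    }
}

impl FromStr for DuplicateBlockStatusCode {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        let code: u16 = s.parse().map_err(|e| format!("invalid status code: {e}"))?;
        if (100..=599).contains(&code) {
            Ok(DuplicateBlockStatusCode(code))
        } else {
            Err(format!("{code} is not a valid HTTP status code"))
        }
    }
}

fn main() {
    // 400 restores the old spec-compliant behaviour; 200 silences older VCs.
    assert_eq!("400".parse::<DuplicateBlockStatusCode>().unwrap().0, 400);
    assert_eq!(DuplicateBlockStatusCode::default().0, 202);
}
```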
## Issue Addressed
sigp#4654

## Proposed Changes
Only log an error if we're unable to read the slot clock after genesis. I thought about simply downgrading the `error` to a `warn`, but it still feels like unnecessary noise before genesis, and it would be good to retain the error log once we're past genesis. I'd be OK with just downgrading the log level, too.
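As a rough sketch of the proposed behaviour (using stand-in types, not Lighthouse's actual slot clock API): stay quiet before genesis, and only log at error level once genesis has passed.

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

/// Stand-in for the service's slot clock; only the genesis time matters here.
struct SlotClock {
    genesis_time: Duration, // seconds since the Unix epoch
}

impl SlotClock {
    fn is_prior_to_genesis(&self) -> bool {
        SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .map(|now| now < self.genesis_time)
            .unwrap_or(false)
    }
}

/// Called when the slot clock cannot be read: quiet before genesis,
/// an error afterwards.
fn on_slot_clock_read_failure(clock: &SlotClock) {
    if clock.is_prior_to_genesis() {
        // Expected before genesis; no log (or at most a debug).
    } else {
        eprintln!("ERRO Unable to read the slot clock");
    }
}

fn main() {
    let clock = SlotClock { genesis_time: Duration::from_secs(u64::MAX) };
    on_slot_clock_read_failure(&clock); // prints nothing: genesis is in the future
}
```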
## Issue Addressed
`web3signer_tests` can sometimes time out.

## Proposed Changes
Increase the `web3signer_tests` timeout from 20s to 30s.

## Additional Info
Previously I believed the consistent CI failures were due to this, but it ended up being something different. See below:

---

The timing of this makes it very likely it is related to the [latest release of `web3-signer`](https://github.com/Consensys/web3signer/releases/tag/23.8.1). I now believe this is due to an out-of-date Java runtime on our runners. A newer version of Java became a requirement with the new `web3-signer` release. However, I was getting timeouts locally, which implies that the margin before the 20s timeout is quite small, so bumping it up to 30s could be a good idea regardless.
## Issue Addressed
NA

## Proposed Changes
Add the Holesky network config as per https://github.com/eth-clients/holesky/tree/36e4ff2d5138dcb2eb614f0f60fdb060b2adc1e2/custom_config_data.

Since the genesis state is ~190MB, I've opted to *not* include it in the binary and instead download it at runtime (see sigp#4564 for context). To download this file we have:

- A hard-coded URL for a SigP-hosted S3 bucket with the Holesky genesis state. Assuming this download works correctly, users will be none the wiser that the state wasn't included in the binary (apart from some additional logs).
- If the user provides a `--checkpoint-sync-url` flag, then LH will download the genesis state from that server rather than our S3 bucket.
- If the user provides a `--genesis-state-url` flag, then LH will download the genesis state from that server regardless of the S3 bucket or `--checkpoint-sync-url` flag.
- Whenever a genesis state is downloaded it is checked against a checksum baked into the binary.
- A genesis state will never be downloaded if it's already included in the binary.
- There is a `--genesis-state-url-timeout` flag to tweak the timeout for downloading the genesis state file.

## Log Output
Example of log output when a state is downloaded:

```bash
Aug 23 05:40:13.424 INFO Logging to file path: "/Users/paul/.lighthouse/holesky/beacon/logs/beacon.log"
Aug 23 05:40:13.425 INFO Lighthouse started version: Lighthouse/v4.3.0-bd9931f+
Aug 23 05:40:13.425 INFO Configured for network name: holesky
Aug 23 05:40:13.426 INFO Data directory initialised datadir: /Users/paul/.lighthouse/holesky
Aug 23 05:40:13.427 INFO Deposit contract address: 0x4242424242424242424242424242424242424242, deploy_block: 0
Aug 23 05:40:13.427 INFO Downloading genesis state info: this may take some time on testnets with large validator counts, timeout: 60s, server: https://sigp-public-genesis-states.s3.ap-southeast-2.amazonaws.com/
Aug 23 05:40:29.895 INFO Starting from known genesis state service: beacon
```

Example of log output when there are no URLs specified:

```
Aug 23 06:29:51.645 INFO Logging to file path: "/Users/paul/.lighthouse/goerli/beacon/logs/beacon.log"
Aug 23 06:29:51.646 INFO Lighthouse started version: Lighthouse/v4.3.0-666a39c+
Aug 23 06:29:51.646 INFO Configured for network name: goerli
Aug 23 06:29:51.647 INFO Data directory initialised datadir: /Users/paul/.lighthouse/goerli
Aug 23 06:29:51.647 INFO Deposit contract address: 0xff50ed3d0ec03ac01d4c79aad74928bff48a7b2b, deploy_block: 4367322
The genesis state is not present in the binary and there are no known download URLs. Please use --checkpoint-sync-url or --genesis-state-url.
```

## Additional Info
I tested the `--genesis-state-url` flag with all 9 Goerli checkpoint sync servers on https://eth-clients.github.io/checkpoint-sync-endpoints/ and they all worked 🎉

My IDE eagerly formatted some `Cargo.toml`. I've disabled it, but I don't see the value in spending time reverting the changes that are already there.

I also added the `GenesisStateBytes` enum to avoid an unnecessary clone of the genesis state bytes baked into the binary. This is not a huge deal on Mainnet, but will become more relevant when testing with big genesis states.

When we do a fresh checkpoint sync we're downloading the genesis state to check the `genesis_validators_root` against the finalised state we receive. This is not *entirely* pointless, since we verify the checksum when we download the genesis state, so we are actually guaranteeing that the finalised state is on the same network. There might be a smarter, less download-heavy way to go about this, but I've run out of cycles to figure that out. Perhaps we can grab it in the next release?
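For illustration, a rough sketch of the download-source precedence described above; the struct and function names are illustrative, not Lighthouse's real API:

```rust
/// Illustrative config; field names are assumptions mirroring the CLI flags.
struct GenesisStateConfig<'a> {
    genesis_state_url: Option<&'a str>,   // --genesis-state-url
    checkpoint_sync_url: Option<&'a str>, // --checkpoint-sync-url
    built_in_url: Option<&'a str>,        // hard-coded SigP-hosted S3 bucket
}

/// Selects the download server with the precedence described above:
/// --genesis-state-url beats --checkpoint-sync-url beats the S3 bucket.
fn select_genesis_state_server<'a>(
    cfg: &GenesisStateConfig<'a>,
) -> Result<&'a str, &'static str> {
    cfg.genesis_state_url
        .or(cfg.checkpoint_sync_url)
        .or(cfg.built_in_url)
        .ok_or(
            "The genesis state is not present in the binary and there are no known \
             download URLs. Please use --checkpoint-sync-url or --genesis-state-url.",
        )
}

fn main() {
    let cfg = GenesisStateConfig {
        genesis_state_url: None,
        checkpoint_sync_url: Some("https://example.com/checkpoint"),
        built_in_url: Some("https://sigp-public-genesis-states.s3.ap-southeast-2.amazonaws.com/"),
    };
    // A user-supplied checkpoint sync server wins over the built-in bucket.
    assert_eq!(
        select_genesis_state_server(&cfg).unwrap(),
        "https://example.com/checkpoint"
    );
}
```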
## Issue Addressed
Fix a bug in the storage of the linear block roots array in the freezer DB. Previously this array was always written as part of state storage (or block backfill). With state pruning enabled by sigp#4610, these states were no longer being written, and as a result neither were the block roots.

The impact is quite low: we would just log an error when trying to forwards-iterate the block roots, which for validating nodes only happens when they try to look up blocks for peers:

> Aug 25 03:42:36.980 ERRO Missing chunk in forwards iterator chunk index: 49726, service: freezer_db

Any node checkpoint synced off `unstable` is affected and has a corrupt database. If you see the log above, you need to re-sync with the fix. Nodes that haven't checkpoint synced recently should _not_ be corrupted, even if they ran the buggy version.

## Proposed Changes
- Use a `ChunkWriter` to write the block roots when states are not being stored.
- Tweak the usage of `get_latest_restore_point` so that it doesn't return a nonsense value when state pruning is enabled.
- Tweak the guarantee on the block roots array so that block roots are assumed available up to the split slot (exclusive). This is a bit nicer than relying on anything to do with the latest restore point, which is a nonsensical concept when there aren't any restore points.

## Additional Info
I'm looking forward to deleting the chunked vector code for good when we merge tree-states :grin:
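For illustration, the tweaked guarantee can be stated as a simple predicate (`Slot` here is a stand-in for Lighthouse's type, not the real definition):

```rust
/// Stand-in for Lighthouse's `Slot` type.
type Slot = u64;

/// The tweaked guarantee: freezer-DB block roots are assumed available for
/// every slot strictly below the split slot, with no reference to restore
/// points (which don't exist when state pruning is enabled).
fn block_root_assumed_available(slot: Slot, split_slot: Slot) -> bool {
    slot < split_slot
}

fn main() {
    let split_slot: Slot = 100;
    assert!(block_root_assumed_available(99, split_slot));
    assert!(!block_root_assumed_available(100, split_slot)); // exclusive bound
}
```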
## Issue Addressed
N/A

## Proposed Changes
Adds the Chiado (Gnosis testnet) network to the built-in ones.

## Additional Info
It's a fairly trivial change all things considered, as the preset already exists, so it shouldn't be hard to maintain. It compiles and seems to work, but I'm sure I missed something?

Co-authored-by: Paul Hauner <paul@paulhauner.com>
## Issue Addressed
NA

## Proposed Changes
Bump versions from `v4.3.0` to `v4.4.0`.

## Additional Info
NA
## Issue Addressed
Fix a deadlock introduced in sigp#4236 which was caught during the v4.4.0 release testing cycle (with thanks to @paulhauner and `gdb`).

## Proposed Changes
Avoid re-locking the fork choice read lock when querying a state by root in the HTTP API. This avoids a deadlock due to the lock already being held.

## Additional Info
The [RwLock docs](https://docs.rs/lock_api/latest/lock_api/struct.RwLock.html#method.read) explicitly advise against re-locking:

> Note that attempts to recursively acquire a read lock on a RwLock when the current thread already holds one may result in a deadlock.
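A minimal, self-contained sketch of the deadlock pattern and the fix; the types and function names are illustrative, not the actual Lighthouse code:

```rust
use std::sync::RwLock;

/// Stand-in for the fork choice structure.
struct ForkChoice {
    head_block_root: u64,
}

fn helper_that_borrows(fork_choice: &ForkChoice) -> u64 {
    fork_choice.head_block_root
}

/// BUGGY SHAPE: holding a read guard while re-acquiring the same lock can
/// deadlock if a writer queues between the two `read()` calls.
fn deadlock_prone(lock: &RwLock<ForkChoice>) -> u64 {
    let guard = lock.read().unwrap();
    // lock.read() here (directly or via a helper) may deadlock.
    guard.head_block_root
}

/// FIXED SHAPE: take the guard once and pass a plain reference down the
/// call stack instead of re-locking.
fn fixed(lock: &RwLock<ForkChoice>) -> u64 {
    let guard = lock.read().unwrap();
    helper_that_borrows(&guard)
}

fn main() {
    let lock = RwLock::new(ForkChoice { head_block_root: 42 });
    assert_eq!(deadlock_prone(&lock), 42);
    assert_eq!(fixed(&lock), 42);
}
```

The fix amounts to threading the existing guard (or the data it protects) through the call stack rather than re-acquiring the lock.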
## Proposed Changes
New release to replace the cancelled v4.4.0 release. This release includes the bugfix sigp#4687, which avoids a deadlock that was present in v4.4.0.

## Additional Info
Pending testing over the weekend, this will be merged on Monday, September 4th.
Correct the formatting and remove `http-port 5062` to make the command simpler.

Co-authored-by: chonghe <44791194+chong-he@users.noreply.github.com>
## Issue Addressed
N/A

## Proposed Changes
The current Advanced Networking page references the ["--listen-addresses"](https://github.com/sigp/lighthouse/blob/14924dbc955732a742d52eafcad32cb0f790f027/book/src/advanced_networking.md?plain=1#L124C8-L124C8) argument, which does not exist in the beacon node. This PR changes such instances of "--listen-addresses" to "--listen-address".

Additionally, the page mentions using sockets that [both listen to IPv6](https://github.com/sigp/lighthouse/blob/14924dbc955732a742d52eafcad32cb0f790f027/book/src/advanced_networking.md?plain=1#L151) in a dual-stack setup. Hence, this PR also changes said line to "using one socket for IPv4 and another socket for IPv6".

## Additional Info
None.
## Issue Addressed
The Siren FAQ requires more information regarding network connections to the Lighthouse BN/VC.

## Proposed Changes
Added more info regarding ports, BN/VC flags, SSH tunneling and VPN access.
…he (sigp#4691)

## Issue Addressed
I noticed a node.js version warning on our CI and thought about updating the version to get rid of this warning, but then realized we may not need node.js anymore now that we're using `anvil` instead of `ganache`.

> The following actions uses node12 which is deprecated and will be forced to run on node16: actions/setup-node@v2. For more info: https://github.blog/changelog/2023-06-13-github-actions-all-actions-will-run-on-node16-instead-of-node12-by-default/
```rust
@@ -596,9 +596,12 @@ impl<T: SlotClock + 'static, E: EthSpec> BlockService<T, E> {
    }

    async fn publish_signed_block_contents<Payload: AbstractExecPayload<E>>(
        &self,
```
Changed this to a method so it has access to `log`.
```rust
    builder.trusted_setup(trusted_setup)
} else {
    builder
};
```
Moved this to the bottom of this method, as `config` still needs to be used, so we can't move `trusted_setup` yet.
```rust
    beacon_chain_builder.trusted_setup(trusted_setup)
} else {
    beacon_chain_builder
};
```
It would be nicer if we didn't have to do multiple variable shadowings here, but that would probably require changing our builder methods to use mutable references to `self` instead of taking ownership (`self`). I'll leave it out for now to retain the scope of this PR.
e.g.

```rust
pub fn trusted_setup(&mut self, trusted_setup: TrustedSetup) -> &mut Self {
    self.trusted_setup = Some(trusted_setup);
    self
}
```
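For contrast, here is a self-contained sketch (stand-in types, not the real builder) of the by-value style in use today: because each call consumes the builder, a conditional call forces the rebinding seen in the diff above.

```rust
/// Stand-ins for the real types.
struct TrustedSetup;

#[derive(Default)]
struct Builder {
    trusted_setup: Option<TrustedSetup>,
}

impl Builder {
    /// Current by-value style: consumes `self`, so a conditional call
    /// forces the caller to rebind (shadow) the builder.
    fn trusted_setup(mut self, trusted_setup: TrustedSetup) -> Self {
        self.trusted_setup = Some(trusted_setup);
        self
    }
}

fn main() {
    let builder = Builder::default();
    let maybe_setup = Some(TrustedSetup);

    // The shadowing pattern discussed above:
    let builder = if let Some(ts) = maybe_setup {
        builder.trusted_setup(ts)
    } else {
        builder
    };
    assert!(builder.trusted_setup.is_some());
}
```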
```rust
        signature,
        _phantom: PhantomData,
    }
}
```
Moved to the `Sidecar` trait, so we can sign over blinded blobs as well in tests.
```rust
let (gossip_verified_block, gossip_verified_blobs) =
    match block_contents.into_gossip_verified_block(&chain) {
        Ok(b) => b,
        Err(BlockContentsError::BlockError(BlockError::BlockIsAlreadyKnown)) => {
```
@michaelsproul do we need to handle the `RepeatBlob` error in the same way? Not sure if this is just meant to cover multiple repeat proposals through the same BN, or if we care about gossip caches.
@michaelsproul mentioned in this comment that it's meant to cover this scenario:

> One of the relays also hit the error with full blocks when broadcasting to multiple BNs, and it can happen for VCs that broadcast too (e.g. Vouch).

So I guess the same would apply to blobs too: if we already observed the corresponding blob sidecar (before the block arrives), then it would be considered a duplicate proposal?
My brain has too many other things going on at the moment to think about this properly, so I just opened an issue: #4725. I think we should merge this PR and come back to it.
You da man Jimmy! Just had one Q about how the duplicate status code logic maps over to blobs (if it does)
## Issue Addressed
Merge latest `unstable` into deneb.

Resolved conflicts: