Phase 0 Networking Specifications #763
If other comments are accepted, this enveloping can go away.
It seems like we don't need to specify anything here as everything's already either part of the referenced EIP or multiaddr.
Cool, will remove.
Would it be appropriate to file an EIP to allocate a key for multiaddrs in the pre-defined key/value table in the ENR standard?
cc: @fjl
One other consideration maybe: ENR (and Discovery v5) is being designed to support multiple types of identity. It is not going to be a hard requirement that secp256k1 EC pubkeys will identify the node. ENRs will describe the identity type.
libp2p peer IDs are derived from the public key protobuf, which is just key type + bytes. Here's the spec: libp2p/specs#100. Both SECIO and TLS 1.3 validate peer IDs against the pubkey, so following the spec is important or connections will fail.
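To illustrate the derivation described here: the public key is wrapped in a protobuf envelope (key type + raw key bytes), and the peer ID is a multihash of that serialized envelope. The sketch below hand-rolls the protobuf framing for illustration only; a real implementation would use the libp2p keys protobuf definitions from the linked spec, and the enum value and hashing choice here are assumptions.

```python
import hashlib

KEY_TYPE_SECP256K1 = 2  # illustrative enum value, not authoritative


def varint(n: int) -> bytes:
    """Standard protobuf base-128 varint encoding."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)


def pubkey_envelope(key_type: int, key_bytes: bytes) -> bytes:
    # field 1 (varint Type) = tag 0x08, field 2 (bytes Data) = tag 0x12
    return b"\x08" + varint(key_type) + b"\x12" + varint(len(key_bytes)) + key_bytes


def peer_id(key_type: int, key_bytes: bytes) -> bytes:
    digest = hashlib.sha256(pubkey_envelope(key_type, key_bytes)).digest()
    # sha2-256 multihash: code 0x12, digest length, digest
    return bytes([0x12, len(digest)]) + digest
```

Because both sides hash the same envelope bytes, any non-determinism in the protobuf encoding would change the peer ID, which is exactly why validation against the pubkey fails if the spec isn't followed byte-for-byte.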
As I mention in https://github.com/libp2p/specs/pull/100/files#r266291995 - protobuf is not deterministic, and thus not great for feeding into a hashing function or using to determine an ID, unless you used a modified protobuf version that's locked down.
Wouldn't this be handled at the `libp2p` layer? Here we're describing how to construct a `multiaddr` from an ENR; the actual handling of the `multiaddr` itself and the underlying hash construction would be the responsibility of `libp2p`.
it would, but libp2p itself looks broken in this case - we need to keep an eye on that upstream issue so that we don't spread the breakage further.
Does using ENR require decoding RLP in this context?
Should we add `beacon` somewhere in the protocol path? I think this might be useful to distinguish between shard and beacon RPC commands.
what's the semantic meaning of these long version numbers?
i would imagine it's there because over time it will be bugfixed (bugfix version), updated with a commitment to being backward compatible (minor version), and updated with complete disregard for any backward compatibility, for the sake of progress (major version)
Yes, the idea is that they follow semver. In practice I'd estimate that the only time we'd change these version numbers is if there was a backwards-incompatible change to the serialization/compression scheme.
See @raulk's point above re: message envelopes.
A version number can either be interpreted or not. If we rely on semver, we should specify the correct behavior for clients: consider client a that supports 1.0.0 - should it also accept 1.0.1 messages as valid automatically, or discard them? This matters for forwards compatibility.
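One possible answer to this question, sketched under the assumption that the spec adopts strict semver: a client accepts an offered version when the major component matches its own, so a 1.0.0 client treats 1.0.1 messages as valid but rejects 2.0.0. This is one policy choice, not specified behavior.

```python
def accepts(supported: str, offered: str) -> bool:
    """Accept an offered version if it shares the supported major
    version; under semver, minor/patch bumps are assumed to be
    backwards compatible, while a major bump is not."""
    supported_major = int(supported.split(".")[0])
    offered_major = int(offered.split(".")[0])
    return supported_major == offered_major
```

With this rule, `accepts("1.0.0", "1.0.1")` is `True` and `accepts("1.0.0", "2.0.0")` is `False`; whatever rule is chosen, spelling it out in the spec is what matters for forwards compatibility.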
Frankly, I'm in favor of simply having integer version numbers and have a blanket statement that sub-protocol version numbers are neither forwards nor backwards compatible.
that would be my preference as well, with how the encoding and protocol looks today.
if we had a serialization format that allowed forwards/backwards-compatible additions (i.e. adding fields), we could maybe consider two-level numbering here, where the first number would carry the blanket statement, while the second would signal a version with additional fields added, which would still be compatible with previous clients.
Such an encoding is generally a good thing in the wire use case, which would be a reason to look at extensions to SSZ when used outside consensus (a super-set of SSZ, for example).
+1 on integers to signal a generation (generation 1, generation 2...). Any reason you wouldn’t have a varint style bitmap in the HELLO message to communicate finer-grained capabilities? @arnetheduck
I would model serialisation format and compression as part of the protocol ID. Then allow Multistream to negotiate.
A possible compatible change could be added message types, so I think minor version numbers could be useful in some cases.
Capabilities are known from the discovery protocol/ENRs already (but we need to define what types of capabilities we need). So I don't think we need it in the HELLO message.
@raulk taking a step back, my initial understanding of the libp2p setup was that you would negotiate capabilities mainly via discovery and connect to clients you know you have common ground with - then merely verify that support by signing up to the various streams here, each stream being a protocol on its own, with libp2p dealing with the multiplexing etc. That has changed now, I see, and it looks like there's another layer of protocol negotiation within the stream to discover capabilities. That feels.. redundant, to do the same work twice, and somewhat limiting: how does a client add a completely new message they want to test or use in some client-specific scenario (for example to establish / evaluate its usefulness)? But it seems I need to reread the newer spec.
I'll be honest though and say that I don't fully understand where the varint would go at this point with the various layers, but integers tend to be harder to negotiate than strings in a decentralized manner - with a string, you just pick one and start using it; if it becomes popular, people will avoid it. Numbers.. you need a registry and the associated maintenance.
@arnetheduck my intention isn't to propose any changes here; was just curious to hear the rationale of the Eth2.0 community re: finer-grained protocols vs. coarse-grained protocol with capabilities. We also debate this occasionally in the libp2p community ;-) [libp2p supports both patterns].
Re: semver vs. protocol generations. libp2p does not impose a specific version token (if any). When registering a handler, you can attach a predicate to evaluate the match. So handler X could match N versions, and when receiving the callback, you can inspect the protocol pinned on the stream to infer which abilities you activate, etc.
We've traditionally used semver in libp2p, IPFS, etc., but a few of us are not convinced of its aptness. Semver is good at conveying the severity of changes in APIs, but protocol evolution is a different beast.
You generally strive to keep things backwards compatible, yet enable feature upgrades/opt-ins over time that may not be rolling cumulative, e.g. if 1.14.0 and 1.15.0 introduce feature X and Y respectively, how do I convey that a given peer supports Y but not X?
That's where protocol granularity comes into play: potentially make each feature/message/RPC a different protocol, and track "generations/revisions" of those protocols. libp2p supports that design. A few thoughts:
I meant replacing semver by protocol generations, e.g. `/eth/serenity/rpc/v10`. Sorry for not being clear!
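A handler matching generation-style IDs like `/eth/serenity/rpc/v10` could be sketched as below, in the spirit of the predicate-based matching described earlier in the thread. The ID layout and supported range are illustrative assumptions.

```python
import re

# One protocol ID per generation; a handler registers a predicate over
# the generation number instead of a semver range.
PROTOCOL_RE = re.compile(r"^/eth/serenity/rpc/v(\d+)$")


def match_handler(protocol_id: str, min_gen: int, max_gen: int):
    """Return the generation number if protocol_id is an rpc protocol
    within the supported range, else None."""
    m = PROTOCOL_RE.match(protocol_id)
    if not m:
        return None
    gen = int(m.group(1))
    return gen if min_gen <= gen <= max_gen else None
```

The handler can then inspect the matched generation to decide which abilities to activate on the stream.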
At least with SSZ it's not easily possible to distinguish between normal and error responses, as one needs to know the schema before being able to decode the message. What one could do is have a general response format and then an embedded result/error blob that can be decoded in a second step. E.g.:
Not really elegant, but I don't really see a better solution (for SSZ that is).
Ah, this is a good point. SSZ doesn't support `null` values either - let me think on this one for a little bit and come up with a solution.
Added an `is_error` boolean field. Note that with SSZ at least you can read the `is_error` field prior to the contents of the `result` via offsets. This allows clients to switch the deserialized type based on the `is_error` value.
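A minimal sketch of that pattern: because the boolean is fixed-size, it sits at a known offset and can be inspected before the variable-size result is decoded, letting the reader pick a schema. The byte layout below is illustrative, not the actual SSZ encoding.

```python
def encode_response(is_error: bool, result: bytes) -> bytes:
    # fixed-size flag first, variable-size payload after
    return bytes([1 if is_error else 0]) + result


def peek_is_error(data: bytes) -> bool:
    # read the flag without touching the result bytes
    return data[0] == 1


def decode_response(data: bytes):
    # switch the schema used for the payload based on the flag
    schema = "ErrorMessage" if peek_is_error(data) else "Response"
    return schema, data[1:]
```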
the alternative would be to use a list - empty if there's no error, and one item if there is.
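That alternative could look like the following (illustrative only): SSZ has no `null`, but a list of length zero or one can stand in for an optional value.

```python
def encode_optional_error(error):
    """Encode an optional value as a list: empty means absent."""
    return [] if error is None else [error]


def decode_optional_error(errors):
    """Decode the list back to an optional value."""
    return errors[0] if errors else None
```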
just to be clear - when encoding or decoding SSZ, there is generally no provision for skipping fields: even if `is_error` is false, `data` must contain bytes. Embedding a `StatusData` in the `data` field seems to go against the spirit of SSZ, as SSZ decoders generally expect to know the exact type of each field, and thus it would not fit "naturally" into "normal" SSZ code. That said, this issue stems from using SSZ in a wire protocol setting for which it is not.. great.
I like the error codes, this seems very useful (e.g. for `block not found` or something). Not sure about the examples below though - shouldn't `0`, `10`, and `20` just result in a disconnect?
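One possible client policy for this question, treating the quoted codes (`0`, `10`, `20` - values taken from the draft, with their fatality assumed here) as disconnect-worthy and everything else (such as a "block not found" style code) as a per-request error:

```python
# Assumed-fatal handshake-level codes; non-fatal codes are surfaced
# to the caller instead of tearing down the connection.
FATAL_ERROR_CODES = {0, 10, 20}


def on_error(code: int) -> str:
    return "disconnect" if code in FATAL_ERROR_CODES else "report"
```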
sort of like port numbers :)
Taking into account the backwards sync suggested elsewhere, and that we can use attestations as a (strong) heuristic that a block is valid and useful, it seems prudent to include (some) attestations here - instead of simply supplying some data like best_root that cannot be trusted anyway, a recent attestation would help the connecting client both with head / fork selection and to know with a higher degree of certainty that the root sent "makes sense" and should be downloaded.
The details of this are TBD - but probably we're looking at something like `attestations: [Attestation]`, where it's up to the client to choose a representative and recent set (or none, which is also fine, because then one can listen to broadcasts).
slots are based on wall time - what's the best_slot field for?
pretty sure this is supposed to refer to the slot of the head block. Maybe rename `best_root` and `best_slot` to `head_root` and `head_slot` (or, to be even more clear, `head_block_root/slot`)?
I think `head_block_root` and `head_slot` would be clearer.
You should consider spelling out network ID and chain ID as separate fields. Chain ID should be set to a fixed number "1" for ETH, and if others want to run their own chain they can change that ID.
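A HELLO container combining this suggestion with the head-field renames above might look like the sketch below; the field set and types are assumptions for illustration, not the spec.

```python
from dataclasses import dataclass


@dataclass
class Hello:
    network_id: int
    chain_id: int  # fixed to 1 for ETH; other chains pick their own
    latest_finalized_root: bytes
    latest_finalized_epoch: int
    head_block_root: bytes  # renamed from best_root
    head_slot: int          # renamed from best_slot
```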
NetworkId vs ChainId +1.
Also, message body compression algorithm indicator.
Also, upgrade paths for SSZ (I get the feeling this might change on the wire). Maybe a sorted list of serialization method preferences, with the highest mutual being selected?
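The "highest mutual preference" idea could be sketched as follows: each side sends an ordered list (most preferred first), and the first of our entries that the peer also supports wins. Method names are illustrative.

```python
def negotiate(ours, theirs):
    """Pick the first of our preferences the peer also supports."""
    theirs_set = set(theirs)
    for method in ours:
        if method in theirs_set:
            return method
    return None  # no common serialization method
```

For example, `negotiate(["ssz_snappy", "ssz"], ["ssz"])` would select `"ssz"`.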
Still not convinced that we actually need a network id at all and not only a chain id. Especially for RPC as arguably this isn't even a network, just a set of bidirectional connections (as opposed to the gossip layer where we actually relay data).
Maybe clarify that this is (because it can only be) checked by the peer with the higher latest finalized epoch. I tried to come up with a one-sentence fix, but it's probably better to rewrite the whole paragraph from the point of view of one node shaking hands with another node (right now it's talking about both at the same time).
Cool got it, will do.
Some more from the top of my head that might be helpful:
shouldn't we still be connected after `sync finished`? We would still need to propagate any newly proposed blocks to our peers.
generally, the standard way to sync in these kinds of "live" protocols is to start listening to broadcasts, then initiate sync.. else you'll miss packets during sync and will have to recover again.
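The "listen first, then sync" ordering can be sketched as below: subscribe to broadcasts before starting the range sync so that blocks arriving mid-sync are buffered rather than lost. All names are illustrative.

```python
class Gossip:
    """Toy in-process pub/sub standing in for the gossip layer."""

    def __init__(self):
        self.handlers = []

    def subscribe(self, handler):
        self.handlers.append(handler)

    def publish(self, block):
        for handler in self.handlers:
            handler(block)


def sync_with_gossip(gossip, fetch_range):
    buffered = []
    gossip.subscribe(buffered.append)  # subscribe BEFORE syncing
    chain = list(fetch_range())        # the range sync itself
    chain.extend(buffered)             # drain blocks received mid-sync
    return chain
```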
This deals with the beacon chain only, so there are no shards. I think we should have a completely separate protocol and separate connections for shard networks.
Do we need the root? It seems redundant to me, except for the case of chain reorgs which shouldn't happen frequently at sync (and even then, it's probably better to get blocks from the current chain that we'll be able to use later, instead of getting outdated ones).
we need a mechanism for recovering blocks, in case something is lost or the client goes offline for a short bit and loses a few (computer went to sleep / ISP went down for 10 minutes).
I argue in the original issue (#692 (comment)) that it's often natural to request blocks backwards for this reason: the data structure we're syncing is a singly linked list pointing backwards in time, and we receive attestations and blocks that let us discover heads "naturally" by listening to the broadcasts. With a `block_root+previous_n_blocks` kind of request we can both sync and recover, and for example use attestations to discover "viable" heads to work on, from a sync or recovery perspective. Indeed, negotiating finalized epochs in the handshake is somewhat redundant in that case, albeit a nice optimization (except for the chain id) - we could equally well request blocks from the peer that gossiped us the block or attestation whose parent we're missing - they should not be gossiping attestations they have not linked to a finalized epoch of value.
Interesting! To summarize my understanding of your comment: Syncing forwards is safer as we can verify each block immediately when we receive it, but syncing backwards is more efficient/doesn't require additional database indexing (and I guess syncing forwards may require a negotiating phase to discover the best shared block). You're proposing to interpret the fact that I see lots of attestations on top of my sync peer's head flying around the network as evidence that their head is valid? And therefore, I'd be pretty safe syncing backwards?
That sounds reasonable. My original concern was that this requires me to know (at least some good fraction of) the validator set as otherwise my sync peer could create lots of fraudulent attestations for free that I have no chance of verifying. But I would notice this if I have at least one single honest peer (if I try to sync from them or compare the attestations coming from them).
Do you think having only a backwards sync is fine, or do we need both (e.g. for highly adversarial environments, or resource-constrained devices that don't participate in gossiping)?
In terms of network / bandwidth, I'd say it's about the same, but there are some nuances: a `(latest-head, known_slot_number)` request ("give me the block you consider to be the head, and history back to slot N") could alleviate this race, but then the server selects the head.
In terms of client implementations, I think of backward sync as biased to make it cheaper for the server: the server already has the data necessary - also because the head is kept hot - while the client has to keep a chain of "unknown" blocks around / can't validate eagerly. An additional rule that the response must be forward-ordered could help the client apply / validate the blocks eagerly.
The backwards sync can be seen as more passive/reactive/lazy, while forward sync is more active.
right. the assumption rests on several premises (thanks @djrtwo!):
I'm not sure :) I'm curious to hear feedback on this point, but here are some thoughts:
It seems reasonable to sync backwards from the latest received gossiped block (at least as an initial implementation)
Do we really need `start_slot`? If we give clients the option to request a block by either `start_slot` or `start_root`, that forces us to maintain a lookup or search mechanism for both. If we are saying that both fields (`start_slot` and `start_root`) are required to sync, then I would disagree: we should be able to simply perform a lookup by `block_root` and walk the chain backwards until we reach `max_headers`.
or even better, latest gossiped attestation
I would say that if we go with backwards sync, we should not implement forwards sync here or elsewhere unless there's a strong case for that direction. Having to implement both directions negates some of the benefits of backward sync and adds implementation surface.
It is quite possible to add forward sync in a later version of the protocol as well should it prove necessary.
I can dig that
@arnetheduck or anyone else: why do we need `start_slot`?
I note BlockHeader is not defined in the beacon chain spec. I opened a PR to define it as a struct.
If it's not needed specifically for the spec, we could also just define it here.
Just like in LES, I would use different method ids for requests and responses. So it's possible for me to send you proactively blocks and headers using RPC, and you don't need to know about it in advance.
It seems to me that when everything is going smoothly, block bodies consist of very few attestations (they should be combined by then) and a few minor items like the transfers etc. Has anyone looked at the numbers to see how much value there is in having separate requests for headers and bodies? Requesting headers then bodies creates additional round-trips, which are a cost of their own.