Phase 0 Networking Specifications #763
If other comments are accepted, this enveloping can go away.
It seems like we don't need to specify anything here as everything's already either part of the referenced EIP or multiaddr.
Cool, will remove.
Would it be appropriate to file an EIP to allocate a key for multiaddrs in the pre-defined key/value table in the ENR standard?
cc: @fjl
One other consideration maybe: ENR (and Discovery v5) is being designed to support multiple types of identity. It is not going to be a hard requirement that secp256k1 EC pubkeys will identify the node. ENRs will describe the identity type.
libp2p peer IDs are derived from the public key protobuf, which is just key type + bytes. Here's the spec: libp2p/specs#100. Both SECIO and TLS 1.3 validate peer IDs against the pubkey, so following the spec is important or connections will fail.
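A rough sketch of that derivation idea: the peer ID is a multihash over the serialized "key type + key bytes" structure. The protobuf framing below is hand-rolled for illustration and the helper name is an assumption; a real client should use the libp2p crypto protobuf definitions so the bytes match what SECIO / TLS 1.3 will check.

```python
import hashlib

def peer_id_from_pubkey(key_type: int, key_bytes: bytes) -> bytes:
    """Sketch: peer ID = multihash(serialized PublicKey{Type, Data}).

    The two protobuf fields are framed by hand here (field 1 as a varint,
    field 2 as length-delimited bytes) and this only holds for
    key_type < 128 and len(key_bytes) < 128.
    """
    serialized = bytes([0x08, key_type, 0x12, len(key_bytes)]) + key_bytes
    digest = hashlib.sha256(serialized).digest()
    # sha2-256 multihash framing: code 0x12, digest length 0x20, then the digest
    return bytes([0x12, 0x20]) + digest
```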
As I mention in https://github.com/libp2p/specs/pull/100/files#r266291995 - protobuf is not deterministic, and thus not great for feeding into a hashing function or using to determine an ID, unless you used a modified protobuf version that's locked down.
Wouldn't this be handled at the `libp2p` layer? Here we're describing how to construct a `multiaddr` from an ENR; the actual handling of the `multiaddr` itself and the underlying hash construction would be the responsibility of `libp2p`.
it would, but libp2p itself looks broken in this case - we need to keep an eye on that upstream issue so that we don't spread the breakage further.
Does using ENR require decoding RLP in this context?
At least with SSZ it's not easily possible to distinguish between normal and error responses, as one needs to know the schema before being able to decode the message. What one could do is have a general response format and then an embedded result/error blob that can be decoded in a second step. E.g.:
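A minimal sketch of such a two-step format; the field names and decoding helpers here are assumptions, not anything from the spec:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Response:
    is_error: bool   # could equally be a numeric status/error code
    data: bytes      # SSZ-encoded result or error body, kept opaque at this stage

def decode_payload(resp: Response,
                   decode_result: Callable[[bytes], Any],
                   decode_error: Callable[[bytes], Any]) -> Any:
    """Second decoding step: pick the schema once the flag is known.

    decode_result / decode_error stand in for SSZ deserialization against
    the method's result type or a shared error-message type.
    """
    return decode_error(resp.data) if resp.is_error else decode_result(resp.data)
```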
Not really elegant, but I don't really see a better solution (for SSZ that is).
Ah, this is a good point. SSZ doesn't support `null` values either - let me think on this one for a little bit and come up with a solution.
Added an `is_error` boolean field. Note that with SSZ at least you can read the `is_error` field prior to the contents of the `result` via offsets. This allows clients to switch the deserialized type based on the `is_error` value.
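A sketch of why that works, assuming offset-based SSZ serialization for a container of the shape `{is_error: bool, result: List[byte]}` (the exact container is illustrative):

```python
def encode_response(is_error: bool, result: bytes) -> bytes:
    # SSZ lays out fixed-size parts first: the 1-byte boolean, then a
    # 4-byte little-endian offset pointing at the variable-size `result`,
    # whose bytes follow immediately after the fixed part.
    fixed_size = 1 + 4
    return (bytes([1 if is_error else 0])
            + fixed_size.to_bytes(4, "little")
            + result)

def peek_is_error(encoded: bytes) -> bool:
    # The flag sits at a known position, so it can be read without
    # decoding (or even knowing the schema of) `result`.
    return encoded[0] == 1
```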
the alternative would be to use a list - empty if there's no error, and one item if there is.
just to be clear - when encoding or decoding SSZ, there generally exists no provision for skipping fields - even if `is_error` is false, `data` must contain bytes. Embedding a `StatusData` in the `data` field seems to go against the spirit of SSZ generally, as SSZ decoders in general expect to know the exact type of each field, thus it would not fit "naturally" in "normal" SSZ code.

That said, this issue stems from using SSZ in a wire protocol setting for which it is not.. great.
I like the error codes, this seems very useful (e.g. for `block not found` or something). Not sure about the examples below though, shouldn't `0`, `10`, and `20` just result in a disconnect?
sort of like port numbers :)
Taking into account the backwards sync suggested elsewhere, and that we can use attestations as a (strong) heuristic that a block is valid and useful, it seems prudent to include (some) attestations here - instead of simply supplying some data like best_root that cannot be trusted anyway, a recent attestation would help the connecting client both with head / fork selection and to know with a higher degree of certainty that the root sent "makes sense" and should be downloaded.
The details of this are TBD - but probably we're looking at something like `attestations: [Attestation]`, where it's up to the client to choose a representative and recent set (or none, which is also fine, because then one can listen to broadcasts).
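For illustration, one way a connecting client might use such a field; the `attestations` / `best_root` names follow this thread, and the verification callback stands in for whatever signature/committee check the client already has:

```python
def head_looks_plausible(hello, verify_attestation, min_support: int = 3) -> bool:
    """Count verifiable attestations that vote for the advertised head.

    A handful of valid, recent attestations is weak but useful evidence
    that `best_root` refers to a block the network actually cares about.
    """
    votes = [a for a in hello.attestations
             if a.data.beacon_block_root == hello.best_root
             and verify_attestation(a)]
    return len(votes) >= min_support
```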
slots are based on wall time - what's the `best_slot` field for?
pretty sure this is supposed to refer to the slot of the head block. Maybe rename `best_root` and `best_slot` to `head_root` and `head_slot` (or, to be even more clear, `head_block_root`/`head_slot`)?
I think `head_block_root` and `head_slot` would be clearer.
You should consider spelling out network ID and chain ID as separate fields. Chain ID should be set to a fixed number "1" for ETH, and if others want to run their own chain they can change that ID.
NetworkId vs ChainId +1.
Also, a message body compression algorithm indicator.
Also, upgrade paths for SSZ (I get the feeling this might change on the wire) - maybe a sorted list of serialization method preferences, with the highest mutual preference being selected?
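Pulling the suggestions in this thread together, the hello might end up looking something like the sketch below. This is purely illustrative; the field names and types are assumptions drawn from the comments above, not from the spec:

```python
from dataclasses import dataclass

@dataclass
class Hello:
    network_id: int              # which network / set of peers we intend to join
    chain_id: int                # fixed to 1 for ETH; other chains pick another value
    latest_finalized_root: bytes
    latest_finalized_epoch: int
    head_block_root: bytes       # renamed from best_root, per the discussion above
    head_slot: int               # renamed from best_slot
    # possibly also: a compression algorithm indicator and a sorted list of
    # supported serialization methods, as suggested above
```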
Still not convinced that we actually need a network id at all and not only a chain id. Especially for RPC as arguably this isn't even a network, just a set of bidirectional connections (as opposed to the gossip layer where we actually relay data).
Maybe clarify that this is (because it can only be) checked by the peer with the higher latest finalized epoch. I tried to come up with a one-sentence fix, but it's probably better to rewrite the whole paragraph from the point of view of one node shaking hands with another node (right now it's talking about both at the same time).
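A sketch of that check from a single node's point of view; the helper and field names are assumptions:

```python
def remote_finality_consistent(local_store, local_hello, remote_hello) -> bool:
    """Only the side whose latest finalized epoch is >= the remote's can run
    this check, since it needs finalized history up to that epoch."""
    if remote_hello.latest_finalized_epoch > local_hello.latest_finalized_epoch:
        return True  # cannot verify locally; the remote performs the check instead
    expected = local_store.finalized_root_at_epoch(remote_hello.latest_finalized_epoch)
    # A mismatch means the peer is on a different (or invalid) chain: disconnect.
    return expected == remote_hello.latest_finalized_root
```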
Cool got it, will do.
How would this be handled if the clients both have equal `latest_finalized_epoch` and `best_slot`?
then you discard the client because you know it's either at genesis or providing invalid data.. finalization happens several epochs behind `best_slot` in the happy case.
@arnetheduck ah no, I was referring to the case where both clients have the same `best_slot` and `finalized_epoch` together, i.e. FinalizedEpoch A == FinalizedEpoch B and BestSlot A == BestSlot B.
ah. right. I find that the situation is somewhat analogous to receiving a block whose parent is unknown - you have to make a very similar decision there - the information in this hello, just like in the parentless block, is essentially useless from a trust perspective, and you need to turn to other sources.
@djrtwo suggested that attestations might be a good heuristic, as signing one carries risk for the validator that does so. The information here can, from what I can see, be used to quickly disconnect from a client if they're saying the network is different. The rest is advisory, and you're hoping for the best.
Some more from the top of my head that might be helpful:
shouldn't we still be connected after `sync finished`? We would still need to propagate any newly proposed blocks to our peers.
generally, the standard way to sync in these kinds of "live" protocols is to start listening to broadcasts, then initiate sync.. else you'll miss packets during sync and will have to recover again.
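A sketch of that ordering; the node/gossip API below is assumed for illustration, not a real libp2p binding:

```python
async def start(node):
    # 1. Subscribe first, so blocks and attestations broadcast while we are
    #    still syncing are queued rather than missed.
    await node.gossip.subscribe("beacon_block", node.on_block)
    await node.gossip.subscribe("beacon_attestation", node.on_attestation)
    # 2. Then initiate sync; queued items can be applied as soon as their
    #    ancestors are known, instead of triggering another recovery round.
    await node.sync_from_peers()
```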
which beacon block roots? known heads? all?
Do we need the root? It seems redundant to me, except for the case of chain reorgs which shouldn't happen frequently at sync (and even then, it's probably better to get blocks from the current chain that we'll be able to use later, instead of getting outdated ones).
we need a mechanism for recovering blocks, in case something is lost or the client goes offline for a short bit and loses a few (computer went to sleep / ISP went down for 10 minutes).
I argue in the original issue (#692 (comment)) that it's often natural to request blocks backwards for this reason: the data structure we're syncing is a singly linked list pointing backwards in time, and we receive attestations and blocks that let us discover heads "naturally" by listening to the broadcasts. With a `block_root + previous_n_blocks` kind of request we can both sync and recover, and for example use attestations to discover "viable" heads to work on, from a sync or recovery perspective. Indeed, negotiating finalized epochs in the handshake is somewhat redundant in that case, albeit a nice optimization (except for the chain id) - we could equally well request blocks from the peer that gossiped us the block or attestation whose parent we're missing; they should not be gossiping attestations they have not linked to a finalized epoch of value.
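A sketch of what serving such a `block_root + previous_n_blocks` request could look like; the request shape, field names, and the `store.get_block` lookup are assumptions:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BackwardBlocksRequest:
    block_root: bytes        # newest block wanted
    previous_n_blocks: int   # how many ancestors to include before it

def serve_backward_blocks(store, req: BackwardBlocksRequest) -> List["BeaconBlock"]:
    blocks, root = [], req.block_root
    while root is not None and len(blocks) < req.previous_n_blocks + 1:
        block = store.get_block(root)   # lookup by root only; no slot index needed
        if block is None:
            break                        # unknown root: return what we do have
        blocks.append(block)
        root = block.parent_root         # walk the parent links backwards
    # Returning the blocks forward-ordered lets the requester validate eagerly.
    return list(reversed(blocks))
```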
Interesting! To summarize my understanding of your comment: Syncing forwards is safer as we can verify each block immediately when we receive it, but syncing backwards is more efficient/doesn't require additional database indexing (and I guess syncing forwards may require a negotiating phase to discover the best shared block). You're proposing to interpret the fact that I see lots of attestations on top of my sync peer's head flying around the network as evidence that their head is valid? And therefore, I'd be pretty safe syncing backwards?
That sounds reasonable. My original concern was that this requires me to know (at least some good fraction of) the validator set as otherwise my sync peer could create lots of fraudulent attestations for free that I have no chance of verifying. But I would notice this if I have at least one single honest peer (if I try to sync from them or compare the attestations coming from them).
Do you think having only a backwards sync is fine, or do we need both (e.g. for highly adversarial environments, or resource-constrained devices that don't participate in gossiping)?
In terms of network / bandwidth, I'd say it's about the same but there are some nuances:
A `(latest-head, known_slot_number)` request ("give me the block you consider to be the head, and history back to slot N") could alleviate this race, but then the server selects the head.
In terms of client implementations, I think of backward sync as biased to make it cheaper for the server: the server already has the data necessary - also because the head is kept hot - while the client has to keep a chain of "unknown" blocks around / can't validate eagerly. An additional rule that the response must be forward-ordered could help the client apply / validate the blocks eagerly. The backwards sync can be seen as more passive/reactive/lazy while forward sync is more active..
right. the assumption rests on several premises (thanks @djrtwo!):
I'm not sure :) I'm curious to hear feedback on this point, but here are some thoughts:
It seems reasonable to sync backwards from the latest received gossiped block (at least as an initial implementation).
Do we really need `start_slot`? If we give clients the option to request a block by either `start_slot` or `start_root`, that forces us to maintain a lookup or search mechanism for both. If we are saying that both fields (`start_slot` and `start_root`) are required to sync, then I would disagree. We should be able to simply perform a lookup by `block_root` and walk the chain backwards until we reach `max_headers`.
or even better, latest gossiped attestation
I would say that if we go with backwards sync, we should not implement forwards sync here or elsewhere unless there's a strong case for that direction. Having to implement both directions negates some of the benefits of backward sync and adds implementation surface.
It is quite possible to add forward sync in a later version of the protocol as well should it prove necessary.
I can dig that
@arnetheduck or anyone else: why do we need `start_slot`?
I note BlockHeader is not defined in the beacon chain spec. I opened a PR to define it as a struct.
If it's not needed specifically for the spec, we could also just define it here.
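If it were defined here, one possible shape would mirror the `BeaconBlock` fields with the body replaced by its hash tree root. This is a sketch under that assumption, not the definition from the linked PR:

```python
from dataclasses import dataclass

@dataclass
class BlockHeader:
    slot: int
    previous_block_root: bytes   # bytes32: root of the parent block
    state_root: bytes            # bytes32: post-state root
    block_body_root: bytes       # bytes32: hash_tree_root of the block body
    signature: bytes             # bytes96: proposer's BLS signature
```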
Just like in LES, I would use different method ids for requests and responses. That way it's possible for me to proactively send you blocks and headers using RPC, and you don't need to know about it in advance.
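For illustration, that could look something like the sketch below; the numbering is made up, not taken from LES or from this spec:

```python
from enum import IntEnum

class MethodId(IntEnum):
    HELLO = 0
    GOODBYE = 1
    GET_BLOCK_HEADERS = 10
    BLOCK_HEADERS = 11      # may also be sent unsolicited, not only as a reply
    GET_BLOCK_BODIES = 12
    BLOCK_BODIES = 13       # likewise: a peer can push bodies proactively
```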
It seems to me that when everything is going smoothly, block bodies consist of very few attestations (they should be combined by then) and a few minor items like the transfers etc. Has anyone looked at the numbers to see how much value there is in having separate requests for headers and bodies? Requesting headers then bodies creates additional round-trips, which are a cost of their own.