sending a Request::Peers request to a zcashd node results in a spurious Response::Peers #5421
Comments
Zebra doesn't handle unsolicited address advertisements using the inbound service, for security reasons. If it did, a node could eventually replace all the addresses in its address book by sending large unsolicited advertisements.

Instead, if there is no active peers request, Zebra caches the last unsolicited peers response from each peer, then uses it to provide a synthetic response to the next peers request to that peer. It prefers multi-peer responses to single-peer responses: if an unsolicited response only contains one peer, and there is already a cached multi-peer response, it keeps the multi-peer response. Otherwise, it replaces the older response with the newer one.

So I'm not sure if there is actually a bug here: the actual peers response will get cached, and returned in response to the next peers request. If that doesn't work for the DNS seeder, we could add a config option to turn off the caching, or to ignore single-address responses?
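For concreteness, here's a minimal sketch of the replacement rule described above, assuming the cache is just a list of addresses (the function name and types are illustrative, not Zebra's actual code):

```rust
use std::net::SocketAddr;

/// Sketch of the "prefer multi-peer responses" rule: a single-address
/// advertisement never overwrites a cached multi-address response;
/// otherwise, the newer response wins. Illustrative, not Zebra's code.
fn should_replace_cache(cached: &[SocketAddr], new: &[SocketAddr]) -> bool {
    !(new.len() == 1 && cached.len() > 1)
}
```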
Yeah, an option to ignore caching altogether would be optimal! Thanks for explaining this clearly (I found the PR where this was introduced, #3294), and it does make sense.
After thinking about this a bit more, I'm not sure if disabling caching will always work for you: I think there's a race condition. If you want to get the unfiltered responses, then disabling caching might not help, because an unsolicited multi-peer response can arrive while there is no active peers request.

So with caching, you can re-request the peers and get the cached multi-peer response. Without it, the multi-peer response could get dropped. Instead, we could always send every peers response to the inbound service, regardless of caching or any requests. That way, you could get a reliable stream of peers by making requests, and processing the responses via a separate inbound service for each isolated peer?
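To make the race concrete, here's a simplified sketch of the decision a connection makes when an unsolicited addr message arrives; all names here are illustrative, not Zebra's actual event loop:

```rust
use std::net::SocketAddr;

enum Event {
    /// An unsolicited addr message arrived from the remote peer.
    UnsolicitedAddrs(Vec<SocketAddr>),
    /// Our side issued a Request::Peers.
    PeersRequest,
}

fn handle(event: Event, request_pending: &mut bool, cache: &mut Vec<SocketAddr>, caching: bool) {
    match event {
        Event::UnsolicitedAddrs(addrs) if *request_pending => {
            // A request is in flight: the message is consumed as its response.
            *request_pending = false;
            deliver_response(addrs);
        }
        Event::UnsolicitedAddrs(addrs) => {
            if caching {
                // Kept for the next Request::Peers.
                *cache = addrs;
            }
            // With caching disabled, `addrs` is silently dropped here, so a
            // Request::Peers sent a moment later can never see it -- the race.
        }
        Event::PeersRequest => {
            *request_pending = true;
        }
    }
}

fn deliver_response(_addrs: Vec<SocketAddr>) {}
```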
That rationale makes sense, and thank you for explaining the moving parts here. Regardless of what action we take, these details are very helpful.
I think this would be a pretty invasive change. Such a change would still require me to write code that distinguishes between the (relatively useless, for my purpose) single-address peers responses and the multi-peer responses (since I would be fed every peers response) -- and once I have that code, I can feed it the cached responses and get pretty much the same results.

The thing I want to make sure of is that the peers response cache is per-connection and gets initialized anew each time there's a new connection (that there isn't a global cache or something like that).
Yes, that's correct, the cache is per-connection: a new cache is created for each new connection. Here's the cache implementation:

Struct: zebra/zebra-network/src/peer/connection.rs, lines 465 to 472 in 868ba13

Store: zebra/zebra-network/src/peer/connection.rs, lines 1071 to 1091 in 868ba13

Retrieve: zebra/zebra-network/src/peer/connection.rs, lines 875 to 885 in 868ba13
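For readers without the source handy, here's a rough sketch of the struct/store/retrieve pattern described above; the field and method names are illustrative, not the actual zebra-network code:

```rust
use std::net::SocketAddr;

/// Sketch of a per-connection address cache. The struct is created with
/// each new connection and dropped when the connection closes, so cached
/// addresses are never shared between connections. Illustrative only.
struct Connection {
    cached_addrs: Vec<SocketAddr>,
}

impl Connection {
    /// Store: cache an unsolicited peers response, subject to the
    /// multi-peer preference rule sketched earlier.
    fn cache_addrs(&mut self, addrs: Vec<SocketAddr>) {
        if !(addrs.len() == 1 && self.cached_addrs.len() > 1) {
            self.cached_addrs = addrs;
        }
    }

    /// Retrieve: take the cached addresses to answer a Request::Peers,
    /// leaving the cache empty.
    fn take_cached_addrs(&mut self) -> Vec<SocketAddr> {
        std::mem::take(&mut self.cached_addrs)
    }
}
```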
Perfect, thank you so much for your support and explanations!
I have this working by the way!
The spurious Response::Peers value is unit length and only contains the IP address of the node we connected to. Code to reproduce this is at https://github.com/superbaud/zcash-cotyledon/blob/main/src/main.rs
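One way a client like the seeder could work around this is by filtering out unit-length responses that only echo the remote node's own address. A minimal sketch, assuming the response has already been decoded into a list of socket addresses (the helper name is hypothetical, not from the linked repro code):

```rust
use std::net::SocketAddr;

/// Drops a unit-length peers response that only echoes the address of the
/// node we connected to -- the unsolicited self-advertisement described in
/// this issue. Hypothetical helper, not from the linked repro code.
fn filter_spurious(peer_addr: SocketAddr, response: Vec<SocketAddr>) -> Option<Vec<SocketAddr>> {
    match response.as_slice() {
        [only] if only.ip() == peer_addr.ip() => None,
        _ => Some(response),
    }
}
```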
Looking at the debug.log from the zcashd node, it indeed looks like the spurious Response::Peers comes from the function AdvertizeLocal in zcashd, which sends its own address -- unsolicited -- to the connected peer (with PushAddress).
My guess is that the zebra-network code interprets that unsolicited address as a reply to the Request::Peers request (rather than dropping it on the floor or handing it to the inbound service, if one is present).