-
From my perspective, there should be feature parity between both of these mechanisms. It really comes down to how you use rust-libp2p. If you are developing a reusable and hence pluggable behaviour, creating a new `NetworkBehaviour` is the way to go. For prototyping and application development, I tend to compose existing behaviours together through the custom derive, forward all events by combining them in enums, and poll the swarm in an event loop, matching on all emitted events and reacting accordingly. Your use case sounds like you could make use of either, given that you are hiding the libp2p details behind your own API anyway.
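As a sketch of what that derive-based composition looks like (hedged: module paths and derive attributes are taken from a libp2p version around 0.41 and have changed across releases; `MyBehaviour` and `Event` are illustrative names):

```rust
use libp2p::{
    identify::{Identify, IdentifyEvent},
    ping::{Ping, PingEvent},
    NetworkBehaviour,
};

// Compose existing behaviours; the derive generates the plumbing that
// forwards their events into the combined `Event` enum below.
#[derive(NetworkBehaviour)]
#[behaviour(out_event = "Event", event_process = false)]
struct MyBehaviour {
    ping: Ping,
    identify: Identify,
}

// One variant per composed sub-behaviour.
enum Event {
    Ping(PingEvent),
    Identify(IdentifyEvent),
}

impl From<PingEvent> for Event {
    fn from(e: PingEvent) -> Self {
        Event::Ping(e)
    }
}

impl From<IdentifyEvent> for Event {
    fn from(e: IdentifyEvent) -> Self {
        Event::Identify(e)
    }
}
```

The application then polls the swarm in an event loop and matches on `SwarmEvent::Behaviour(Event::…)` alongside the connection-level variants (a sketch of such a loop appears further down in the thread).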
-
That makes sense to me, I agree with the intention. Regarding the implementation, especially in light of the inclusion of …
-
Agreed with @thomaseizinger that the two should have feature parity. Thus, please file a bug or propose a patch for any discrepancy.
> The handler would be an exception, as we want to return the handler to its creator (the `NetworkBehaviour`).
I am not sure I follow. The returned … (see lines 192 to 198 at commit e19391e).
-
I’ll have to think more about the role of the Handler. On the last point: my assumption is that I get an `OutgoingConnectionError` for every dial that is started, and I’d like to raise the `Unreachable` event only when the network transitions from “some dials in progress” to “no dials in progress, all failed”. The issue here is keeping track of ongoing dial attempts from within a sub-behaviour, since the swarm event only contains the `PeerId` and not the `DialOpts` (I may have a general dial running plus one with an address that is not yet known to the swarm, for validation).
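To make that book-keeping concrete, here is a hypothetical sketch of what the application (or a sub-behaviour) has to track by hand today; note that, keyed by `PeerId` alone, it cannot tell the general dial apart from the address-specific validation dial mentioned above (`DialTracker` and its methods are made-up names, not a libp2p API):

```rust
use std::collections::HashMap;

use libp2p::PeerId;

/// Hypothetical book-keeping for in-flight dials, maintained by the caller;
/// the swarm itself does not expose this information.
#[derive(Debug, Default)]
struct DialTracker {
    in_flight: HashMap<PeerId, usize>,
}

impl DialTracker {
    /// Call right after each `Swarm::dial` for `peer`.
    fn dial_started(&mut self, peer: PeerId) {
        *self.in_flight.entry(peer).or_insert(0) += 1;
    }

    /// Call on `SwarmEvent::OutgoingConnectionError`; returns `true` when
    /// this was the last outstanding dial, i.e. the moment to raise an
    /// `Unreachable(peer)`-style event.
    fn dial_failed(&mut self, peer: &PeerId) -> bool {
        match self.in_flight.get_mut(peer) {
            Some(n) if *n > 1 => {
                *n -= 1;
                false
            }
            Some(_) => {
                self.in_flight.remove(peer);
                true
            }
            None => false,
        }
    }

    /// Call on `SwarmEvent::ConnectionEstablished` to clear the book-keeping.
    fn dial_succeeded(&mut self, peer: &PeerId) {
        self.in_flight.remove(peer);
    }
}
```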
-
Hi everyone!
I’m currently working on bringing ipfs-embed up to date with libp2p 0.41; we used a fork for a while due to an unresolved issue with TCP simultaneous open that I’m now working around in a completely different way. My perspective on ipfs-embed (which may be markedly different from @dvc94ch’s, who created it) is that it basically ties existing transports and protocols into a neat bundle of IPFS functionality and offers a shim layer to shield users from changes to the details. The main additional feature it offers is the address book, which I have now completely overhauled, also due to the nature of the aforementioned workaround: the basic idea is that peer addresses are only considered “confirmed” (and used for general dialling) once a successful address-based dialling attempt has been made; an unsuccessful attempt will remove the address (it may be added again and confirmed later). This procedure allows me to switch off PortReuse, i.e. outgoing connections now originate from random ports instead of the listen port, which in turn removes the danger of TCP sim-open.
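For illustration, a minimal sketch of that confirmation scheme as described above (the types and method names are made up for this sketch, they are not ipfs-embed’s actual API):

```rust
use std::collections::HashMap;

use libp2p::{Multiaddr, PeerId};

/// Sketch of the "confirm on successful dial" address book.
#[derive(Clone, Copy, PartialEq)]
enum AddressState {
    /// Learned (e.g. from identify or manual insertion) but not yet verified.
    Unconfirmed,
    /// A dial to exactly this address succeeded; eligible for general dialling.
    Confirmed,
}

#[derive(Default)]
struct AddressBook {
    addresses: HashMap<PeerId, HashMap<Multiaddr, AddressState>>,
}

impl AddressBook {
    fn add(&mut self, peer: PeerId, addr: Multiaddr) {
        self.addresses
            .entry(peer)
            .or_default()
            .entry(addr)
            .or_insert(AddressState::Unconfirmed);
    }

    /// An address-based dial to `addr` succeeded: mark it confirmed.
    fn dial_succeeded(&mut self, peer: &PeerId, addr: &Multiaddr) {
        if let Some(state) = self
            .addresses
            .get_mut(peer)
            .and_then(|addrs| addrs.get_mut(addr))
        {
            *state = AddressState::Confirmed;
        }
    }

    /// A dial to `addr` failed: remove it (it may be re-added and
    /// confirmed later).
    fn dial_failed(&mut self, peer: &PeerId, addr: &Multiaddr) {
        if let Some(addrs) = self.addresses.get_mut(peer) {
            addrs.remove(addr);
        }
    }

    /// Only confirmed addresses are handed out for general dialling.
    fn confirmed<'a>(&'a self, peer: &PeerId) -> impl Iterator<Item = &'a Multiaddr> + 'a {
        self.addresses
            .get(peer)
            .into_iter()
            .flat_map(|addrs| addrs.iter())
            .filter(|(_, state)| **state == AddressState::Confirmed)
            .map(|(addr, _)| addr)
    }
}
```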
Now to my question.
There are two mechanisms for observing the swarm state: `SwarmEvent`s can be seen from the outside, while (sub)behaviours see callbacks. These two interfaces are driven from the very same piece of code in `Swarm::poll_next_event`. The mechanisms overlap to a large degree, but also differ in notable ways; in practice I need to use both of them to get all the information I need. Since there does not seem to be a technical reason for this discrepancy, I’m asking for clarification of what the respective purpose of each is, so that I can then propose concrete improvements for the shortcomings I currently observe.

BTW: in 0.39 there was a way for the behaviour to find out whether all addresses had been dialled without success, in which case ipfs-embed emitted the `Unreachable(peer)` event. This is no longer possible, since there is no API to figure out whether a given peer is currently being dialled, and the events/callbacks don’t carry this information either. Again, I’ll propose improvements once I understand the purpose of the various pieces involved.

In closing, I’d like to say that 0.41 has markedly improved the visibility into when and why connections go awry; this direction is definitely very good for everyone who operates libp2p-based systems in production! There are some minor things that can be improved, of course, and I’ll open PRs once the dust settles.
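To illustrate the outside view and the gap described above, here is a sketch of the outer event loop (assuming a libp2p version around 0.41; the variant shapes have changed between releases, and `MyBehaviour` stands in for whatever composed behaviour the swarm runs):

```rust
use futures::StreamExt;
use libp2p::swarm::SwarmEvent;

async fn observe(mut swarm: libp2p::Swarm<MyBehaviour>) {
    loop {
        match swarm.select_next_some().await {
            // Only says *that* a dial to this peer was started; the DialOpts
            // (in particular which addresses) are not part of the event.
            SwarmEvent::Dialing(peer_id) => { /* ... */ }
            // Carries the PeerId (if known) and the DialError, but nothing
            // to correlate the failure with a specific dial attempt.
            SwarmEvent::OutgoingConnectionError { peer_id, error, .. } => { /* ... */ }
            SwarmEvent::ConnectionEstablished { peer_id, endpoint, .. } => { /* ... */ }
            // Events forwarded from the composed (sub)behaviours, which see
            // the same underlying happenings through their callbacks.
            SwarmEvent::Behaviour(event) => { /* ... */ }
            _ => {}
        }
    }
}
```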