
ping: Repeated errors emitted from ping behaviour #4410

Closed
dariusc93 opened this issue Aug 30, 2023 · 11 comments · Fixed by #4423


Summary

I don't know whether this is a bug in the ping protocol's connection handler or an interoperability issue between go-libp2p and rust-libp2p, but dialing a go-libp2p node discovered via mDNS (in my case indirectly, by adding an explicit peer to gossipsub) causes one or more of the connections to repeatedly report a "connection is closed" and/or "eof" error from the ping event. The other node does support the ping protocol, yet the error event keeps being emitted until the connection is closed via Swarm::close_connection. However, closing the connection may yield undesirable results when interacting with the node in some way if the connection handler is still operational in a different behaviour or protocol.
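For context, the workaround looks roughly like the sketch below: track ConnectionIds as connections are established, then close them when a ping failure arrives. This is illustrative, not my actual reproduction code — `Behaviour`/`BehaviourEvent` stand in for whatever your derive(NetworkBehaviour) struct generates, and it assumes libp2p 0.52-era APIs where ping::Event does not itself carry a ConnectionId:

```rust
use std::collections::HashMap;

use futures::StreamExt;
use libp2p::{
    ping,
    swarm::{ConnectionId, SwarmEvent},
    PeerId,
};

// `Behaviour` is assumed to be a derive(NetworkBehaviour) struct with a
// `ping` field; `BehaviourEvent` is the enum that derive generates for it.
async fn drive(mut swarm: libp2p::Swarm<Behaviour>) {
    // Track live connections per peer so they can be closed by id later.
    let mut conns: HashMap<PeerId, Vec<ConnectionId>> = HashMap::new();
    loop {
        match swarm.select_next_some().await {
            SwarmEvent::ConnectionEstablished { peer_id, connection_id, .. } => {
                conns.entry(peer_id).or_default().push(connection_id);
            }
            SwarmEvent::Behaviour(BehaviourEvent::Ping(ping::Event {
                peer,
                result: Err(failure),
                ..
            })) => {
                // Without this, the same ping failure is emitted repeatedly.
                eprintln!("ping failure for {peer}: {failure}; closing connections");
                for id in conns.remove(&peer).unwrap_or_default() {
                    swarm.close_connection(id);
                }
            }
            _ => {}
        }
    }
}
```

Alternatively, Swarm::disconnect_peer_id(peer) closes every connection to the peer without needing to track ConnectionIds, at the cost of also tearing down connections that other behaviours may still be using.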

Debug output (a lot is omitted because the error repeats; I can provide the sample code used if requested):

[2023-08-30T00:46:02Z DEBUG libp2p_tcp] listening on 0.0.0.0:0
[2023-08-30T00:46:02Z INFO  libp2p_mdns::behaviour::iface] creating instance on iface 10.0.24.180
[2023-08-30T00:46:02Z DEBUG libp2p_tcp] New listen address: /ip4/127.0.0.1/tcp/41635
[2023-08-30T00:46:02Z DEBUG libp2p_swarm] Listener ListenerId(1); New address: "/ip4/127.0.0.1/tcp/41635"
Listening on /ip4/127.0.0.1/tcp/41635
[2023-08-30T00:46:02Z DEBUG libp2p_tcp] New listen address: /ip4/10.0.24.180/tcp/41635
[2023-08-30T00:46:02Z DEBUG libp2p_swarm] Listener ListenerId(1); New address: "/ip4/10.0.24.180/tcp/41635"
Listening on /ip4/10.0.24.180/tcp/41635
[2023-08-30T00:46:02Z INFO  libp2p_mdns::behaviour] discovered: 12D3KooWNynQdTDFKLFRQTCZD4VxCis7RYXbBmd5Mtt57VpZ6Gtx /ip4/10.0.24.180/udp/4001/quic
[2023-08-30T00:46:02Z INFO  libp2p_mdns::behaviour] discovered: 12D3KooWNynQdTDFKLFRQTCZD4VxCis7RYXbBmd5Mtt57VpZ6Gtx /ip4/10.0.24.180/udp/4001/quic-v1
[2023-08-30T00:46:02Z INFO  libp2p_mdns::behaviour] discovered: 12D3KooWNynQdTDFKLFRQTCZD4VxCis7RYXbBmd5Mtt57VpZ6Gtx /ip4/10.0.24.180/tcp/4001
[2023-08-30T00:46:02Z DEBUG libp2p_gossipsub::behaviour] Adding explicit peer 12D3KooWNynQdTDFKLFRQTCZD4VxCis7RYXbBmd5Mtt57VpZ6Gtx
[2023-08-30T00:46:02Z DEBUG libp2p_gossipsub::behaviour] Connecting to explicit peer PeerId("12D3KooWNynQdTDFKLFRQTCZD4VxCis7RYXbBmd5Mtt57VpZ6Gtx")
[2023-08-30T00:46:02Z DEBUG libp2p_gossipsub::behaviour] Adding explicit peer 12D3KooWNynQdTDFKLFRQTCZD4VxCis7RYXbBmd5Mtt57VpZ6Gtx
[2023-08-30T00:46:02Z DEBUG libp2p_gossipsub::behaviour] Connecting to explicit peer PeerId("12D3KooWNynQdTDFKLFRQTCZD4VxCis7RYXbBmd5Mtt57VpZ6Gtx")
[2023-08-30T00:46:02Z DEBUG libp2p_gossipsub::behaviour] Adding explicit peer 12D3KooWNynQdTDFKLFRQTCZD4VxCis7RYXbBmd5Mtt57VpZ6Gtx
[2023-08-30T00:46:02Z DEBUG libp2p_gossipsub::behaviour] Connecting to explicit peer PeerId("12D3KooWNynQdTDFKLFRQTCZD4VxCis7RYXbBmd5Mtt57VpZ6Gtx")
[2023-08-30T00:46:02Z DEBUG libp2p_tcp] dialing 10.0.24.180:4001
[2023-08-30T00:46:02Z DEBUG libp2p_tcp] dialing 10.0.24.180:4001
[2023-08-30T00:46:02Z DEBUG libp2p_tcp] dialing 10.0.24.180:4001
[2023-08-30T00:46:02Z DEBUG libp2p_kad::handler] Operating in client-mode on new outbound connection to 12D3KooWNynQdTDFKLFRQTCZD4VxCis7RYXbBmd5Mtt57VpZ6Gtx
[2023-08-30T00:46:02Z DEBUG libp2p_swarm] Connection established: PeerId("12D3KooWNynQdTDFKLFRQTCZD4VxCis7RYXbBmd5Mtt57VpZ6Gtx") Dialer { address: "/ip4/10.0.24.180/tcp/4001/p2p/12D3KooWNynQdTDFKLFRQTCZD4VxCis7RYXbBmd5Mtt57VpZ6Gtx", role_override: Dialer }; Total (peer): 1.
[2023-08-30T00:46:02Z DEBUG libp2p_gossipsub::behaviour] New peer connected: 12D3KooWNynQdTDFKLFRQTCZD4VxCis7RYXbBmd5Mtt57VpZ6Gtx
PeerId: 12D3KooWNynQdTDFKLFRQTCZD4VxCis7RYXbBmd5Mtt57VpZ6Gtx - Connection ConnectionId(2) - point Dialer { address: "/ip4/10.0.24.180/tcp/4001/p2p/12D3KooWNynQdTDFKLFRQTCZD4VxCis7RYXbBmd5Mtt57VpZ6Gtx", role_override: Dialer }
[2023-08-30T00:46:02Z DEBUG libp2p_kad::handler] Operating in client-mode on new outbound connection to 12D3KooWNynQdTDFKLFRQTCZD4VxCis7RYXbBmd5Mtt57VpZ6Gtx
[2023-08-30T00:46:02Z DEBUG libp2p_swarm] Connection established: PeerId("12D3KooWNynQdTDFKLFRQTCZD4VxCis7RYXbBmd5Mtt57VpZ6Gtx") Dialer { address: "/ip4/10.0.24.180/tcp/4001/p2p/12D3KooWNynQdTDFKLFRQTCZD4VxCis7RYXbBmd5Mtt57VpZ6Gtx", role_override: Dialer }; Total (peer): 2.
PeerId: 12D3KooWNynQdTDFKLFRQTCZD4VxCis7RYXbBmd5Mtt57VpZ6Gtx - Connection ConnectionId(1) - point Dialer { address: "/ip4/10.0.24.180/tcp/4001/p2p/12D3KooWNynQdTDFKLFRQTCZD4VxCis7RYXbBmd5Mtt57VpZ6Gtx", role_override: Dialer }
[2023-08-30T00:46:02Z DEBUG libp2p_kad::handler] Operating in client-mode on new outbound connection to 12D3KooWNynQdTDFKLFRQTCZD4VxCis7RYXbBmd5Mtt57VpZ6Gtx
[2023-08-30T00:46:02Z DEBUG libp2p_swarm] Connection established: PeerId("12D3KooWNynQdTDFKLFRQTCZD4VxCis7RYXbBmd5Mtt57VpZ6Gtx") Dialer { address: "/ip4/10.0.24.180/tcp/4001/p2p/12D3KooWNynQdTDFKLFRQTCZD4VxCis7RYXbBmd5Mtt57VpZ6Gtx", role_override: Dialer }; Total (peer): 3.
PeerId: 12D3KooWNynQdTDFKLFRQTCZD4VxCis7RYXbBmd5Mtt57VpZ6Gtx - Connection ConnectionId(3) - point Dialer { address: "/ip4/10.0.24.180/tcp/4001/p2p/12D3KooWNynQdTDFKLFRQTCZD4VxCis7RYXbBmd5Mtt57VpZ6Gtx", role_override: Dialer }
[2023-08-30T00:46:02Z DEBUG libp2p_ping::handler] latency to 12D3KooWNynQdTDFKLFRQTCZD4VxCis7RYXbBmd5Mtt57VpZ6Gtx is 0ms
[2023-08-30T00:46:02Z DEBUG libp2p_ping::handler] Ping failure: Other { error: Kind(UnexpectedEof) }
[2023-08-30T00:46:02Z DEBUG libp2p_ping::handler] latency to 12D3KooWNynQdTDFKLFRQTCZD4VxCis7RYXbBmd5Mtt57VpZ6Gtx is 0ms
[2023-08-30T00:46:02Z DEBUG libp2p_ping::handler] Ping failure: Other { error: Custom { kind: WriteZero, error: "7474e9b7/5: connection is closed" } }
[2023-08-30T00:46:02Z DEBUG libp2p_ping::handler] Ping failure: Other { error: Custom { kind: WriteZero, error: "7474e9b7/7: connection is closed" } }
[2023-08-30T00:46:02Z DEBUG libp2p_ping::handler] Ping failure: Other { error: Custom { kind: WriteZero, error: "7474e9b7/9: connection is closed" } }
[2023-08-30T00:46:02Z DEBUG libp2p_ping::handler] Ping failure: Other { error: Custom { kind: WriteZero, error: "7474e9b7/11: connection is closed" } }
... (the same "connection is closed" ping failure repeats with incrementing stream IDs, from 7474e9b7/13 through 7474e9b7/369) ...
[2023-08-30T00:46:03Z DEBUG libp2p_ping::handler] Ping failure: Other { error: Custom { kind: WriteZero, error: "7474e9b7/371: connection is closed" } }
[2023-08-30T00:46:03Z DEBUG libp2p_ping::handler] Ping failure: Other { error: Custom { kind: WriteZero, error: "7474e9b7/373: connection is closed" } }
[2023-08-30T00:46:03Z DEBUG libp2p_ping::handler] Ping failure: Other { error: Custom { kind: WriteZero, error: "7474e9b7/375: connection is closed" } }
[2023-08-30T00:46:03Z DEBUG libp2p_ping::handler] Ping failure: Other { error: Custom { kind: WriteZero, error: "7474e9b7/377: connection is closed" } }
[2023-08-30T00:46:03Z DEBUG libp2p_ping::handler] Ping failure: Other { error: Custom { kind: WriteZero, error: "7474e9b7/379: connection is closed" } }
[2023-08-30T00:46:03Z DEBUG libp2p_ping::handler] Ping failure: Other { error: Custom { kind: WriteZero, error: "7474e9b7/381: connection is closed" } }
[2023-08-30T00:46:03Z DEBUG libp2p_ping::handler] Ping failure: Other { error: Custom { kind: WriteZero, error: "7474e9b7/383: connection is closed" } }
[2023-08-30T00:46:03Z DEBUG libp2p_ping::handler] Ping failure: Other { error: Custom { kind: WriteZero, error: "7474e9b7/385: connection is closed" } }
[2023-08-30T00:46:03Z DEBUG libp2p_ping::handler] Ping failure: Other { error: Custom { kind: WriteZero, error: "7474e9b7/387: connection is closed" } }
[2023-08-30T00:46:03Z DEBUG libp2p_ping::handler] Ping failure: Other { error: Custom { kind: WriteZero, error: "7474e9b7/389: connection is closed" } }
[2023-08-30T00:46:03Z DEBUG libp2p_ping::handler] Ping failure: Other { error: Custom { kind: WriteZero, error: "7474e9b7/391: connection is closed" } }
[2023-08-30T00:46:03Z DEBUG libp2p_ping::handler] Ping failure: Other { error: Kind(UnexpectedEof) }
[2023-08-30T00:46:03Z DEBUG libp2p_ping::handler] Ping failure: Other { error: Custom { kind: WriteZero, error: "7474e9b7/395: connection is closed" } }
[2023-08-30T00:46:03Z DEBUG libp2p_ping::handler] Ping failure: Other { error: Custom { kind: WriteZero, error: "7474e9b7/397: connection is closed" } }
[2023-08-30T00:46:03Z DEBUG libp2p_ping::handler] Ping failure: Other { error: Custom { kind: WriteZero, error: "7474e9b7/399: connection is closed" } }
[2023-08-30T00:46:03Z DEBUG libp2p_ping::handler] Ping failure: Other { error: Custom { kind: WriteZero, error: "7474e9b7/401: connection is closed" } }

Possible Solution

Maybe change the state of the connection handler to inactive to prevent more errors from being reported to the swarm, especially if the failure rate is high or never resets (anything over 10 consecutive failures might be worth making it inactive?), though this doesn't exactly target the cause of the "connection is closed" error.
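The idea above could be sketched as a consecutive-failure counter in the handler. This is a hypothetical, self-contained model, not the actual libp2p-ping internals: `State`, `Handler`, and `MAX_CONSECUTIVE_FAILURES` are illustrative stand-ins.

```rust
// Hypothetical sketch: count consecutive ping failures and switch the
// handler to an inactive state once a threshold is crossed, so it stops
// requesting new outbound streams and spamming the swarm with errors.

const MAX_CONSECUTIVE_FAILURES: u32 = 10;

#[derive(Debug, PartialEq)]
enum State {
    Active,
    Inactive { reported: bool },
}

struct Handler {
    state: State,
    consecutive_failures: u32,
}

impl Handler {
    fn new() -> Self {
        Handler {
            state: State::Active,
            consecutive_failures: 0,
        }
    }

    /// Called whenever an outbound ping attempt fails.
    fn on_failure(&mut self) {
        self.consecutive_failures += 1;
        if self.consecutive_failures >= MAX_CONSECUTIVE_FAILURES {
            // Stop requesting new streams; the transition is reported once.
            self.state = State::Inactive { reported: false };
        }
    }

    /// Called on a successful ping; resets the failure rate.
    fn on_success(&mut self) {
        self.consecutive_failures = 0;
        self.state = State::Active;
    }

    fn is_active(&self) -> bool {
        self.state == State::Active
    }
}

fn main() {
    let mut handler = Handler::new();
    for _ in 0..10 {
        handler.on_failure();
    }
    assert!(!handler.is_active());
}
```

A successful ping resets the counter, so only an unbroken run of failures trips the threshold.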

Version

  • libp2p version (version number, commit, or branch): 0.52.3

Would you like to work on fixing this bug?

Maybe, depending on the best way to go about this besides changing the state.

@dariusc93
Copy link
Member Author

Also to add, a workaround for me would be to make sure that the peer condition isn't NotDialing (likely through my own means), which would ensure there is only a single connection attempt. Even then, though, two rust-libp2p nodes with multiple connections do not exhibit this issue.

@mxinden
Copy link
Member

mxinden commented Aug 30, 2023

Thank you for reporting @dariusc93.

Seems like the ping handler is continuously trying to open a new stream to the remote:

None => {
    self.outbound = Some(OutboundState::OpenStream);
    let protocol = SubstreamProtocol::new(ReadyUpgrade::new(PROTOCOL_NAME), ());
    return Poll::Ready(ConnectionHandlerEvent::OutboundSubstreamRequest {
        protocol,
    });
}

for that new outbound stream to then fail.

fn on_dial_upgrade_error(
    &mut self,
    DialUpgradeError { error, .. }: DialUpgradeError<
        <Self as ConnectionHandler>::OutboundOpenInfo,
        <Self as ConnectionHandler>::OutboundProtocol,
    >,
) {
    self.outbound = None; // Request a new substream on the next `poll`.
    let error = match error {
        StreamUpgradeError::NegotiationFailed => {
            debug_assert_eq!(self.state, State::Active);
            self.state = State::Inactive { reported: false };
            return;
        }
        // Note: This timeout only covers protocol negotiation.
        StreamUpgradeError::Timeout => Failure::Other {
            error: Box::new(std::io::Error::new(
                std::io::ErrorKind::TimedOut,
                "ping protocol negotiation timed out",
            )),
        },
        StreamUpgradeError::Apply(e) => void::unreachable(e),
        StreamUpgradeError::Io(e) => Failure::Other { error: Box::new(e) },
    };
    self.pending_errors.push_front(error);
}

Ideally we would switch into State::Inactive.

Do you know why the go-libp2p node drops the stream instead of denying the ping protocol?

Can you post the logs of the go node here?

@thomaseizinger
Copy link
Contributor

Ideally we would switch into State::Inactive.

We do that already if the negotiation fails. I don't think we should switch to Inactive on every IO error. We silence the first IO error to be compatible with implementations that use one stream per ping. If that is not the case, then that is a bug :)

I am not sure we should do anything different for subsequent IO errors other than reporting them? As of #3947, it is the user's responsibility to implement a connection management policy based on the ping protocol.
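Such a user-side connection management policy could look like the following self-contained sketch. `ConnectionId` and the bookkeeping are simplified stand-ins, not the real Swarm API; the application is assumed to feed ping results in and call `Swarm::close_connection` itself when the policy says so.

```rust
// Illustrative sketch of an application-level policy: the user, not the
// ping behaviour, decides when repeated ping failures warrant closing a
// connection. The threshold and type names are hypothetical.

use std::collections::HashMap;

const MAX_FAILURES_BEFORE_CLOSE: u32 = 10;

type ConnectionId = u64;

#[derive(Default)]
struct PingPolicy {
    failures: HashMap<ConnectionId, u32>,
}

impl PingPolicy {
    /// Record a ping result; returns `true` if the connection should be closed.
    fn on_ping_result(&mut self, conn: ConnectionId, ok: bool) -> bool {
        if ok {
            // A success resets the failure count for this connection.
            self.failures.remove(&conn);
            return false;
        }
        let count = self.failures.entry(conn).or_insert(0);
        *count += 1;
        *count >= MAX_FAILURES_BEFORE_CLOSE
    }
}

fn main() {
    let mut policy = PingPolicy::default();
    let conn = 1;
    for _ in 0..9 {
        assert!(!policy.on_ping_result(conn, false));
    }
    // Tenth consecutive failure: the application would now call
    // `Swarm::close_connection(conn)`.
    assert!(policy.on_ping_result(conn, false));
}
```

This keeps the ping behaviour a pure reporter of events while the policy (and its threshold) stays entirely in user code.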

@dariusc93
Copy link
Member Author

Thank you for reporting @dariusc93.

Seems like the ping handler is continuously trying to open a new stream to the remote, for that new outbound stream to then fail.

Ideally we would switch into State::Inactive.

Do you know why the go-libp2p node drops the stream instead of denying the ping protocol?

Can you post the logs of the go node here?

I haven't looked too much into it yet, but I suspect that it's related to the multiple connections being made, which is something go-libp2p doesn't like from what I gathered, at least as far as the ping behaviour is concerned. I can look into it more a little later and collect the logs from the go-libp2p side.

I am not sure we should do anything different for subsequent IO errors other than reporting them? As of #3947, it is the user's responsibility to implement a connection management policy based on the ping protocol.

While we can report them, if the failure rate continues to go up for that handler without resetting, I do believe the handler should be made inactive for obvious reasons rather than waiting for the connection to be closed by the user, especially if that connection is successfully active in other behaviours (which seems to be the case from what I can tell). Of course, if the user wishes to close it, that is their choice too, but most beginners may not know that this is the case and may end up subjected to a DoS from the recurring events if they run into this issue.

@thomaseizinger
Copy link
Contributor

if the failure rate continue to go up for that handler without resetting

What would you say is a reasonable setting for this? I am struggling to come up with a good value here.

but most beginners may not know that to be the case and may end up being subjected to a DoS due to the recurring events if they run into this issue.

What DoS case are you seeing? The worst I can think of is some kind of log spam if the user decided to log every error that is coming in.

@dariusc93
Copy link
Member Author

What would you say is a reasonable setting for this? I am struggling to come up with a good value here.

I would probably say anything over 10 might be reasonable, since I don't think there is a case where there would be more than 10 consecutive errors before the rate actually resets, although setting it as high as 50 could also be reasonable.

What DoS case are you seeing? The worst I can think of is some kind of log spam if the user decided to log every error that is coming in.

From what I've observed, the biggest one would be high CPU usage. If there is more than one handler reporting errors as described above (which can be the case), the usage would be even higher, with the potential of rendering the host unusable or unstable (assuming the node doesn't get killed by the system first).

@dariusc93
Copy link
Member Author

Do you know why the go-libp2p node drops the stream instead of denying the ping protocol?

Can you post the logs of the go node here?

2023-08-31T11:41:44.354-0400	DEBUG	rcmgr	resource-manager/limit.go:84	initializing new limiter with config	{"limits": {}}
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] PROVIDE	transport.Upgrader <= fx.Annotate(github.com/libp2p/go-libp2p/p2p/net/upgrader.New(), fx.ParamTags(["name:\"security\""])
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] SUPPLY	[]upgrader.StreamMuxer
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] SUPPLY	peer.ID
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] PROVIDE	host.Host <= github.com/libp2p/go-libp2p/config.(*Config).addTransports.func2()
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] PROVIDE	crypto.PrivKey <= github.com/libp2p/go-libp2p/config.(*Config).addTransports.func3()
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] PROVIDE	connmgr.ConnectionGater <= github.com/libp2p/go-libp2p/config.(*Config).addTransports.func4()
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] PROVIDE	pnet.PSK <= github.com/libp2p/go-libp2p/config.(*Config).addTransports.func5()
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] PROVIDE	network.ResourceManager <= github.com/libp2p/go-libp2p/config.(*Config).addTransports.func6()
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] PROVIDE	*madns.Resolver <= github.com/libp2p/go-libp2p/config.(*Config).addTransports.func7()
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] PROVIDE	transport.Transport[group = "transport"] <= fx.Annotate(github.com/libp2p/go-libp2p/p2p/transport/tcp.NewTCPTransport(), fx.ResultTags(["group:\"transport\""]), fx.As([[transport.Transport]])
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] PROVIDE	transport.Transport[group = "transport"] <= fx.Annotate(github.com/libp2p/go-libp2p/p2p/transport/quic.NewTransport(), fx.ResultTags(["group:\"transport\""]), fx.As([[transport.Transport]])
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] PROVIDE	transport.Transport[group = "transport"] <= fx.Annotate(github.com/libp2p/go-libp2p/p2p/transport/websocket.New(), fx.ResultTags(["group:\"transport\""]), fx.As([[transport.Transport]])
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] PROVIDE	transport.Transport[group = "transport"] <= fx.Annotate(github.com/libp2p/go-libp2p/p2p/transport/webtransport.New(), fx.ResultTags(["group:\"transport\""]), fx.As([[transport.Transport]])
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] SUPPLY	protocol.ID
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] PROVIDE	sec.SecureTransport[group = "security_unordered"] <= fx.Annotate(github.com/libp2p/go-libp2p/p2p/security/tls.New(), fx.ParamTags(["name:\"security_/tls/1.0.0\""]), fx.ResultTags(["group:\"security_unordered\""]), fx.As([[sec.SecureTransport]])
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] SUPPLY	protocol.ID
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] PROVIDE	sec.SecureTransport[group = "security_unordered"] <= fx.Annotate(github.com/libp2p/go-libp2p/p2p/security/noise.New(), fx.ParamTags(["name:\"security_/noise\""]), fx.ResultTags(["group:\"security_unordered\""]), fx.As([[sec.SecureTransport]])
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] PROVIDE	[]sec.SecureTransport[name = "security"] <= fx.Annotate(github.com/libp2p/go-libp2p/config.(*Config).addTransports.func9(), fx.ParamTags(["group:\"security_unordered\""]), fx.ResultTags(["name:\"security\""])
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] PROVIDE	quic.StatelessResetKey <= github.com/libp2p/go-libp2p/config.PrivKeyToStatelessResetKey()
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] PROVIDE	*quicreuse.ConnManager <= github.com/libp2p/go-libp2p/p2p/transport/quicreuse.NewConnManager()
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] PROVIDE	fx.Lifecycle <= go.uber.org/fx.New.func1()
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] PROVIDE	fx.Shutdowner <= go.uber.org/fx.(*App).shutdowner-fm()
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] PROVIDE	fx.DotGraph <= go.uber.org/fx.(*App).dotGraph-fm()
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] LOGGER	Initialized custom logger from github.com/libp2p/go-libp2p/config.(*Config).addTransports.func1()
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] INVOKE		fx.Annotate(github.com/libp2p/go-libp2p/config.(*Config).addTransports.func10(), fx.ParamTags(["group:\"transport\""])
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] RUN	supply: stub(protocol.ID)
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] RUN	provide: github.com/libp2p/go-libp2p/config.(*Config).addTransports.func3()
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] RUN	supply: stub([]upgrader.StreamMuxer)
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] RUN	provide: fx.Annotate(github.com/libp2p/go-libp2p/p2p/security/tls.New(), fx.ParamTags(["name:\"security_/tls/1.0.0\""]), fx.ResultTags(["group:\"security_unordered\""]), fx.As([[sec.SecureTransport]])
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] RUN	supply: stub(protocol.ID)
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] RUN	provide: fx.Annotate(github.com/libp2p/go-libp2p/p2p/security/noise.New(), fx.ParamTags(["name:\"security_/noise\""]), fx.ResultTags(["group:\"security_unordered\""]), fx.As([[sec.SecureTransport]])
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] RUN	provide: fx.Annotate(github.com/libp2p/go-libp2p/config.(*Config).addTransports.func9(), fx.ParamTags(["group:\"security_unordered\""]), fx.ResultTags(["name:\"security\""])
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] RUN	provide: github.com/libp2p/go-libp2p/config.(*Config).addTransports.func5()
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] RUN	provide: github.com/libp2p/go-libp2p/config.(*Config).addTransports.func6()
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] RUN	provide: github.com/libp2p/go-libp2p/config.(*Config).addTransports.func4()
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] RUN	provide: fx.Annotate(github.com/libp2p/go-libp2p/p2p/net/upgrader.New(), fx.ParamTags(["name:\"security\""])
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] RUN	provide: fx.Annotate(github.com/libp2p/go-libp2p/p2p/transport/tcp.NewTCPTransport(), fx.ResultTags(["group:\"transport\""]), fx.As([[transport.Transport]])
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] RUN	provide: github.com/libp2p/go-libp2p/config.PrivKeyToStatelessResetKey()
2023-08-31T11:41:44.357-0400	DEBUG	p2p-config	config/log.go:21	[Fx] RUN	provide: github.com/libp2p/go-libp2p/p2p/transport/quicreuse.NewConnManager()
2023-08-31T11:41:44.358-0400	DEBUG	p2p-config	config/log.go:21	[Fx] RUN	provide: fx.Annotate(github.com/libp2p/go-libp2p/p2p/transport/quic.NewTransport(), fx.ResultTags(["group:\"transport\""]), fx.As([[transport.Transport]])
2023-08-31T11:41:44.358-0400	DEBUG	p2p-config	config/log.go:21	[Fx] RUN	provide: fx.Annotate(github.com/libp2p/go-libp2p/p2p/transport/websocket.New(), fx.ResultTags(["group:\"transport\""]), fx.As([[transport.Transport]])
2023-08-31T11:41:44.358-0400	DEBUG	p2p-config	config/log.go:21	[Fx] RUN	provide: fx.Annotate(github.com/libp2p/go-libp2p/p2p/transport/webtransport.New(), fx.ResultTags(["group:\"transport\""]), fx.As([[transport.Transport]])
2023-08-31T11:41:44.358-0400	DEBUG	p2p-config	config/log.go:21	[Fx] INVOKE		github.com/libp2p/go-libp2p/p2p/protocol/circuitv2/client.AddTransport()
2023-08-31T11:41:44.358-0400	DEBUG	p2p-config	config/log.go:21	[Fx] RUN	provide: github.com/libp2p/go-libp2p/config.(*Config).addTransports.func2()
2023/08/31 11:41:44 failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details.
2023-08-31T11:41:44.358-0400	DEBUG	net/identify	identify/id.go:583	updating snapshot	{"seq": 1, "addrs": ["/ip4/10.0.24.180/tcp/9000","/ip4/10.0.24.180/udp/9000/quic","/ip4/10.0.24.180/udp/9000/quic-v1","/ip4/127.0.0.1/tcp/9000","/ip4/127.0.0.1/udp/9000/quic","/ip4/127.0.0.1/udp/9000/quic-v1"]}
2023/08/31 11:41:44 Hello World, my second hosts ID is 12D3KooWPXWoJ9YAg5391tiejurTWrrYtMDNVkTCiYvJAuN15evK
2023-08-31T11:41:44.360-0400	DEBUG	net/identify	identify/id.go:583	updating snapshot	{"seq": 2, "addrs": ["/ip4/10.0.24.180/tcp/9000","/ip4/10.0.24.180/udp/9000/quic","/ip4/10.0.24.180/udp/9000/quic-v1","/ip4/127.0.0.1/tcp/9000","/ip4/127.0.0.1/udp/9000/quic","/ip4/127.0.0.1/udp/9000/quic-v1"]}
2023-08-31T11:41:44.360-0400	DEBUG	upgrader	upgrader/listener.go:110	listener <stream.Listener[TCP] /ip4/0.0.0.0/tcp/9000> got connection: /ip4/10.0.24.180/tcp/9000 <---> /ip4/10.0.24.180/tcp/37314
2023-08-31T11:41:44.360-0400	DEBUG	upgrader	upgrader/listener.go:110	listener <stream.Listener[TCP] /ip4/0.0.0.0/tcp/9000> got connection: /ip4/10.0.24.180/tcp/9000 <---> /ip4/10.0.24.180/tcp/37316
2023-08-31T11:41:44.360-0400	DEBUG	upgrader	upgrader/listener.go:110	listener <stream.Listener[TCP] /ip4/0.0.0.0/tcp/9000> got connection: /ip4/10.0.24.180/tcp/9000 <---> /ip4/10.0.24.180/tcp/37326
2023-08-31T11:41:44.361-0400	DEBUG	upgrader	upgrader/listener.go:134	listener <stream.Listener[TCP] /ip4/0.0.0.0/tcp/9000> accepted connection: <stream.Conn[TCP] /ip4/10.0.24.180/tcp/9000 (12D3KooWPXWoJ9YAg5391tiejurTWrrYtMDNVkTCiYvJAuN15evK) <-> /ip4/10.0.24.180/tcp/37316 (12D3KooWDZ2fyf1k4D92jrXgKYrQXqHs6642e8Nm9vJkKPcziZyN)>
2023-08-31T11:41:44.361-0400	DEBUG	swarm2	swarm/swarm_listen.go:141	swarm listener accepted connection: /ip4/10.0.24.180/tcp/9000 <-> /ip4/10.0.24.180/tcp/37316
2023-08-31T11:41:44.361-0400	DEBUG	upgrader	upgrader/listener.go:134	listener <stream.Listener[TCP] /ip4/0.0.0.0/tcp/9000> accepted connection: <stream.Conn[TCP] /ip4/10.0.24.180/tcp/9000 (12D3KooWPXWoJ9YAg5391tiejurTWrrYtMDNVkTCiYvJAuN15evK) <-> /ip4/10.0.24.180/tcp/37314 (12D3KooWDZ2fyf1k4D92jrXgKYrQXqHs6642e8Nm9vJkKPcziZyN)>
2023-08-31T11:41:44.361-0400	DEBUG	swarm2	swarm/swarm_listen.go:141	swarm listener accepted connection: /ip4/10.0.24.180/tcp/9000 <-> /ip4/10.0.24.180/tcp/37314
2023-08-31T11:41:44.361-0400	DEBUG	upgrader	upgrader/listener.go:134	listener <stream.Listener[TCP] /ip4/0.0.0.0/tcp/9000> accepted connection: <stream.Conn[TCP] /ip4/10.0.24.180/tcp/9000 (12D3KooWPXWoJ9YAg5391tiejurTWrrYtMDNVkTCiYvJAuN15evK) <-> /ip4/10.0.24.180/tcp/37326 (12D3KooWDZ2fyf1k4D92jrXgKYrQXqHs6642e8Nm9vJkKPcziZyN)>
2023-08-31T11:41:44.361-0400	DEBUG	swarm2	swarm/swarm_listen.go:141	swarm listener accepted connection: /ip4/10.0.24.180/tcp/9000 <-> /ip4/10.0.24.180/tcp/37326
2023-08-31T11:41:44.362-0400	DEBUG	net/identify	identify/id.go:512	/ipfs/id/1.0.0 received message from 12D3KooWDZ2fyf1k4D92jrXgKYrQXqHs6642e8Nm9vJkKPcziZyN /ip4/10.0.24.180/tcp/37316
2023-08-31T11:41:44.362-0400	DEBUG	net/identify	identify/id.go:778	12D3KooWPXWoJ9YAg5391tiejurTWrrYtMDNVkTCiYvJAuN15evK received listen addrs for 12D3KooWDZ2fyf1k4D92jrXgKYrQXqHs6642e8Nm9vJkKPcziZyN: [/ip4/127.0.0.1/tcp/40477 /ip4/10.0.24.180/tcp/40477]
2023-08-31T11:41:44.362-0400	DEBUG	basichost	basic/basic_host.go:438	negotiated: /ipfs/id/1.0.0 (took 10.731µs)
2023-08-31T11:41:44.362-0400	DEBUG	basichost	basic/basic_host.go:438	negotiated: /ipfs/ping/1.0.0 (took 18.841µs)
2023-08-31T11:41:44.362-0400	DEBUG	net/identify	identify/obsaddr.go:429	observed multiaddr doesn't match the transports of any announced addresses	{"from": "/ip4/10.0.24.180/tcp/37316", "observed": "/ip4/10.0.24.180/tcp/9000/p2p/12D3KooWPXWoJ9YAg5391tiejurTWrrYtMDNVkTCiYvJAuN15evK"}
2023-08-31T11:41:44.362-0400	DEBUG	net/identify	identify/id.go:456	sending snapshot	{"seq": 2, "protocols": ["/ipfs/id/1.0.0","/ipfs/id/push/1.0.0","/ipfs/ping/1.0.0","/libp2p/circuit/relay/0.2.0/stop"], "addrs": ["/ip4/10.0.24.180/tcp/9000","/ip4/10.0.24.180/udp/9000/quic","/ip4/10.0.24.180/udp/9000/quic-v1","/ip4/127.0.0.1/tcp/9000","/ip4/127.0.0.1/udp/9000/quic","/ip4/127.0.0.1/udp/9000/quic-v1"]}
2023-08-31T11:41:44.362-0400	DEBUG	basichost	basic/basic_host.go:438	negotiated: /ipfs/id/1.0.0 (took 10.031µs)
2023-08-31T11:41:44.362-0400	DEBUG	net/identify	identify/id.go:512	/ipfs/id/1.0.0 received message from 12D3KooWDZ2fyf1k4D92jrXgKYrQXqHs6642e8Nm9vJkKPcziZyN /ip4/10.0.24.180/tcp/37314
2023-08-31T11:41:44.362-0400	DEBUG	basichost	basic/basic_host.go:438	negotiated: /ipfs/id/1.0.0 (took 16.841µs)
2023-08-31T11:41:44.363-0400	DEBUG	basichost	basic/basic_host.go:438	negotiated: /ipfs/ping/1.0.0 (took 19.361µs)
2023-08-31T11:41:44.363-0400	DEBUG	net/identify	identify/id.go:456	sending snapshot	{"seq": 2, "protocols": ["/ipfs/id/1.0.0","/ipfs/id/push/1.0.0","/ipfs/ping/1.0.0","/libp2p/circuit/relay/0.2.0/stop"], "addrs": ["/ip4/10.0.24.180/tcp/9000","/ip4/10.0.24.180/udp/9000/quic","/ip4/10.0.24.180/udp/9000/quic-v1","/ip4/127.0.0.1/tcp/9000","/ip4/127.0.0.1/udp/9000/quic","/ip4/127.0.0.1/udp/9000/quic-v1"]}
2023-08-31T11:41:44.363-0400	DEBUG	net/identify	identify/id.go:461	/ipfs/id/1.0.0 sending message to 12D3KooWDZ2fyf1k4D92jrXgKYrQXqHs6642e8Nm9vJkKPcziZyN /ip4/10.0.24.180/tcp/37314
2023-08-31T11:41:44.363-0400	DEBUG	net/identify	identify/id.go:461	/ipfs/id/1.0.0 sending message to 12D3KooWDZ2fyf1k4D92jrXgKYrQXqHs6642e8Nm9vJkKPcziZyN /ip4/10.0.24.180/tcp/37316
2023-08-31T11:41:44.363-0400	DEBUG	net/identify	identify/id.go:456	sending snapshot	{"seq": 2, "protocols": ["/ipfs/id/1.0.0","/ipfs/id/push/1.0.0","/ipfs/ping/1.0.0","/libp2p/circuit/relay/0.2.0/stop"], "addrs": ["/ip4/10.0.24.180/tcp/9000","/ip4/10.0.24.180/udp/9000/quic","/ip4/10.0.24.180/udp/9000/quic-v1","/ip4/127.0.0.1/tcp/9000","/ip4/127.0.0.1/udp/9000/quic","/ip4/127.0.0.1/udp/9000/quic-v1"]}
2023-08-31T11:41:44.363-0400	DEBUG	net/identify	identify/id.go:512	/ipfs/id/1.0.0 received message from 12D3KooWDZ2fyf1k4D92jrXgKYrQXqHs6642e8Nm9vJkKPcziZyN /ip4/10.0.24.180/tcp/37326
2023-08-31T11:41:44.363-0400	DEBUG	net/identify	identify/id.go:461	/ipfs/id/1.0.0 sending message to 12D3KooWDZ2fyf1k4D92jrXgKYrQXqHs6642e8Nm9vJkKPcziZyN /ip4/10.0.24.180/tcp/37326
2023-08-31T11:41:44.363-0400	DEBUG	net/identify	identify/obsaddr.go:429	observed multiaddr doesn't match the transports of any announced addresses	{"from": "/ip4/10.0.24.180/tcp/37314", "observed": "/ip4/10.0.24.180/tcp/9000/p2p/12D3KooWPXWoJ9YAg5391tiejurTWrrYtMDNVkTCiYvJAuN15evK"}
2023-08-31T11:41:44.363-0400	DEBUG	net/identify	identify/obsaddr.go:429	observed multiaddr doesn't match the transports of any announced addresses	{"from": "/ip4/10.0.24.180/tcp/37326", "observed": "/ip4/10.0.24.180/tcp/9000/p2p/12D3KooWPXWoJ9YAg5391tiejurTWrrYtMDNVkTCiYvJAuN15evK"}
2023-08-31T11:41:44.363-0400	DEBUG	net/identify	identify/id.go:778	12D3KooWPXWoJ9YAg5391tiejurTWrrYtMDNVkTCiYvJAuN15evK received listen addrs for 12D3KooWDZ2fyf1k4D92jrXgKYrQXqHs6642e8Nm9vJkKPcziZyN: [/ip4/127.0.0.1/tcp/40477 /ip4/10.0.24.180/tcp/40477]
2023-08-31T11:41:44.363-0400	DEBUG	net/identify	identify/id.go:778	12D3KooWPXWoJ9YAg5391tiejurTWrrYtMDNVkTCiYvJAuN15evK received listen addrs for 12D3KooWDZ2fyf1k4D92jrXgKYrQXqHs6642e8Nm9vJkKPcziZyN: [/ip4/10.0.24.180/tcp/40477 /ip4/127.0.0.1/tcp/40477]
2023-08-31T11:41:44.363-0400	DEBUG	basichost	basic/basic_host.go:433	error setting stream protocol: protocol:/ipfs/ping/1.0.0.peer:12D3KooWDZ2fyf1k4D92jrXgKYrQXqHs6642e8Nm9vJkKPcziZyN: cannot reserve inbound stream: resource limit exceeded
2023-08-31T11:41:44.363-0400	DEBUG	basichost	basic/basic_host.go:418	protocol mux failed: stream reset (took 274.825µs)
2023-08-31T11:41:44.363-0400	DEBUG	basichost	basic/basic_host.go:438	negotiated: /ipfs/id/push/1.0.0 (took 7.701µs)
2023-08-31T11:41:44.363-0400	DEBUG	basichost	basic/basic_host.go:433	error setting stream protocol: protocol:/ipfs/ping/1.0.0.peer:12D3KooWDZ2fyf1k4D92jrXgKYrQXqHs6642e8Nm9vJkKPcziZyN: cannot reserve inbound stream: resource limit exceeded
2023-08-31T11:41:44.363-0400	DEBUG	net/identify	identify/id.go:512	/ipfs/id/push/1.0.0 received message from 12D3KooWDZ2fyf1k4D92jrXgKYrQXqHs6642e8Nm9vJkKPcziZyN /ip4/10.0.24.180/tcp/37316
2023-08-31T11:41:44.363-0400	DEBUG	basichost	basic/basic_host.go:433	error setting stream protocol: protocol:/ipfs/ping/1.0.0.peer:12D3KooWDZ2fyf1k4D92jrXgKYrQXqHs6642e8Nm9vJkKPcziZyN: cannot reserve inbound stream: resource limit exceeded
2023-08-31T11:41:44.363-0400	DEBUG	net/identify	identify/id.go:778	12D3KooWPXWoJ9YAg5391tiejurTWrrYtMDNVkTCiYvJAuN15evK received listen addrs for 12D3KooWDZ2fyf1k4D92jrXgKYrQXqHs6642e8Nm9vJkKPcziZyN: [/ip4/127.0.0.1/tcp/40477 /ip4/10.0.24.180/tcp/40477]
2023-08-31T11:41:44.363-0400	DEBUG	net/identify	identify/obsaddr.go:429	observed multiaddr doesn't match the transports of any announced addresses	{"from": "/ip4/10.0.24.180/tcp/37316", "observed": "/ip4/10.0.24.180/tcp/9000/p2p/12D3KooWPXWoJ9YAg5391tiejurTWrrYtMDNVkTCiYvJAuN15evK"}
2023-08-31T11:41:44.363-0400	DEBUG	basichost	basic/basic_host.go:433	error setting stream protocol: protocol:/ipfs/ping/1.0.0.peer:12D3KooWDZ2fyf1k4D92jrXgKYrQXqHs6642e8Nm9vJkKPcziZyN: cannot reserve inbound stream: resource limit exceeded
2023-08-31T11:41:44.363-0400	DEBUG	basichost	basic/basic_host.go:433	error setting stream protocol: protocol:/ipfs/ping/1.0.0.peer:12D3KooWDZ2fyf1k4D92jrXgKYrQXqHs6642e8Nm9vJkKPcziZyN: cannot reserve inbound stream: resource limit exceeded
2023-08-31T11:41:44.364-0400	DEBUG	basichost	basic/basic_host.go:433	error setting stream protocol: protocol:/ipfs/ping/1.0.0.peer:12D3KooWDZ2fyf1k4D92jrXgKYrQXqHs6642e8Nm9vJkKPcziZyN: cannot reserve inbound stream: resource limit exceeded
2023-08-31T11:41:44.364-0400	DEBUG	basichost	basic/basic_host.go:433	error setting stream protocol: protocol:/ipfs/ping/1.0.0.peer:12D3KooWDZ2fyf1k4D92jrXgKYrQXqHs6642e8Nm9vJkKPcziZyN: cannot reserve inbound stream: resource limit exceeded
2023-08-31T11:41:44.364-0400	DEBUG	basichost	basic/basic_host.go:433	error setting stream protocol: protocol:/ipfs/ping/1.0.0.peer:12D3KooWDZ2fyf1k4D92jrXgKYrQXqHs6642e8Nm9vJkKPcziZyN: cannot reserve inbound stream: resource limit exceeded
2023-08-31T11:41:44.364-0400	DEBUG	basichost	basic/basic_host.go:433	error setting stream protocol: protocol:/ipfs/ping/1.0.0.peer:12D3KooWDZ2fyf1k4D92jrXgKYrQXqHs6642e8Nm9vJkKPcziZyN: cannot reserve inbound stream: resource limit exceeded
2023-08-31T11:41:44.364-0400	DEBUG	basichost	basic/basic_host.go:433	error setting stream protocol: protocol:/ipfs/ping/1.0.0.peer:12D3KooWDZ2fyf1k4D92jrXgKYrQXqHs6642e8Nm9vJkKPcziZyN: cannot reserve inbound stream: resource limit exceeded
2023-08-31T11:41:44.364-0400	DEBUG	basichost	basic/basic_host.go:433	error setting stream protocol: protocol:/ipfs/ping/1.0.0.peer:12D3KooWDZ2fyf1k4D92jrXgKYrQXqHs6642e8Nm9vJkKPcziZyN: cannot reserve inbound stream: resource limit exceeded
2023-08-31T11:41:44.364-0400	DEBUG	basichost	basic/basic_host.go:433	error setting stream protocol: protocol:/ipfs/ping/1.0.0.peer:12D3KooWDZ2fyf1k4D92jrXgKYrQXqHs6642e8Nm9vJkKPcziZyN: cannot reserve inbound stream: resource limit exceeded
[… the same "cannot reserve inbound stream: resource limit exceeded" line repeats many more times …]

@thomaseizinger
Contributor

What DoS case are you seeing? The worst I can think of is some kind of log spam if the user decided to log every error that is coming in.

From what I've observed, the biggest one would be high CPU usage. If there is more than one handler reporting errors as described above (which can be the case), the usage would be higher, with the potential of rendering the host unusable or unstable as a result (assuming the node doesn't get killed by the system first).

I am confused about that. Why would the CPU usage be high?

I think the actual bug here is that we don't have a delay in the ping state machine on an error:

// Continue outbound pings.
match self.outbound.take() {
    Some(OutboundState::Ping(mut ping)) => match ping.poll_unpin(cx) {
        Poll::Pending => {
            self.outbound = Some(OutboundState::Ping(ping));
            break;
        }
        Poll::Ready(Ok((stream, rtt))) => {
            log::debug!("latency to {} is {}ms", self.peer, rtt.as_millis());
            self.failures = 0;
            self.interval.reset(self.config.interval);
            self.outbound = Some(OutboundState::Idle(stream));
            return Poll::Ready(ConnectionHandlerEvent::NotifyBehaviour(Ok(rtt)));
        }
        Poll::Ready(Err(e)) => {
            self.pending_errors.push_front(e);
        }
    },
    Some(OutboundState::Idle(stream)) => match self.interval.poll_unpin(cx) {
        Poll::Pending => {
            self.outbound = Some(OutboundState::Idle(stream));
            break;
        }
        Poll::Ready(()) => {
            self.outbound = Some(OutboundState::Ping(
                send_ping(stream, self.config.timeout).boxed(),
            ));
        }
    },
    Some(OutboundState::OpenStream) => {
        self.outbound = Some(OutboundState::OpenStream);
        break;
    }
    None => {
        self.outbound = Some(OutboundState::OpenStream);
        let protocol = SubstreamProtocol::new(ReadyUpgrade::new(PROTOCOL_NAME), ());
        return Poll::Ready(ConnectionHandlerEvent::OutboundSubstreamRequest {
            protocol,
        });
    }
}

If we fail to open an outbound ping stream, we should still wait for the ping delay before we try again and not hammer the node with another stream.

@dariusc93
Member Author

dariusc93 commented Sep 1, 2023

I am confused about that. Why would the CPU usage be high?

I think the actual bug here is that we don't have a delay in the ping state machine on an error:

[… code snippet quoted from the previous comment …]

If we fail to open an outbound ping stream, we should still wait for the ping delay before we try again and not hammer the node with another stream.

That could be the reason for the high CPU usage: it tries again right away, only to get another error and continue emitting the events from the connection handler. I wasn't sure if it was intended to send right after a failure, though (I haven't looked at the ping spec to determine that).

Something like the following should help with that

diff --git a/protocols/ping/src/handler.rs b/protocols/ping/src/handler.rs
index 0cf5c6e5..d9f14e4a 100644
--- a/protocols/ping/src/handler.rs
+++ b/protocols/ping/src/handler.rs
@@ -149,6 +149,8 @@ pub struct Handler {
     state: State,
     /// The peer we are connected to.
     peer: PeerId,
+    /// delay after a failure is reported
+    failure_delay: Option<Delay>,
 }
 
 #[derive(Debug, Clone, Copy, PartialEq, Eq)]
@@ -176,6 +178,7 @@ impl Handler {
             outbound: None,
             inbound: None,
             state: State::Active,
+            failure_delay: None,
         }
     }
 
@@ -271,6 +274,12 @@ impl ConnectionHandler for Handler {
         }
 
         loop {
+            if let Some(delay) = self.failure_delay.as_mut() {
+                if delay.poll_unpin(cx).is_pending() {
+                    return Poll::Pending;
+                }
+            }
+
             // Check for outbound ping failures.
             if let Some(error) = self.pending_errors.pop_back() {
                 log::debug!("Ping failure: {:?}", error);
@@ -283,10 +292,13 @@ impl ConnectionHandler for Handler {
                 // that use a single substream, since every successful ping
                 // resets `failures` to `0`.
                 if self.failures > 1 {
+                    self.failure_delay = Some(Delay::new(Duration::from_secs(5)));
                     return Poll::Ready(ConnectionHandlerEvent::NotifyBehaviour(Err(error)));
                 }
             }
 
+            self.failure_delay = None;
+
             // Continue outbound pings.
             match self.outbound.take() {
                 Some(OutboundState::Ping(mut ping)) => match ping.poll_unpin(cx) {

(The duration is just an example and could be set to a second, which should be more than enough, but it might be better to move self.failure_delay = Some(Delay::new(Duration::from_secs(5))); to where the outbound state actually errors.)

I still feel that if the failure rate continues to increase, the handler should be made inactive, since it makes no sense for it to keep opening a stream if all that comes back is an error after so many tries. Would there be any objections to such an implementation? E.g., if the failure count reaches 20, set the handler to inactive. That should be more than enough.

@thomaseizinger
Contributor

Thanks for the suggestion. Mind opening that as a PR? The thing I'd change is that instead of introducing another delay, we can simply poll the same interval we already have, also in the None case. That should have the same effect with less state to keep around.
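A simplified, synchronous model of that idea (all names here are illustrative; the real handler is async and polls a futures-timer Delay via poll_unpin) might look like this: the same "next ping due" deadline gates both the Idle case and the None case, so a failure resets the deadline instead of triggering an immediate new stream request.

```rust
use std::time::{Duration, Instant};

// Hypothetical, simplified model of the outbound ping state machine.
#[allow(dead_code)]
enum OutboundState {
    OpenStream,
    Idle,
    Ping,
}

struct PingState {
    outbound: Option<OutboundState>,
    next_ping: Instant, // models `self.interval`
    interval: Duration,
}

impl PingState {
    // On a ping failure, drop the failed stream and push the deadline out,
    // so the next attempt waits for the regular ping interval.
    fn on_failure(&mut self, now: Instant) {
        self.outbound = None;
        self.next_ping = now + self.interval;
    }

    // Returns true if a new outbound stream should be requested now.
    fn poll(&mut self, now: Instant) -> bool {
        match self.outbound.take() {
            None if now >= self.next_ping => {
                self.outbound = Some(OutboundState::OpenStream);
                true
            }
            other => {
                // Either a stream is already in flight or the interval
                // has not elapsed yet; keep the current state.
                self.outbound = other;
                false
            }
        }
    }
}
```

With this shape, a burst of failures can no longer translate into a burst of OutboundSubstreamRequests, which is the busy loop observed above.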

Regarding the max failures I am a bit unsure:

  • If the other node simply doesn't support the ping protocol, we will already disable ourselves.
  • If the other node does support ping but fails to handle the stream properly, I'd see that as a bug on their end and not something we should cater for.

The idea of the ping protocol is that it is extremely simple. If a node cannot even handle that, the connection is likely broken and you can / should close it. Whether you do that after the first, second or 20th error is up to you but I don't see why we should add an extra code path just for the ping protocol to be disabled.
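That user-side choice could be sketched as a small per-peer failure counter in the behaviour's event loop (everything here is hypothetical and not part of rust-libp2p; the caller would close the connection when this returns true):

```rust
use std::collections::HashMap;

// Hypothetical policy: track consecutive ping failures per peer and
// signal when the connection should be closed. The threshold is the
// user's choice, as discussed above.
struct PingPolicy {
    max_failures: u32,
    failures: HashMap<String, u32>, // keyed by peer id string for simplicity
}

impl PingPolicy {
    fn new(max_failures: u32) -> Self {
        Self { max_failures, failures: HashMap::new() }
    }

    // Returns true if the caller should close the connection to `peer`.
    fn on_ping_result(&mut self, peer: &str, ok: bool) -> bool {
        if ok {
            // A successful ping resets the consecutive-failure count.
            self.failures.remove(peer);
            return false;
        }
        let count = self.failures.entry(peer.to_string()).or_insert(0);
        *count += 1;
        *count >= self.max_failures
    }
}
```

This keeps the "when to give up" decision in application code rather than adding an inactive state to the ping handler itself.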

With the above bug-fix, we should no longer as easily exhaust the resource limit of other nodes.

@dariusc93
Member Author

I went ahead and made a PR (though I will go back shortly and change it to use the interval there), but will leave out the snippet regarding the state change after reaching a specific failure rate, since the code above should prevent the high CPU usage.

While I can agree that a connection could likely be broken and should be closed in this case, I have also seen other protocols like gossipsub operate on the same connection, so it's hard to really say.

@mergify mergify bot closed this as completed in #4423 Sep 9, 2023
mergify bot pushed a commit that referenced this issue Sep 9, 2023
This PR adds a delay to the ping connection handler after exceeding the failure rate, to prevent opening an outbound stream right after an error.

Resolves #4410.

Pull-Request: #4423.