feat(net): disconnect from inactive nodes if necessary #5924
Conversation
private static final int initialDelay = 300;
private static final String esName = "resilience-service";
private final ScheduledExecutorService executor = ExecutorServiceManager
    .newSingleThreadScheduledExecutor(esName);
I see that you have added three tasks that are all scheduled and executed on the same executor. Won't the scheduling of one task affect the others? Have you considered splitting the tasks across different executors?
Each task takes at most a few milliseconds, so there is no need to split them across different executors.
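For context, here is a minimal, self-contained sketch of several short periodic tasks sharing one single-threaded scheduler, which is the pattern being discussed. The task names and intervals are hypothetical and use the plain JDK `Executors` API rather than the project's `ExecutorServiceManager`:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SingleExecutorSketch {

  public static void main(String[] args) {
    // One single-threaded scheduler shared by all periodic tasks.
    ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();

    // Hypothetical tasks: each finishes in a few milliseconds, so running
    // them back-to-back on one thread delays the others only negligibly.
    executor.scheduleWithFixedDelay(
        () -> System.out.println("check inactive peers"), 300, 60, TimeUnit.SECONDS);
    executor.scheduleWithFixedDelay(
        () -> System.out.println("check node isolation"), 300, 60, TimeUnit.SECONDS);
    executor.scheduleWithFixedDelay(
        () -> System.out.println("resilience housekeeping"), 300, 60, TimeUnit.SECONDS);
  }
}
```

If any task were long-running or blocking, a dedicated executor per task would avoid one task starving the others; for millisecond-scale work a shared thread keeps the resource footprint smaller.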
List<PeerConnection> peers = tronNetDelegate.getActivePeer().stream()
    .filter(peer -> !peer.isDisconnect())
    .filter(peer -> now - peer.getLastActiveTime() >= inactiveThreshold)
    .filter(peer -> !peer.getChannel().isTrustPeer())
Why change the original logic and no longer exclude active connections?
The original logic did not have the inactiveThreshold restriction, so it produced many candidates. The number of peers whose inactive time exceeds inactiveThreshold is very small, so there are very few candidates and no need to exclude active connections.
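To illustrate why the threshold already narrows the candidate set, here is a hedged, standalone sketch of the filtering step. The `Peer` record and the "pick the longest-idle peer" selection are simplified stand-ins, not the PR's actual `PeerConnection` code:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class InactivePeerSketch {

  // Simplified stand-in for PeerConnection; field names are illustrative only.
  record Peer(String id, boolean disconnected, boolean trusted, long lastActiveTime) {}

  // Keep only peers silent for at least inactiveThreshold ms, skip trusted peers,
  // then pick the peer that has been idle the longest as the disconnection candidate.
  static Optional<Peer> pickCandidate(List<Peer> peers, long now, long inactiveThreshold) {
    return peers.stream()
        .filter(p -> !p.disconnected())
        .filter(p -> now - p.lastActiveTime() >= inactiveThreshold)
        .filter(p -> !p.trusted())
        .min(Comparator.comparingLong(Peer::lastActiveTime));
  }

  public static void main(String[] args) {
    long now = System.currentTimeMillis();
    List<Peer> peers = List.of(
        new Peer("a", false, false, now - 700_000),  // idle past the threshold
        new Peer("b", false, true, now - 900_000),   // trusted, never dropped
        new Peer("c", false, false, now - 10_000));  // recently active
    pickCandidate(peers, now, 600_000)
        .ifPresent(p -> System.out.println("disconnect " + p.id()));  // prints "disconnect a"
  }
}
```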
What does this PR do?
To maintain network stability and prevent node isolation, a node needs to regularly disconnect from inactive peers. Three strategies are adopted:
Why are these changes required?
This PR has been tested by:
Follow up
Extra details