
Feature Proposal: Simultaneous outgoing request limit #215

Closed
hannahhoward opened this issue Sep 14, 2021 · 9 comments
Labels
effort/days: Estimated to take multiple days, but less than a week
exp/expert: Having worked on the specific codebase is important
P0: Critical, tackled by core team ASAP

Comments

@hannahhoward (Collaborator)

What

Add an option to limit the number of in progress outgoing requests.

Why

Currently, we have a limit on the number of incoming requests processed at the same time in the Responder implementation in Graphsync.

However, there are several cases where we'd like to limit the number of outgoing requests as well, and having these controls makes sense for any protocol implementation.

How

MaxInProgressRequests is really MaxInProgressIncomingRequests and will be renamed as such.

We will add MaxInProgressOutgoingRequests which will have the same effect for the requestor side.

We will also implement the requestor side with a PeerTaskQueue, meaning that requests will be balanced between peers, so that one slow peer does not cause congestion in the queue.
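To make the renaming concrete, here is a minimal sketch of how the two limits could be exposed as functional options. The option names follow this proposal; the surrounding `Config`/`New` types and the default values are illustrative assumptions, not graphsync's actual API.

```go
package main

import "fmt"

// Config holds the two request limits discussed above.
// Illustrative only; graphsync's real constructor takes more than this.
type Config struct {
	maxInProgressIncomingRequests uint64
	maxInProgressOutgoingRequests uint64
}

// Option mutates a Config (the functional-options pattern).
type Option func(*Config)

// MaxInProgressIncomingRequests limits simultaneous requests being
// responded to (the existing responder-side limit, renamed).
func MaxInProgressIncomingRequests(n uint64) Option {
	return func(c *Config) { c.maxInProgressIncomingRequests = n }
}

// MaxInProgressOutgoingRequests limits simultaneous requests we make
// as a requestor (the new limit this issue proposes).
func MaxInProgressOutgoingRequests(n uint64) Option {
	return func(c *Config) { c.maxInProgressOutgoingRequests = n }
}

// New applies options over assumed defaults.
func New(opts ...Option) *Config {
	cfg := &Config{
		maxInProgressIncomingRequests: 6,
		maxInProgressOutgoingRequests: 6,
	}
	for _, opt := range opts {
		opt(cfg)
	}
	return cfg
}

func main() {
	cfg := New(
		MaxInProgressIncomingRequests(32),
		MaxInProgressOutgoingRequests(16),
	)
	fmt.Println(cfg.maxInProgressIncomingRequests, cfg.maxInProgressOutgoingRequests)
}
```

Keeping the two limits as separate options is what lets a node tune responder and requestor concurrency independently, which matters for the miner use case below.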

For discussion

This implementation will support solutions for filecoin-project/lotus#7276. It will allow us to implement option 4 for miners (N simultaneous in-progress Storage Deals and M simultaneous in-progress Retrieval Deals, limited separately). It is NOT a complete solution to the problem -- it sounds like significantly more robust solutions that control the overall flow between transfer and sealing are needed. However, it is a good start for now.

Note that we are NOT implementing a single global transfer limit for now that encompasses both incoming and outgoing requests.

@hannahhoward hannahhoward added the need/triage Needs initial labeling and prioritization label Sep 14, 2021
@jennijuju

jennijuju commented Sep 15, 2021

LGTM!
One quick question: how would rate limiting based on size (for example, bytes/second transferred) work with this? I just want to make sure we are designing something that can be extended for that need (which is another highly requested feature).

@hannahhoward (Collaborator, Author)

@jennijuju this is a fixed "number of requests in progress" limit. However, it doesn't inhibit doing something more complex like size/seconds to transfer. I am not sure that kind of limiting belongs in graphsync though -- it may need to go at a higher level like go-fil-markets. @raulk may have more perspective.

@jennijuju

> @jennijuju this is a fixed "number of requests in progress" limit. However, it doesn't inhibit doing something more complex like size/seconds to transfer. I am not sure that kind of limiting belongs in graphsync though -- it may need to go at a higher level like go-fil-markets. @raulk may have more perspective.

makes sense!

@dirkmc (Collaborator)

dirkmc commented Sep 16, 2021

An absolute limit on the number of simultaneous requests is the simplest way to solve this problem, and I think it's worth implementing 👍

In practice the connection to each peer will have varying bandwidth and latency. A future, more sophisticated rate limiter could

  • have a "soft" and "hard" maximum number of ongoing requests
  • take into account the rate at which blocks are arriving from each peer
  • process blocks round-robin
  • open requests to more peers until the "soft" limit is reached
  • if the block buffer for a peer with an ongoing transfer is empty, allow a request to another peer, until the "hard" limit is reached

For example:

  • soft limit: 2
  • hard limit: 3
  • ongoing requests to
    • Peer A: one block queued for processing
    • Peer B: two blocks queued for processing
  • queued requests to Peer C & Peer D (ie requests have not been sent yet)
ONGOING Peer A *
ONGOING Peer B * *
QUEUED  Peer C
QUEUED  Peer D

Once the block for Peer A has been processed we can open a request to Peer C

ONGOING Peer A
ONGOING Peer B * *
ONGOING Peer C
QUEUED  Peer D

etc...
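The soft/hard-limit decision described above can be sketched as a small predicate. The `peer` type and `canOpenRequest` name are hypothetical, chosen to match the worked example (soft limit 2, hard limit 3), not anything in graphsync today.

```go
package main

import "fmt"

// peer tracks one ongoing transfer (hypothetical, for illustration).
type peer struct {
	name   string
	queued int // blocks queued for processing from this peer
}

// canOpenRequest reports whether a new request may be opened:
// open freely up to the soft limit; between the soft and hard limits,
// open only while some ongoing peer's buffer is empty (that slow or
// idle peer is not using its slot); never exceed the hard limit.
func canOpenRequest(ongoing []peer, soft, hard int) bool {
	if len(ongoing) < soft {
		return true
	}
	if len(ongoing) >= hard {
		return false
	}
	for _, p := range ongoing {
		if p.queued == 0 {
			return true
		}
	}
	return false
}

func main() {
	soft, hard := 2, 3
	// Peer A has one block queued, Peer B has two; C and D wait.
	ongoing := []peer{{"A", 1}, {"B", 2}}
	fmt.Println(canOpenRequest(ongoing, soft, hard)) // false: both buffers full
	ongoing[0].queued = 0                            // Peer A's block processed
	fmt.Println(canOpenRequest(ongoing, soft, hard)) // true: Peer C may start
}
```

A fuller implementation would also have to re-check this predicate as blocks arrive and are processed, and decide which queued peer to promote (e.g. round-robin, as suggested above).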

@hannahhoward (Collaborator, Author)

Going to move forward on this soon.

@raulk (Member)

raulk commented Sep 16, 2021

Sounds like a way forward.

Out of curiosity, @jennijuju, did we survey providers and find that they'd want a global limit instead of separate inbound and outbound limits?

@hannahhoward (Collaborator, Author)

@raulk I am almost done implementing this, and the transition to a unified combined limit, should it be needed, is possible.

@hannahhoward (Collaborator, Author)

It looked like initial miner sentiment was for separate configuration.

@jennijuju

> Sounds like a way forward.
>
> Out of curiosity, @jennijuju, did we survey providers and find that they'd want a global limit instead of separate inbound and outbound limits?

We did, and they want separate settings.

@hannahhoward hannahhoward added effort/days Estimated to take multiple days, but less than a week exp/expert Having worked on the specific codebase is important P0 Critical: Tackled by core team ASAP and removed need/triage Needs initial labeling and prioritization labels Sep 28, 2021