Add fiber queues for execution context schedulers #15345
Conversation
Simple abstraction on top of a mutex and condition variable to synchronize the execution of a set of threads.
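A minimal sketch of what such an abstraction could look like, assuming Crystal's internal `Thread::Mutex` and `Thread::ConditionVariable` APIs; the `ThreadWaitGroup` name and the counter-based design are illustrative, not necessarily the PR's actual implementation:

```crystal
# Hypothetical wait-group-like helper: the constructor takes the number of
# threads to wait for, each thread calls #done when finished, and #wait
# blocks until the counter reaches zero.
class ThreadWaitGroup
  def initialize(@count : Int32)
    @mutex = Thread::Mutex.new
    @condition = Thread::ConditionVariable.new
  end

  # Called by each thread once its work is finished.
  def done : Nil
    @mutex.synchronize do
      @count -= 1
      @condition.broadcast if @count == 0
    end
  end

  # Blocks the calling thread until every thread has reported done.
  def wait : Nil
    @mutex.synchronize do
      until @count == 0
        @condition.wait(@mutex)
      end
    end
  end
end
```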
Isn't that a …?
How much of the queues will be part of the public interface that is hard to change later on?
@BlobCodes Yes, it's a LIFO singly linked list (in contrast to the FIFO doubly linked list of …).
@yxhuvud Good call, they should all be …
Force-pushed from defe37d to 7c67d27.
Moved to …
The MinGW failure on CI should be fixed by #15404.
Hm, that'll leave MinGW broken until we merge that fixup then? 🤔
That's possible, yes. We might want to split #15404:
Updated to add:
Let's avoid any further changes 🤞
Following the addition of Fiber::Stack in #15409, the …
Force-pushed from bfe4756 to edcb13f.
Introduces 3 queues that will be used by the ExecutionContext schedulers. They derive from Go's internal queues (`q`, `runq` and `globrunq`).

`Fiber::Queue`: holds an unbounded singly-linked list of Fiber with a bulk insert operation (see the sketch after the note below).

Note: we may consider renaming `Fiber::Queue` as `Fiber::List` and `Fiber#schedlink` as `Fiber#list_next`?
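A hedged sketch of the idea, not the PR's actual code: an intrusive, singly-linked LIFO list with O(1) push/pop and a bulk insert. `FakeFiber` stands in for `Fiber` and its intrusive `schedlink` field:

```crystal
class FakeFiber
  # Intrusive link to the next fiber in whatever list currently owns it,
  # mimicking the link field the PR adds to Fiber.
  property schedlink : FakeFiber?
end

class FiberQueue
  @head : FakeFiber?
  getter size = 0

  def push(fiber : FakeFiber) : Nil
    fiber.schedlink = @head
    @head = fiber
    @size += 1
  end

  def pop? : FakeFiber?
    if fiber = @head
      @head = fiber.schedlink
      @size -= 1
      fiber
    end
  end

  # Bulk insert: prepends an already-linked chain of `size` fibers in O(1),
  # so whole batches can be moved between queues at once.
  def bulk_unshift(first : FakeFiber, last : FakeFiber, size : Int32) : Nil
    last.schedlink = @head
    @head = first
    @size += size
  end
end
```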
`ExecutionContext::GlobalQueue`: wraps a `Fiber::Queue` with optional thread-safety and a bulk grab operation. There will be a single global queue per execution context, shared among one or more schedulers. The point is to have an unbounded space to store as many fibers as needed, or to store cross-context enqueues (the runnables queue below only supports a single producer).
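A hedged sketch of the global queue idea, reusing the `FiberQueue`/`FakeFiber` types from the sketch above; the `unsafe_*` naming and the exact API are assumptions:

```crystal
class GlobalQueue
  def initialize(@mutex : Mutex)
    @queue = FiberQueue.new
  end

  # Thread-safe push; the "unsafe" variant skips the lock for callers that
  # already hold the mutex (the "optional thread-safety" from the text).
  def push(fiber : FakeFiber) : Nil
    @mutex.synchronize { unsafe_push(fiber) }
  end

  def unsafe_push(fiber : FakeFiber) : Nil
    @queue.push(fiber)
  end

  # Moves a whole batch into the queue under a single lock acquisition,
  # instead of paying for the mutex once per fiber.
  def bulk_push(batch : FiberQueue) : Nil
    @mutex.synchronize do
      while fiber = batch.pop?
        @queue.push(fiber)
      end
    end
  end

  def pop? : FakeFiber?
    @mutex.synchronize { @queue.pop? }
  end
end
```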
`ExecutionContext::Runnables`: a bounded, lock-free, Chase-Lev queue (single producer, multiple consumers) with bulk operations. There will be one runnables queue per execution context scheduler (aka local queue). On overflow or underflow it pushes/grabs half the queue size to/from the global queue (locking the mutex once in a while). Any scheduler can steal from any runnables queue in the execution context at any time, directly into its own local queue.

The point is to have a quick list to push/pop and steal from, and to limit the occurrences where we must lock the mutex to reach the global queue.
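A hedged sketch of the overflow behavior, again reusing the earlier sketch types. A real Chase-Lev deque uses compare-and-set and careful memory ordering to race safely with stealers; this simplified version only illustrates the ring buffer and the batched hand-off to the global queue:

```crystal
class Runnables
  CAPACITY = 256 # illustrative; the actual bound is an assumption

  def initialize(@global_queue : GlobalQueue)
    @buffer = Array(FakeFiber?).new(CAPACITY, nil)
    @head = Atomic(Int32).new(0) # consumers (pop and steal) advance head
    @tail = Atomic(Int32).new(0) # the single producer advances tail
  end

  def push(fiber : FakeFiber) : Nil
    head = @head.get
    tail = @tail.get
    if tail - head < CAPACITY
      @buffer[tail % CAPACITY] = fiber
      @tail.set(tail + 1)
    else
      overflow(fiber, head, tail)
    end
  end

  # Overflow: move half of the local queue plus the new fiber to the global
  # queue, taking its lock only once for the whole batch.
  private def overflow(fiber : FakeFiber, head : Int32, tail : Int32) : Nil
    batch = FiberQueue.new
    ((tail - head) // 2).times do
      if moved = @buffer[head % CAPACITY]
        batch.push(moved)
      end
      head += 1
    end
    batch.push(fiber)
    @head.set(head) # a real implementation would CAS against stealers here
    @global_queue.bulk_push(batch)
  end

  def pop? : FakeFiber?
    head = @head.get
    return nil if head == @tail.get
    fiber = @buffer[head % CAPACITY]
    @head.set(head + 1) # again, really a CAS racing with stealers
    fiber
  end
end
```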
Refs #15342