
lep: threadpool handle #4

Merged: 1 commit merged into libuv:master from the threadpool_handle branch on Sep 19, 2016

Conversation

@saghul (Member) commented Nov 27, 2014

No description provided.

@bnoordhuis (Member)

I've been thinking about this and I'm not sure it's the right approach. For one, uv_tpool_init() won't cover all possible use cases that users have. For example, there is no way to express that some requests should have a higher priority than others. (Granted, libuv does not have such functionality anyway but see below.)

I've been working on an approach where the user can plug in their own thread pool at start-up. You provide one or two callbacks, and libuv, instead of creating its own thread pool, just hands off everything to the user callback. It would be ideal for node.js because that means we can share the V8 thread pool (what V8 uses for background compilation and such) with libuv.

Circling back to the priority example I mentioned above, that becomes trivial to implement: the user callback inspects the work request, shuffles it into the right work queue and done (and we didn't have to lift a finger to make it work.)

How it should interact with uv_cancel() is somewhat of an open question. My patches pass a flag to the user callback, requesting it to try and cancel the request. It works but it's kind of icky, there's probably a nicer way to do it.
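As a rough illustration of the shape being described here (a sketch only, not Ben's actual patches; every name and signature below is invented for the example):

```c
#include <uv.h>

/* Hypothetical: invoked by libuv for every work request instead of
   dispatching to its own pool. When `cancel` is nonzero, the callback is
   being asked to try to cancel a previously submitted request. */
typedef void (*uv_submit_work_cb)(uv_work_t *req, int cancel);

/* Hypothetical: installed once at start-up, before any work is queued.
   node.js could point this at the V8 platform thread pool so both
   libraries share one set of worker threads. */
int uv_replace_threadpool(uv_submit_work_cb submit_cb);
```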

@saghul (Member, Author) commented Nov 28, 2014

> I've been thinking about this and I'm not sure it's the right approach. For one, uv_tpool_init() won't cover all possible use cases that users have. For example, there is no way to express that some requests should have a higher priority than others. (Granted, libuv does not have such functionality anyway but see below.)

Shouldn't that be decided when the work is queued and not in init?

> I've been working on an approach where the user can plug in their own thread pool at start-up. You provide one or two callbacks, and libuv, instead of creating its own thread pool, just hands off everything to the user callback. It would be ideal for node.js because that means we can share the V8 thread pool (what V8 uses for background compilation and such) with libuv.

> Circling back to the priority example I mentioned above, that becomes trivial to implement: the user callback inspects the work request, shuffles it into the right work queue and done (and we didn't have to lift a finger to make it work.)

Hum, that's indeed a good match for node. My idea is to have a handle that does that kind of work, and have the user explicitly decide on which handle the work is done. Maybe there is a way for both ideas to work together, even though I haven't thought it all the way through: have some uv_executor_t base handle, plus a uv_executor_queue_work function which does what you'd expect. Then uv_tpool_t would be a "subclass" of uv_executor_t, and we'd also have some uv_custom_executor_t which takes callbacks and provides helper functions for plugging in some other mechanism for doing the work. Thoughts?
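For illustration, a minimal sketch of how those pieces could fit together; uv_executor_t, uv_executor_queue_work, uv_tpool_t and uv_custom_executor_t are just the names floated above, and every field and signature here is an assumption, not real libuv API:

```c
#include <uv.h>

/* Hypothetical base "executor" handle; concrete executors embed it. */
typedef struct uv_executor_s {
  /* the usual uv_handle_t fields (loop, type, data, ...) would go here */
  void *work_queue;  /* private: pending requests */
} uv_executor_t;

/* Queue work on a specific executor instead of the global pool. */
int uv_executor_queue_work(uv_executor_t *executor,
                           uv_work_t *req,
                           uv_work_cb work_cb,
                           uv_after_work_cb after_work_cb);

/* The built-in thread pool would be one concrete "subclass"... */
typedef struct uv_tpool_s {
  uv_executor_t base;
  unsigned int nthreads;
} uv_tpool_t;

/* ...and a user-pluggable executor another, wrapping callbacks that
   receive each work request and dispatch it however the user wants. */
typedef struct uv_custom_executor_s {
  uv_executor_t base;
  void (*submit)(uv_executor_t *executor, uv_work_t *req);
  int  (*cancel)(uv_executor_t *executor, uv_work_t *req);
} uv_custom_executor_t;
```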

> How it should interact with uv_cancel() is somewhat of an open question. My patches pass a flag to the user callback, requesting it to try and cancel the request. It works but it's kind of icky, there's probably a nicer way to do it.

Do you have these patches somewhere so I can check how the interaction with V8 would work, and maybe try to propose something more complete than what I mentioned above?

@bminer commented Dec 3, 2014

I appreciate that there are developers like you guys out there trying to find good solutions to these problems. Mixing uv_getaddrinfo and uv_fs_* in the same thread pool just creates headaches in certain situations. Just ask me for examples. ;)

@beevik commented Mar 9, 2015

+1 on the user-supplied threadpool proposal.

Setting up a threadpool might be considered a bit of an advanced thing to do in order to use libuv, so I assume the default libuv initialization will still rely on libuv-supplied threadpool(s). If so, will the Windows version unify the Windows API threadpool and libuv-internal threadpool?

@saghul (Member, Author) commented Mar 10, 2015

> Setting up a threadpool might be considered a bit of an advanced thing to do in order to use libuv, so I assume the default libuv initialization will still rely on libuv-supplied threadpool(s).

Actually, no. The idea is that the user creates a threadpool handle and then passes it when calling any function that runs work in the threadpool; see the uv_getaddrinfo example. The thread management is done internally.

> If so, will the Windows version unify the Windows API threadpool and libuv-internal threadpool?

This LEP doesn't contemplate that, and right now I'm not sure it could be done because those threads are internal. FWIW I'm not even sure we do any allocations in those threads, which are only used in some special cases when emulating IOCP on pipes.
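A minimal sketch of that usage model, with the caveat that the exact signatures (including the pool-aware getaddrinfo variant) are assumptions made up for illustration, not existing libuv API:

```c
#include <uv.h>

/* Hypothetical threadpool handle from this LEP; threads are managed
   internally, the user only creates the handle and passes it around. */
typedef struct uv_tpool_s uv_tpool_t;

int uv_tpool_init(uv_loop_t *loop, uv_tpool_t *pool, unsigned int nthreads);

/* Hypothetical getaddrinfo variant that runs on an explicit pool, so DNS
   lookups and uv_fs_* requests no longer have to share a single queue. */
int uv_getaddrinfo_tp(uv_loop_t *loop, uv_tpool_t *pool,
                      uv_getaddrinfo_t *req, uv_getaddrinfo_cb cb,
                      const char *node, const char *service,
                      const struct addrinfo *hints);
```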

@saper commented Apr 16, 2015

I am not sure priorities can handle the situation where two different families of work-queue clients are co-dependent in some way. (My case is described in sass/node-sass#857 (comment): a collision of custom queue jobs with the filesystem API.)

What would really help (I think that would be the TrueNodeWay) is the ability to periodically tell the queue runner to let other queue items run (something like process.nextTick, maybe). Running under node I cannot talk to V8 from the threadpool worker thread, but it should be possible to talk to the queue runner.

Instead of waiting on a lock in https://github.com/sass/node-sass/blob/master/src/callback_bridge.h#L99 I could tell the queue manager "Please let other callbacks run in the meantime".

An even nicer and more flexible solution (maybe out of scope for this PR) would be to implement coroutines in libuv: something like a lightweight pipe between work-queue threads, or even real coroutines using libcoro, so one work item can be postponed and another switched to, with the thread context saved in the queue. uv_work_yield() anyone? :)
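A sketch of what that might look like from inside a work callback; uv_work_yield() does not exist in libuv and the readiness check is application-defined, so this only illustrates the cooperative-yield idea:

```c
#include <uv.h>

int uv_work_yield(void);              /* hypothetical: park this item, run another */
extern int result_ready(void *data);  /* application-defined readiness check */

static void work_cb(uv_work_t *req) {
  /* Instead of blocking the pool thread on a lock, yield so other queued
     callbacks can make progress in the meantime. */
  while (!result_ready(req->data))
    uv_work_yield();
}
```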

@NatalieWolfe

Is there any movement on this proposal? I would like to see libuv's thread pooling... adjusted, and would like to create a counter-proposal that follows @bnoordhuis's suggestion more closely if this one has gone as stale as it appears.

@saghul (Member, Author) commented Dec 16, 2015

@NatalieWolfe there are a number of patches/issues/proposals lingering around about the thread pool. I plan to open a "meta-issue" to discuss the state of affairs soon and will @ you there. This proposal and what Ben proposed are not necessarily orthogonal; this one does need some updating, though.

@trevnorris (Member)

@bnoordhuis

> For example, there is no way to express that some requests should have a higher priority than others. (Granted, libuv does not have such functionality anyway but see below.)

This sounds a lot like GCD and its dispatch queue priority (https://developer.apple.com/library/ios/documentation/Performance/Reference/GCD_libdispatch_Ref/#//apple_ref/doc/constant_group/dispatch_queue_priority_t).
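For reference, this is roughly how GCD's fixed-priority global queues are used from plain C with libdispatch (real Apple API, shown only to illustrate the comparison; it is not libuv code):

```c
#include <dispatch/dispatch.h>
#include <stdio.h>
#include <unistd.h>

static void do_work(void *ctx) {
  printf("running: %s\n", (const char *)ctx);
}

int main(void) {
  /* GCD exposes fixed-priority global queues; items on the HIGH queue are
     scheduled ahead of items on the DEFAULT and LOW queues. */
  dispatch_queue_t high = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
  dispatch_queue_t low  = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0);

  dispatch_async_f(low,  "low-priority job",  do_work);
  dispatch_async_f(high, "high-priority job", do_work);

  sleep(1);  /* crude wait so the async jobs finish before exit (demo only) */
  return 0;
}
```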

@bminer commented Mar 28, 2016

Any update on this?

@saghul (Member, Author) commented Mar 29, 2016

Sorry, no updates yet.

@saghul merged commit 8a71ce7 into libuv:master on Sep 19, 2016
@saghul deleted the threadpool_handle branch on September 19, 2016 at 23:06
@saghul (Member, Author) commented Sep 19, 2016

REJECTED: Part of the libuv core team gathered at Node Interactive Amsterdam 2016 and decided to withdraw this course of action and focus on a pluggable API for threaded operations. A new LEP will be written about it.

@bminer commented Sep 23, 2016

@saghul - Could you provide more info when it becomes available? Thanks!

@saghul (Member, Author) commented Sep 23, 2016

Sure, I'll drop a note here. Cheers!

@bminer commented Apr 6, 2017

Any updates on threads and uv_queue_work? Another problem in Node is hanging threads from user code that prevent Node from terminating (threads are usually blocked in an I/O system call, e.g. read).

What is the consensus on the path forward?

@davisjam

@bminer I have resumed working on a PR for a pluggable threadpool. Follow at #1726. I'm currently in the process of remembering things after summer break.
