Multi_channel: allow more than one instance per program with different configurations #50
Conversation
Signed-off-by: Edwin Török <edvin.torok@citrix.com>
There can be multiple multi_channel (or indeed pool) instances active at the same time, each with a different configuration. We cannot necessarily safely reuse any IDs issued by one channel on another channel, so ensure that we use a unique key per channel by allocating it together with the channel.
Signed-off-by: Edwin Török <edvin.torok@citrix.com>
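The shape of the fix described in this commit can be sketched as follows (a simplified stand-in, not the actual Multi_channel code; the record fields and types here are illustrative): the DLS key moves from a module-level global into the channel record, so each channel allocates a fresh key at creation time.

```ocaml
(* Sketch only -- requires OCaml 5 for Domain.DLS.
   Field and type names are simplified stand-ins. *)

type dls_state = { mutable id : int }

type 'a mchan = {
  channels : 'a Queue.t array;          (* one queue per domain *)
  dls_key  : dls_state Domain.DLS.key;  (* per-channel, not global *)
}

let make n = {
  channels = Array.init n (fun _ -> Queue.create ());
  (* Allocating the key here ties it to this channel, so an id issued
     against one channel can never be used to index another channel's
     (possibly smaller) array. *)
  dls_key  = Domain.DLS.new_key (fun () -> { id = -1 });
}
```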
@ctk21 is the right person to look at this one.
There was no special reason for the channel domain state to be global. Indeed it is reasonable and an improvement to allow multiple pools!
The only concern that comes to mind is if we need to be stronger about preventing tasks being accidentally used between two pools. For example Domain A1 is in pool A but does an await on a task executing in pool B, that could be a source of bugs in the absence of an argument for why it works.
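The scenario described above can be sketched against the domainslib Task API (hedged: this assumes the `setup_pool`/`async`/`await` signatures of the Task module at the time; it illustrates the concern rather than prescribing behaviour):

```ocaml
(* Sketch of the cross-pool await concern -- do not take this as a
   working program; it shows the shape of the potentially unsafe use. *)
module Task = Domainslib.Task

let pool_a = Task.setup_pool ~num_domains:2 ()
let pool_b = Task.setup_pool ~num_domains:2 ()

let () =
  Task.run pool_a (fun () ->
    (* A promise created on pool B... *)
    let p = Task.async pool_b (fun () -> 42) in
    (* ...awaited from inside pool A.  While awaiting, a worker of
       pool A may be suspended or recruited to run pool A's tasks,
       but completion is signalled through pool B's channel -- hence
       the request for an argument for why this is safe. *)
    ignore (Task.await pool_a p))
```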
```diff
 let get_local_state mchan =
-  let dls_state = Domain.DLS.get dls_key in
-  if dls_state.id >= 0 then dls_state
+  let dls_state = Domain.DLS.get mchan.dls_key in
+  if dls_state.id >= 0 then begin
+    assert (dls_state.id < Array.length mchan.channels);
```
The assert is going to execute every time, which is sad, but I guess it can stay if you are strongly in favour.
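For context on the cost being discussed: OCaml's `assert e` is compiled away entirely under the compiler's `-noassert` flag (only `assert false` is kept, since its type is `'a`), so the per-call check exists only in builds that keep assertions on. A minimal illustration, with a simplified stand-in for the state record:

```ocaml
type dls_state = { id : int }

(* Compiling with -noassert (e.g. ocamlopt -noassert) removes the
   assert below at compile time; otherwise it costs one comparison
   per call, as noted in the review. *)
let check_id (s : dls_state) (channels : _ array) =
  assert (s.id < Array.length channels);
  s
```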
In reply to @ctk21's review comment of 11 October 2021 (quoted above):
Hi,
I thought about making the pool part of the promise, but I'm worried that with multiple pools you can construct a deadlock scenario if you end up doing a Task.await while inside the async of another pool.
Needs more thought on how to handle this situation (or at least how to detect the deadlock, like the Mutex module sometimes can).
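The Mutex behaviour alluded to here: since OCaml 4.12 the stdlib uses error-checking mutexes where the platform provides them, so a self-deadlock is reported as a `Sys_error` exception instead of hanging. A small sketch (hedged: on platforms without error-checking mutexes the second lock would block instead):

```ocaml
(* Relocking a mutex the current thread already holds raises
   Sys_error on platforms with error-checking mutexes. *)
let m = Mutex.create ()

let () =
  Mutex.lock m;
  match Mutex.lock m with            (* would deadlock otherwise *)
  | () -> assert false
  | exception Sys_error _ -> print_endline "self-deadlock detected"
```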
Signed-off-by: Sora Morimoto <sora@morimoto.io>
#51 I'll rebase my PR once this is in, and think about what needs to be done to run multiple pools safely.
@Sudha247 Perhaps this is relevant to the …
- …aml-cache: Use a random number as the cache prefix to disable cache in CI
- …_4.12+domains+effects_as_cache_key: use last 4.12+domains+effects hash as the cache-key
- Make domainslib build/run with OCaml 5.00 after PR #704
- …dlers: Utilise effect handlers
Hi @edwintorok, Jan has created a PR that fixes the conflicts to your branch #58 (comment). Would you prefer if Jan sends a PR to your …
The test relies on reading backtrace contents, so we need to ensure that backtraces are on (by default they'd be off).
Signed-off-by: Edwin Török <edwin@etorok.net>
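Backtrace recording can be enabled either from the environment (`OCAMLRUNPARAM=b`) or programmatically via `Printexc.record_backtrace`; both are standard runtime facilities. A minimal sketch of the programmatic route:

```ocaml
(* Turn on backtrace recording before any exception is raised;
   the equivalent environment setting is OCAMLRUNPARAM=b. *)
let () = Printexc.record_backtrace true

let () =
  try failwith "boom"
  with e ->
    (* Note: may still be empty if compiled without debug info (-g). *)
    let bt = Printexc.get_backtrace () in
    Printf.printf "%s: %d bytes of backtrace\n"
      (Printexc.to_string e) (String.length bt)
```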
Thanks a lot @jmid, I've updated this PR to include your changes.
LGTM. Merging.
Multi_channel had shared global state in the form of dls_new_key, which caused ids assigned by one multi_channel to be used by another (possibly smaller) channel, resulting in out-of-bounds array indexing. One possible way to fix this is to remove the global key and use a per-channel key.
The key here also has some mutexes and condition variables that you probably don't want shared across different multi-channels.
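A minimal illustration of how the shared key goes wrong (simplified stand-in code, not the actual Multi_channel internals; the helper `get_id` and the array contents are hypothetical): an id cached against a large channel is later used to index a smaller channel's array.

```ocaml
(* Requires OCaml 5 for Domain.DLS.  With ONE global key, the cached
   id survives across channels of different sizes. *)
type dls_state = { mutable id : int }
let shared_key = Domain.DLS.new_key (fun () -> { id = -1 })

let get_id channels =
  let s = Domain.DLS.get shared_key in
  if s.id < 0 then s.id <- Array.length channels - 1;
  s.id

let () =
  let big_chan   = Array.make 8 [] in
  let small_chan = Array.make 2 [] in
  ignore big_chan.(get_id big_chan);           (* id 7: valid here *)
  try ignore small_chan.(get_id small_chan)    (* same cached id 7 *)
  with Invalid_argument _ ->
    print_endline "out-of-bounds indexing, as in #43"
```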
Draft PR, because there must've been a reason why this was a shared global variable to begin with.
Appears to fix #43 on my machine.