enable termination of 'outdated' long callbacks when page changes #2588
Comments
Very interested in this patch as well.
Hello @JamesKunstle, please note my response here: https://community.plotly.com/t/cancelling-outdated-background-callbacks-when-switching-pages/76765
When trying to use this, I ran into an issue with duplicate outputs on the ids. This is most likely due to the cancel being listed in a callback; we should probably alter these to allow for duplicate outputs.
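For context, here is a minimal sketch of the kind of setup that hits this error, assuming hypothetical component ids (`url`, `graph-a`, `load-a`, ...) and a diskcache-backed manager: two background callbacks that list the same `cancel` Input.

```python
import diskcache
from dash import Dash, DiskcacheManager, Input, Output, callback, dcc, html

# Hypothetical layout and ids, just to reproduce the shape of the problem.
cache = diskcache.Cache("./cache")
app = Dash(__name__, background_callback_manager=DiskcacheManager(cache))

app.layout = html.Div([
    dcc.Location(id="url"),
    html.Button("Load A", id="load-a"),
    html.Button("Load B", id="load-b"),
    dcc.Graph(id="graph-a"),
    dcc.Graph(id="graph-b"),
])

@callback(
    Output("graph-a", "figure"),
    Input("load-a", "n_clicks"),
    background=True,
    cancel=[Input("url", "pathname")],  # same cancel Input as the callback below
    prevent_initial_call=True,
)
def load_a(n_clicks):
    return {}  # placeholder figure

@callback(
    Output("graph-b", "figure"),
    Input("load-b", "n_clicks"),
    background=True,
    cancel=[Input("url", "pathname")],  # re-using this Input is what raised the duplicate-output error
    prevent_initial_call=True,
)
def load_b(n_clicks):
    return {}  # placeholder figure
```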
Here's a direct reference to the code where this change could be made:
Spot on @JamesKunstle @BSd3v. I think (a) the cancel callback should always allow duplicates, since it's not actually doing anything with that output, and (b) we should explore a dedicated feature that perhaps even by default cancels background callbacks on page changes, since the output would generally be discarded in that case anyhow. It would need a way to opt out, of course, or we could make the feature opt-in if we think this would constitute a breaking change. @T4rk1n thoughts?
We should add
Found more issues when trying to fix this:
@alexcjohnson I tried combining all the cancel inputs into a single callback, but there is a validation error:
That seems like a duplicate of another validation we have that takes the validation_layout and is configurable with
If there are background callbacks on separate pages (or in whatever way not all in the DOM simultaneously), then we'd have some inputs present and others missing, right? That's not going to work unless we make optional inputs, something we've talked about at various points in the past. But I guess due to the
Yes, that is what I was looking to do; it turned out to be a pretty heavy refactor. One last thing I can try is to isolate all the callback inputs into single callbacks, one for each input. I think that might work.
Create a single cancel callback for each unique input.
@T4rk1n @BSd3v I saw that a PR was merged for this, allowing the 'cancel' Input callback to be reused for multiple background callbacks. I wanted to check in on whether this issue should also be closed, or whether something else needs to be finished first. Is it reasonable to expect this patch to be in the next major release?
@JamesKunstle It was a partial fix; I think there is still value in adding auto-cancellation of running tasks when leaving the page, so we may leave this issue open. What do you think, @alexcjohnson?
Yes, it's going in the next release.
@T4rk1n That's awesome, thank you so much for working on this; I really appreciate it as a community member. This has been a performance blocker for us (we're relatively resource-limited, so overhead, especially superfluous overhead, is blocking). Having cancellation happen at the dash/dash-renderer level would be ideal, especially given that dash-renderer knows which page it's on, which page it WAS on, and which promised components haven't finished. If I can be of any help with anything else, please let me know. Otherwise I'll just be enthusiastically following this issue.
@alexcjohnson @T4rk1n Following up on this issue: has there been any further discussion on this?
@T4rk1n is this one still relevant?
Is your feature request related to a problem? Please describe.
Long callbacks run for all Figures on all of my application's pages. When I switch between pages quickly, the tasks in the backend Celery queue execute in order, regardless of whether the older requests are already outdated.
For example: if I'm on page 1 and I navigate to page 2, then quickly to page 3 and page 4, the long callbacks for the Figures corresponding to pages 2, 3, and 4 will run in order, slowing the delivery of the Figures pertinent to page 4.
However, this isn't the case if I refresh a single page multiple times. If I'm on page 2 and I refresh the page 5 times, the long callbacks for the Figures associated with page 2 don't execute 5 times in order: the first 4 'render requests' are 'revoked' in the Celery queue, and only the final request completes.
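For reference, here is a rough sketch of the setup being described (the Redis URLs, component ids, and page layout are assumptions): each page's Figure is produced by a Celery-backed background callback, so navigating quickly between pages queues one task per visited page, and Celery works through them in order.

```python
from celery import Celery
from dash import Dash, CeleryManager, Input, Output, callback

# Assumed broker/backend URLs, for illustration only.
celery_app = Celery(
    __name__,
    broker="redis://localhost:6379/0",
    backend="redis://localhost:6379/1",
)

app = Dash(
    __name__,
    use_pages=True,  # multi-page app; each page registers its own callbacks
    background_callback_manager=CeleryManager(celery_app),
)

# In a page module (e.g. pages/page_2.py): a long-running Figure callback
# that is queued on Celery whenever its input changes.
@callback(
    Output("page-2-graph", "figure"),
    Input("page-2-dropdown", "value"),
    background=True,
)
def build_page_2_figure(value):
    # ...expensive computation here...
    return {}
```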
Describe the solution you'd like
I'd like a page switch to trigger the termination of newly 'outdated' long callbacks in the backend.
Describe alternatives you've considered
In the universe of Dash objects, I considered implementing a 'Sentry' that watches the URL and triggers a 'cancel' function that's bound to all long callbacks, cancelling them via the 'cancelable' interface of callbacks. In the Dash execution graph, however, I don't think I can guarantee that this callback would always precede the scheduling of new, up-to-date callbacks, so I don't think this is a reasonable solution.
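As a concrete sketch of that alternative (hypothetical ids, and assuming a `dcc.Location` with id `url` is in the layout): the existing `cancel` interface can be pointed at the URL so that a page change cancels the running background callback. The ordering concern above still applies, since nothing guarantees the cancellation lands before the next page's callbacks are scheduled.

```python
from dash import Input, Output, callback

@callback(
    Output("page-2-graph", "figure"),
    Input("page-2-dropdown", "value"),
    background=True,
    # 'Sentry'-style cancellation: any URL change cancels this job. Nothing
    # guarantees this runs before the next page's callbacks are scheduled.
    cancel=[Input("url", "pathname")],
)
def build_page_2_figure(value):
    return {}
```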
Additional context
Here's an idea for a patch:
The dash_renderer frontend is aware of when it's going to try to execute a callback for a job that is still running. If a to-be-called callback's output list matches the output list of the job that the frontend is waiting for, it issues an 'oldJob' ID to the backend via the request headers. Source
In the backend, receiving these 'oldJob' IDs triggers that job's termination in the Celery backend. Source
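For illustration only (this is not Dash's actual implementation), terminating an outdated job on the Celery side, given its id, roughly amounts to revoking the task:

```python
from celery import Celery

celery_app = Celery(__name__, broker="redis://localhost:6379/0")

def terminate_old_job(job_id: str) -> None:
    # Revoke the task, and terminate it if a worker has already started it.
    celery_app.control.revoke(job_id, terminate=True)
```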
It's evident that the frontend is doing job bookkeeping, tracking the 'output' list of jobs that have been scheduled in the backend but haven't returned to the frontend. If the frontend could also track the page that a job is intended for, and compare that to window.location.pathname when the job cleanup is already happening, the 'oldJob' param could be set and the backend could clean up any currently running long callbacks for the previous page.
A parameter could be set on the Dash Python object to enable or disable this feature. It generally would NOT be mutually exclusive with memoization, because cancellations wouldn't always happen; it would just be more conservative, and memoization wouldn't always have an opportunity to happen.