We have a latency-sensitive application that typically answers requests in 15ms, with a worst-case deadline of 200ms. To reduce latency we have disabled garbage collection, and we therefore rely on the max_requests option to recycle workers so memory does not grow unbounded.
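For context, a minimal sketch of such a setup (the file name and exact values are illustrative, not our actual configuration): garbage collection is disabled in each worker via gunicorn's `post_fork` hook, and `max_requests` recycles workers periodically.

```python
# gunicorn.conf.py -- illustrative sketch of the setup described above
import gc

workers = 2

# Recycle each worker after it has handled this many requests, so memory
# from uncollected garbage does not grow without bound (value is illustrative).
max_requests = 10000

def post_fork(server, worker):
    # Disable the cyclic garbage collector in each worker process
    # to avoid GC pauses on the request path.
    gc.disable()
```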
Unfortunately, this introduces some downtime in our system whenever workers are restarted. To avoid this, we would like to change how worker restarts work: first spawn a new worker and, once it is initialised, kill the previous one. This is similar in spirit to #2196, but for the max_requests restarts. Is something like that planned as a feature, or is there a workaround?
I've attached a plot of the request latencies in which you can see the spikes caused by worker restarts. We currently run 2 workers; the small spikes occur when 1 worker is being restarted, while the large ones occur when both restart at the same time.
There is no way to spawn a replacement worker before the old one exits in this case, as it would require taking a lock on the arbiter. I would rather suggest adding another worker: that reduces the chance that 2 workers are killed at the same time and ensures you always have a spare worker to take the load while one is going down.
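A rough sketch of that suggestion (values are illustrative): raise the worker count by one so a spare can absorb load during a recycle. Not part of the suggestion above, but gunicorn's built-in max_requests_jitter setting may also help here, since it randomizes each worker's restart threshold and makes simultaneous restarts less likely.

```python
# gunicorn.conf.py -- sketch of the suggestion above (values are illustrative)

# One spare worker, so the remaining two can absorb the load
# while a worker is being recycled.
workers = 3

max_requests = 10000

# Adds randint(0, max_requests_jitter) to each worker's restart threshold,
# so workers are unlikely to hit max_requests at the same moment.
max_requests_jitter = 1000
```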