Idle threads never terminated using default Hystrix thread pool #1242
@mattrjacobs So if I assign a coreSize of 100 to a command, will it spin up all 100 threads and hold onto them so that they can't be used by anyone else? Or does it only mean that however many threads have been used won't get terminated? This can be detrimental to the system's health. Can you suggest a workaround?
Yes, these threads get spun up and stay around. A couple of points to make around that: …
The response time in your case is around 0.2 secs, if I am not wrong. While I …
What gives you reason to believe that the Hystrix threads are creating unnecessary load?
Let's say I have a bare minimum of 10 command groups, each having a pool size of …
We have never found idle threads to be a source of load in our system. That's why I'm asking what reasons you have for suspecting they are a source of load in yours.
I am thinking this because these threads, which are idle (once used by …
Could you let me know this: let's say I have a command group with a core size of 100. Now at some point of …
No. By design, these are only used for Hystrix run() methods and sit idle the remainder of the time.
Is there any plan in future updates to terminate the used …
+1 Unfortunately, if one happens to use large …
@mohan-mishra Yes, this ticket tracks the work to do so. I haven't had any time to work on it yet. The description in the first comment describes the proposed implementation. For your maximum-concurrency question, there is a command-level method to return this value for a specific command: … For a thread pool, this is the method to use: …
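The exact method references were lost from the comment above. As a hedged sketch, assuming the Hystrix 1.5.x metrics API (the command and pool key names here are hypothetical), reading the rolling maximum concurrency might look like this:

```java
import com.netflix.hystrix.HystrixCommandKey;
import com.netflix.hystrix.HystrixCommandMetrics;
import com.netflix.hystrix.HystrixThreadPoolKey;
import com.netflix.hystrix.HystrixThreadPoolMetrics;

public class MaxConcurrencyProbe {
    public static void main(String[] args) {
        // Command-level: peak concurrent executions in the rolling window.
        // "MyCommand" is a hypothetical command key.
        HystrixCommandMetrics commandMetrics =
                HystrixCommandMetrics.getInstance(HystrixCommandKey.Factory.asKey("MyCommand"));
        if (commandMetrics != null) {
            System.out.println("max concurrent executions: "
                    + commandMetrics.getRollingMaxConcurrentExecutions());
        }

        // Thread-pool-level: peak active threads in the rolling window.
        // "MyPool" is a hypothetical thread-pool key.
        HystrixThreadPoolMetrics poolMetrics =
                HystrixThreadPoolMetrics.getInstance(HystrixThreadPoolKey.Factory.asKey("MyPool"));
        if (poolMetrics != null) {
            System.out.println("max active threads: "
                    + poolMetrics.getRollingMaxActiveThreads());
        }
    }
}
```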
@bltb Yeah, you're right. This is not a pain we've felt acutely, as the production system we work on has fairly small thread pools (max around 25) and many other memory consumers that make the Hystrix thread usage fairly innocuous. I'll add this to the 1.5.6 milestone and get this cleaned up.
We just ran into this issue today. We have a system that talks to many, many other systems. It is a common service that allows key events in our system to have a publish-subscribe model. We were using Hystrix to protect ourselves from poorly behaved subscribers, and we have run into that model overwhelming the system with threads. For example, with 1,000 subscribers the defaults produce 10,000 threads, which is more than the max threads per process allowed at the system level. It would be nice to create thread pools with a core size of 1 (or 0), have maxSize set to 5, and reap threads that have been idle for 60 seconds or so. Thanks!
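For illustration, here is a minimal sketch of the pool shape described above, using the plain JDK ThreadPoolExecutor (this is not Hystrix's own pool construction, just the desired semantics):

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ElasticPoolSketch {
    public static void main(String[] args) {
        // core 1, max 5, idle threads reaped after 60 seconds.
        // A SynchronousQueue is needed so the pool actually grows toward
        // maximumPoolSize instead of queueing work behind the core thread.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 5, 60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());

        // Optionally let even the core thread time out, for a "core size 0" feel.
        pool.allowCoreThreadTimeOut(true);

        pool.execute(() -> System.out.println("notified subscriber"));
        pool.shutdown();
    }
}
```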
I think I have the same issue (or maybe I have another leak). I have a service that proxies many requests to a large number of other services. I created a Hystrix thread pool based on … From what I can see on … that may lead to …
@mattrjacobs do you think it is possible to release threads with our own strategy using https://github.com/Netflix/Hystrix/wiki/Plugins? Or do I have to switch to …
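As a hedged sketch of the plugin route being asked about: Hystrix's HystrixConcurrencyStrategy lets a plugin supply its own ThreadPoolExecutor, so idle-thread reaping can be enabled there. This is illustrative only, not the fix that landed in master:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

import com.netflix.hystrix.HystrixThreadPoolKey;
import com.netflix.hystrix.strategy.HystrixPlugins;
import com.netflix.hystrix.strategy.concurrency.HystrixConcurrencyStrategy;
import com.netflix.hystrix.strategy.properties.HystrixProperty;

public class ReapingConcurrencyStrategy extends HystrixConcurrencyStrategy {

    @Override
    public ThreadPoolExecutor getThreadPool(HystrixThreadPoolKey threadPoolKey,
                                            HystrixProperty<Integer> corePoolSize,
                                            HystrixProperty<Integer> maximumPoolSize,
                                            HystrixProperty<Integer> keepAliveTime,
                                            TimeUnit unit,
                                            BlockingQueue<Runnable> workQueue) {
        ThreadPoolExecutor pool = super.getThreadPool(
                threadPoolKey, corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue);
        // Let idle threads (including core threads) time out after keepAliveTime.
        pool.allowCoreThreadTimeOut(true);
        return pool;
    }

    public static void register() {
        // Must be registered before any Hystrix command executes.
        HystrixPlugins.getInstance().registerConcurrencyStrategy(new ReapingConcurrencyStrategy());
    }
}
```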
@kakawait, whilst @mattrjacobs has already fixed this in master via #1371, if you really have (or need) many running threads, your actual issue may be due to an operating-system limit on threads per process.
Quick update on this issue: I'm going to add a piece of config which enables this feature and defaults it to false. I don't want any Hystrix users to end up with misconfigured thread pools when upgrading to 1.5.7. Instead, by opting in, the user is then on the hook for verifying that thread-pool sizes are as they desire. When doing internal testing, I found that the custom …
@bltb I understand that if I create many (maybe too many) … Even if I put … I was more talking about … Btw, my use case is an edge case; I really think I will change my strategy to …
I just merged in #1399, which adds a property (…). I'll update the docs around the new configuration options and then close this issue.
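The property name is elided above. Assuming it matches the options later documented on the Hystrix Configuration wiki for 1.5.7 (treat the exact property names as an assumption), the opt-in would look something like this via Archaius, which Hystrix reads configuration from:

```java
import com.netflix.config.ConfigurationManager;

public class OptInExample {
    public static void main(String[] args) {
        // Opt in: let maximumSize diverge from coreSize (assumed property name).
        ConfigurationManager.getConfigInstance().setProperty(
                "hystrix.threadpool.default.allowMaximumSizeToDivergeFromCoreSize", true);

        // With the opt-in enabled, these two can differ, and idle threads
        // above coreSize are released after the keep-alive time.
        ConfigurationManager.getConfigInstance().setProperty(
                "hystrix.threadpool.default.coreSize", 2);
        ConfigurationManager.getConfigInstance().setProperty(
                "hystrix.threadpool.default.maximumSize", 10);
        ConfigurationManager.getConfigInstance().setProperty(
                "hystrix.threadpool.default.keepAliveTimeMinutes", 1);
    }
}
```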
So if I have to keep core and max pool size different, I need to set … Any ETA for the 1.5.7 release with these changes? I don't see a due date on the 1.5.7 milestone.
@pmohankumar That's correct. I need to get a few other PRs reviewed, and hope to cut the 1.5.7 release early next week. Thanks for your patience.
I just added the new configuration options to the Configuration Wiki. Closing this issue now.
1.5.7 is released now |
Hystrix currently uses the `coreSize` threadpool config for both `ThreadPoolExecutor.corePoolSize` and `ThreadPoolExecutor.maximumPoolSize`. The effect of this is that all Hystrix thread pools are fixed-size. That also implies that the `keepAlive` config is only used when a plugin overrides the default Hystrix thread pool.
This issue will investigate changing the default Hystrix thread pool to accept `coreSize` and `maximumSize`, and letting idle threads time out.
See the initial discussion in the Hystrix Google Group: https://groups.google.com/forum/#!topic/hystrixoss/lT4CZgt-KRk