monkeypatch kubernetes to avoid ThreadPool problems #169
kubernetes creates client objects in a number of places, e.g. every time a Watch is instantiated.
Each of these objects creates a max-size thread pool for async requests, which bogs down the process once enough client objects are instantiated (#165). Since we don't use async requests, these pools go entirely unused.
We set up shared clients in #128 to reduce the number of client objects created, but there are some places where we can't prevent clients from being instantiated (Watches), so the new EventReflector made the problem much worse by spawning N-CPUs threads for every spawn.
To solve this at the root, api_client is monkeypatched so that these ThreadPools are never created in the first place. A patch has been submitted upstream to swagger-codegen, which generates the ThreadPool-creating code in the kubernetes client.
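To illustrate the idea, here is a minimal sketch of the lazy-pool monkeypatch technique. The `ApiClient` below is a hypothetical stand-in for the swagger-generated client (the real one does more in `__init__`); the point is replacing the eager `ThreadPool` attribute with a property that only builds the pool if an async request actually needs it:

```python
from multiprocessing.pool import ThreadPool

# Hypothetical stand-in for the swagger-generated ApiClient, which
# eagerly creates a pool of N-CPUs threads in __init__ even when no
# async requests will ever be made.
class ApiClient:
    def __init__(self):
        self.pool = ThreadPool()  # eager: threads spawn immediately


# Monkeypatch: skip pool creation at construction time and expose
# `pool` as a lazy property instead.
def _lazy_init(self):
    self._pool = None  # no ThreadPool, no threads

def _get_pool(self):
    # Only build the pool on first access, i.e. only if async
    # requests are actually used.
    if self._pool is None:
        self._pool = ThreadPool()
    return self._pool

ApiClient.__init__ = _lazy_init
ApiClient.pool = property(_get_pool)

client = ApiClient()
assert client._pool is None  # no threads created for this client
```

With this patch applied, instantiating many clients (one per Watch, one per spawn) costs almost nothing, while code paths that do use the pool still work unchanged.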
Related to #153, which creates a Watch per spawn: without monkeypatching kubernetes, we cannot avoid creating a very large number of ApiClient objects, and thereby thread pools.
The alternative is to go back to pinning kubernetes-3.0.