feat(executor): Switch to use SDK and poll-based resource status checking #5364
Conversation
…king Signed-off-by: terrytangyuan <terrytangyuan@gmail.com>
Signed-off-by: terrytangyuan <terrytangyuan@gmail.com>
This reverts commit 9c7954d. Signed-off-by: terrytangyuan <terrytangyuan@gmail.com>
Signed-off-by: terrytangyuan <terrytangyuan@gmail.com>
```diff
@@ -90,6 +90,9 @@ func initExecutor() *executor.WorkflowExecutor {
 	clientset, err := kubernetes.NewForConfig(config)
 	checkErr(err)

+	restClient := clientset.RESTClient()
```
@alexec We have a winner here! We can instantiate the RESTClient from the existing clientset without introducing a new direct dependency (`k8s.io/apiextensions-apiserver` was already in `go.sum`, partially for this reason). The rest of the code remains the same. This is also what `kubectl` uses under the hood (should've looked into its implementation in the first place!).
Codecov Report
```
@@           Coverage Diff            @@
##           master    #5364    +/-   ##
==========================================
+ Coverage   11.60%   16.26%    +4.65%
==========================================
  Files          84      243      +159
  Lines       32932    43657    +10725
==========================================
+ Hits         3821     7099     +3278
- Misses      28584    35586     +7002
- Partials      527      972      +445
```
Continue to review the full report at Codecov.
I'm not sure we want to change from watch. Watch gives rapid feedback for short-running resources, and it only transfers the data needed to do so. Instead, could we focus on making the watch code more robust? There are a few examples in the code of a pattern like the following that can be used:

```go
for {
	select {
	case <-ctx.Done():
		return ctx.Err()
	case event, ok := <-w.ResultChan():
		// other checks
	}
}
```
That's one of the items I brought up for discussion in #4467.
In other words, maintaining watches for custom resources is not very controllable, especially when used with certain executors. Here's the reply from @jessesuen in that thread:
So as long as we are able to control the interval we should be fine. We've experimented with this at very large scale and this more controllable approach works well for us.
See discussions in #4467. Main changes are:
- Switch to poll-based resource status checking (similar to `kubectl`) so it's more configurable and friendly to monitoring tools.
- Use the SDK (as `kubectl` does) so the code is more concise and maintainable.

cc @alexec @jessesuen
Signed-off-by: terrytangyuan terrytangyuan@gmail.com
Checklist: