Improve DAG validation for pipelines with hundreds of tasks #5421
Conversation
Hi @rafalbigaj. Thanks for your PR. I'm waiting for a tektoncd member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
if len(deps[dep]) == 0 {
	independentTasks.Insert(dep)
}
I can see it's an optimization originating from the fact that no topo-sort order is actually built or printed (as "independent" tasks that are also "final" ones are not even put into the `independentTasks` set).
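For context, a minimal sketch of the Kahn's-algorithm-style cycle check discussed in this PR (names and structure are illustrative, not the actual Tekton code): repeatedly strip tasks whose dependencies have all been resolved; anything left over participates in, or transitively depends on, a cycle.

```go
package main

import (
	"fmt"
	"sort"
)

// findCycles sketches the Kahn's-algorithm idea: peel off tasks whose
// dependencies are all resolved; whatever remains is part of a cycle
// (or depends on one). deps maps a task name to its dependencies.
func findCycles(deps map[string][]string) []string {
	remaining := map[string]int{}        // unresolved dependency count per task
	dependents := map[string][]string{}  // reverse edges: dep -> tasks that need it
	for task, ds := range deps {
		remaining[task] = len(ds)
		for _, d := range ds {
			dependents[d] = append(dependents[d], task)
		}
	}
	// Seed the queue with the "independent" tasks (no dependencies).
	var queue []string
	for task, n := range remaining {
		if n == 0 {
			queue = append(queue, task)
		}
	}
	// Remove independent tasks layer by layer.
	for len(queue) > 0 {
		t := queue[0]
		queue = queue[1:]
		delete(remaining, t)
		for _, d := range dependents[t] {
			if _, ok := remaining[d]; ok {
				remaining[d]--
				if remaining[d] == 0 {
					queue = append(queue, d)
				}
			}
		}
	}
	// Leftovers could not be topologically ordered.
	var cyclic []string
	for t := range remaining {
		cyclic = append(cyclic, t)
	}
	sort.Strings(cyclic)
	return cyclic
}

func main() {
	fmt.Println(findCycles(map[string][]string{
		"a": nil, "b": {"a"},
		"c": {"d"}, "d": {"c"}, // c and d form a cycle
	})) // [c d]
}
```

Each task and edge is visited a constant number of times, so the check is linear in tasks plus links, which is what makes it viable for pipelines with hundreds of tasks.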
@@ -549,6 +550,78 @@ func TestBuild_InvalidDAG(t *testing.T) {
	}
}

func TestBuildGraphWithHundredsOfTasks_Success(t *testing.T) {
Maybe add a performance check too?
Yes please, so that we do not run into this in the future. Thanks!
The problem with performance checks is that they can be inherently flaky because of the variable performance of the test nodes. If we do add performance checks, I would suggest for now only logging the execution time. We could collect such timings for a while and then decide on an acceptable bar for execution time.
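A minimal sketch of that log-only approach (all names are illustrative; `buildLargeGraph` stands in for the real graph construction, and in an actual unit test the logging would go through `t.Logf`): measure and report elapsed time without asserting a threshold, so slow CI nodes produce a noisier log line rather than a flaky failure.

```go
package main

import (
	"fmt"
	"log"
	"time"
)

// buildLargeGraph stands in for dag.Build: it wires n tasks into a
// linear chain, task-1 depending on task-0, and so on.
func buildLargeGraph(n int) map[string][]string {
	deps := make(map[string][]string, n)
	for i := 1; i < n; i++ {
		deps[fmt.Sprintf("task-%d", i)] = []string{fmt.Sprintf("task-%d", i-1)}
	}
	return deps
}

func main() {
	start := time.Now()
	deps := buildLargeGraph(500)
	// Log only: in a test this would be t.Logf, visible with `go test -v`,
	// so timings can be collected over time without ever failing a run.
	log.Printf("built a graph of %d tasks in %v", len(deps)+1, time.Since(start))
}
```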
That's a great idea @afrittoli, where do we store those timings?
I was running the test you introduced in #3524 (a huge thanks for introducing it):

	func buildPipelineStateWithLargeDepencyGraph(t *testing.T) PipelineRunState {
I ran it locally multiple times yesterday, and it took 60 seconds 😲 (without this PR).
When this test was introduced, it took less than the default 30-second timeout, based on the PR description (if I am reading it right):
This change adds a unit test that reproduces the issue in https://github.com/tektoncd/pipeline/issues/3521, which used to fail (with a 30s timeout) and now succeeds for pipelines of roughly up to 120 tasks / 120 links. On my laptop, going beyond 120 tasks/links takes longer than 30s, so I left the unit test at 80 to avoid introducing a flaky test in CI. There is still work to do to improve this further; some profiling / tracing work might help.
There have been many changes introduced since then; we really need a way (e.g. nightly performance tests) to flag when we introduce any delay.
Nightly performance tests or, as you are suggesting, collecting timings over time. I am fine with either option.
Yes. The `TestBuildGraphWithHundredsOfTasks_Success` test is, in a way, a kind of performance test (like the one I added at the time), because if things slow down significantly, the tests will eventually time out.
If we collected test execution times and graphed them over time, or if we had a dedicated nightly performance test, we would be able to see a change in execution time sooner than by just waiting for the tests to time out.
That is something we would need to set up as part of the CI infrastructure. Would you like to create an issue about that?
Yup, definitely. We had a PR from @guillaumerose, #4378, which didn't materialize, but at least most of us, including @vdemeester and @imjasonh, were on board with the idea of running nightly performance tests.
/ok-to-test
The following is the coverage report on the affected files.
Looks ok to me now.
@@ -0,0 +1,21 @@
/*
Copyright 2019 The Tekton Authors
2022?
Thanks @rafalbigaj for this! Do you have any numbers on the performance improvement with the new algorithm?
DAG validation rewritten using Kahn's algorithm to find cycles in task dependencies. The original implementation, as pointed out in tektoncd#5420, is the root cause of poor validation webhook performance, which fails on the default timeout (10s).
@afrittoli let me provide, as an example, results from:
Benchmarked on: 2.3 GHz 8-Core Intel Core i9; 16 GB 2400 MHz DDR4
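Numbers like these are easiest to reproduce with Go's built-in benchmark machinery. A sketch (the workload here is an illustrative stand-in, not the actual Tekton validation code), using `testing.Benchmark` so it can run outside of `go test -bench`:

```go
package main

import (
	"fmt"
	"testing"
)

// buildDeps creates a linear chain of n tasks (illustrative workload).
func buildDeps(n int) map[string][]string {
	deps := make(map[string][]string, n)
	for i := 1; i < n; i++ {
		deps[fmt.Sprintf("task-%d", i)] = []string{fmt.Sprintf("task-%d", i-1)}
	}
	return deps
}

// countEdges stands in for the validation pass being measured.
func countEdges(deps map[string][]string) int {
	total := 0
	for _, ds := range deps {
		total += len(ds)
	}
	return total
}

func main() {
	// testing.Benchmark adaptively picks b.N to get a stable ns/op figure.
	result := testing.Benchmark(func(b *testing.B) {
		deps := buildDeps(500)
		b.ResetTimer() // exclude setup from the measurement
		for i := 0; i < b.N; i++ {
			countEdges(deps)
		}
	})
	fmt.Println(result) // e.g. "  123456   9876 ns/op" (machine-dependent)
}
```

In the real repo the measured body would call the validation entry point; committing such a benchmark alongside the fix would make regressions like the one described above visible with a single `go test -bench` run.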
Thanks @rafalbigaj for this. The code and test coverage look good to me.
Just one possible NIT, but nothing blocking for this PR.
/approve
// exports for tests

var FindCyclesInDependencies = findCyclesInDependencies
We have a policy of (generally) testing exported functions only, but I think in this case it makes sense to test `findCyclesInDependencies` directly!
NIT: I wonder if, instead of exporting for tests, we could have the test for `findCyclesInDependencies` in a dedicated test module in the `dag` package?
Not asking to change this yet; I'd like to see what others think, and we could also change this in a different PR if needed.
@tektoncd/core-maintainers wdyt?
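For readers unfamiliar with the pattern being debated: the `export_test.go` convention places an alias like this in a `_test.go` file, which Go compiles only during `go test`, so the unexported function never becomes part of the package's public API. A compressed single-file sketch (the stub body is illustrative; in the real layout the alias and the implementation live in separate files of the `dag` package):

```go
package main

import "fmt"

// findCyclesInDependencies is the unexported implementation; this stub
// stands in for the real function in the dag package.
func findCyclesInDependencies(deps map[string][]string) map[string][]string {
	_ = deps
	return nil // stub: the real function returns any cycles it finds
}

// In the real repo this alias lives in dag/export_test.go, making the
// function reachable from white-box tests without exporting it to importers.
var FindCyclesInDependencies = findCyclesInDependencies

func main() {
	fmt.Println(FindCyclesInDependencies(nil) == nil) // true
}
```

The alternative floated in the comment, an in-package test file (`package dag` rather than `package dag_test`), reaches unexported identifiers directly and avoids the alias entirely.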
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: afrittoli, Udiknedormin. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
@@ -163,9 +200,7 @@ func addLink(pt string, previousTask string, nodes map[string]*Node) error {
		return fmt.Errorf("task %s depends on %s but %s wasn't present in Pipeline", pt, previousTask, previousTask)
	}
	next := nodes[pt]
	if err := linkPipelineTasks(prev, next); err != nil {
I had been troubleshooting up to this point; thanks for taking it further. I appreciate your efforts 🙏
@@ -163,9 +200,7 @@ func addLink(pt string, previousTask string, nodes map[string]*Node) error {
		return fmt.Errorf("task %s depends on %s but %s wasn't present in Pipeline", pt, previousTask, previousTask)
	}
	next := nodes[pt]
This is holding a huge struct (object) in real-world applications. It is set to the entire `pipelineTask` specification, along with a list of `params`, `when` expressions, the entire specification when `taskSpec` is specified, etc. The `pipelineTask` specification is not required at this point, since all the dependencies (`runAfter`, task results in params and when expressions, and `from`) are calculated before calling `dag.Build`. All we need is just the `pipelineTask.Name`, i.e. `HashKey()`. We have room for improvement in the future to avoid passing the entire blob around.
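A sketch of the slimming-down idea from this comment (types and helpers are illustrative, not the actual Tekton API): since the DAG only needs identity and edges, the graph can be keyed on `HashKey()` strings instead of holding whole task structs.

```go
package main

import "fmt"

// PipelineTask stands in for the full Tekton struct, which carries
// params, when expressions, an optional embedded taskSpec, and more
// fields the DAG does not need.
type PipelineTask struct {
	Name string
	// ... many more fields irrelevant to graph construction
}

// HashKey mirrors the identity used to index nodes in the graph.
func (pt PipelineTask) HashKey() string { return pt.Name }

// buildEdges keeps only the string keys, not the structs themselves,
// since all dependencies are computed before the graph is built.
func buildEdges(tasks []PipelineTask, deps func(PipelineTask) []string) map[string][]string {
	edges := make(map[string][]string, len(tasks))
	for _, pt := range tasks {
		edges[pt.HashKey()] = deps(pt)
	}
	return edges
}

func main() {
	tasks := []PipelineTask{{Name: "build"}, {Name: "test"}}
	deps := func(pt PipelineTask) []string {
		if pt.Name == "test" {
			return []string{"build"}
		}
		return nil
	}
	fmt.Println(buildEdges(tasks, deps)) // map[build:[] test:[build]]
}
```

Once edges are plain strings, cycle detection and traversal never touch the large structs, which reduces both memory pressure and copying for pipelines with hundreds of tasks.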
@tektoncd/core-maintainers I added the "needs-cherry-pick" label for this item. While there was no explicit functional regression, there has been a performance degradation over time since v0.36.x, which is significantly eased (solved) by this PR. I would propose doing a new series of minor releases to include it.
/lgtm
/cherrypick release-v0.36.x
@afrittoli: #5421 failed to apply on top of branch "release-v0.36.x".
/cherrypick release-v0.37.x
@afrittoli: new pull request created: #5430
/cherrypick release-v0.38.x
/cherrypick release-v0.39.x
@afrittoli: new pull request created: #5431
@afrittoli: new pull request created: #5432
DAG validation rewritten using Kahn's algorithm to find cycles in task dependencies.
The original implementation, as pointed out in #5420, is the root cause of poor validation webhook performance, which fails on the default timeout (10s).
Changes
Submitter Checklist
As the author of this PR, please check off the items in this checklist:
- Meets the Tekton contributor standards (including functionality, content, code)
- Has a kind label: you can add one by adding a comment on this PR that contains `/kind <type>`. Valid types are bug, cleanup, design, documentation, feature, flake, misc, question, tep.
/kind bug
Release Notes