Improve DAG validation for pipelines with hundreds of tasks #5421

I can see it's an optimization originating from the fact that no topo-sort order is actually built or printed (as "independent" tasks that are also "final" ones are not even put into the `independentTasks` set).
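
To make the observation concrete, here is a minimal, self-contained sketch of that style of validation: a DFS that detects cycles without ever materializing a topological order. This is a hypothetical illustration, not the actual Tekton code; the function name and map shape are assumptions.

```go
package main

import "fmt"

// findCycle reports whether the dependency map contains a cycle.
// It uses DFS with three visitation states and never builds or
// returns a topological order. deps maps a task name to the names
// of the tasks it depends on.
func findCycle(deps map[string][]string) error {
	const (
		unvisited = iota // zero value: node not yet seen
		inProgress       // node is on the current DFS stack
		done             // node and all its dependencies are cycle-free
	)
	state := make(map[string]int, len(deps))

	var visit func(name string) error
	visit = func(name string) error {
		switch state[name] {
		case inProgress:
			return fmt.Errorf("cycle detected involving task %q", name)
		case done:
			return nil
		}
		state[name] = inProgress
		for _, dep := range deps[name] {
			if err := visit(dep); err != nil {
				return err
			}
		}
		state[name] = done
		return nil
	}

	for name := range deps {
		if err := visit(name); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	deps := map[string][]string{
		"build": nil,
		"test":  {"build"},
		"lint":  {"build"},
	}
	fmt.Println(findCycle(deps)) // <nil>: no cycle
}
```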

This is holding a huge struct (object) in real-world applications. It is set to the entire `pipelineTask` specification along with a list of `params`, `when` expressions, the entire specification when `taskSpec` is specified, etc. The `pipelineTask` specification is not required at this point since all the dependencies (`runAfter`, task results in `params` and `when` expressions, and `from`) are calculated before calling `dag.Build`. All we need is just the `pipelineTask.Name`, i.e. `HashKey()`. We have room for improvement in the future to avoid passing the entire blob around.
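
A sketch of the direction being suggested, with hypothetical, simplified types (not the actual Tekton `dag` package definitions): the graph node stores only a stable key, not the full task specification.

```go
package dag

// Node links tasks by a stable key rather than holding the entire
// pipelineTask specification. The key is assumed to be whatever
// HashKey() returns, i.e. the task name.
type Node struct {
	Key  string  // e.g. pipelineTask.Name via HashKey()
	Prev []*Node // tasks this node depends on
	Next []*Node // tasks that depend on this node
}

// Graph maps each task key to its node.
type Graph struct {
	Nodes map[string]*Node
}
```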

I had troubleshot up to this point; thanks for taking it further, appreciate your efforts 🙏

Maybe some performance check too?

Yes please, so that we do not run into it in the future, thanks!

The problem with performance checks is that they can be inherently flaky because of the variable performance of the test nodes. If we do add performance checks, I would suggest for now only logging the execution time. We can collect such timings for a while and then decide on an acceptable bar for the execution time.
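
A minimal sketch of the logging-only approach in a Go test (the test body is a placeholder; only the standard `testing` and `time` packages are assumed):

```go
package dag_test

import (
	"testing"
	"time"
)

func TestBuildGraphWithHundredsOfTasks_Success(t *testing.T) {
	start := time.Now()

	// ... build and validate the large pipeline graph here ...

	// Log the duration rather than asserting on it, so variable
	// test-node performance cannot make the test flaky. Timings can
	// be collected from CI logs and an acceptable bar decided later.
	t.Logf("graph build took %v", time.Since(start))
}
```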

That's a great idea @afrittoli, where do we store those timings?

I was running the test you had introduced in #3524 (a huge thanks to you for introducing such a test): pipeline/pkg/reconciler/pipelinerun/resources/pipelinerunstate_test.go, line 1404 at 99404d5. I ran it locally multiple times yesterday and it took 60 seconds 😲 (without this PR). At the time this test was introduced, it took less than the default timeout of 30 seconds, based on the PR description (if I am reading it right). Many changes have been introduced since then; we really need a way to flag us (nightly performance tests) when we introduce any delay.

Nightly performance tests or, as you are suggesting, collecting timings over time. I am fine with either option.

Yes. The `TestBuildGraphWithHundredsOfTasks_Success` test is, in a way, a kind of performance test (like the one I added at the time), because if things slow down significantly, the tests will eventually time out.

If we collected test execution times and graphed them over time, or if we had some dedicated nightly performance test, we would be able to see a change in execution time sooner than just waiting for the tests to time out. That is something we would need to set up as part of the CI infrastructure. Would you like to create an issue about that?

Yup, definitely. We had a PR from @guillaumerose, #4378, which didn't materialize, but at least most of us, including @vdemeester and @imjasonh, were on board with the idea of running nightly performance tests.

❤️

We have a policy of testing exported functions only (generally), but I think in this case it makes sense to test `findCyclesInDependencies` directly!

NIT: I wonder if, instead of exporting it for tests, we could have the test for `findCyclesInDependencies` in a dedicated test module in the `dag` package? Not asking to change this yet, I'd like to see what others think, and we could also change this in a different PR in that case.

@tektoncd/core-maintainers wdyt?
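
For reference, a sketch of the white-box pattern being discussed, relying on Go's convention that a `_test.go` file declared in the same package can reach unexported identifiers. The file path, map shape, and the signature of `findCyclesInDependencies` are assumptions here and may differ from the actual code.

```go
// Hypothetical file: pkg/reconciler/pipeline/dag/dag_internal_test.go
// Declaring the package as "dag" (not "dag_test") gives the test
// access to the unexported findCyclesInDependencies without exporting it.
package dag

import "testing"

func TestFindCyclesInDependencies(t *testing.T) {
	// Assumed signature: findCyclesInDependencies(map[string][]string) error,
	// where each task maps to the names of the tasks it depends on.
	deps := map[string][]string{
		"a": {"b"},
		"b": {"a"}, // a <-> b forms a cycle
	}
	if err := findCyclesInDependencies(deps); err == nil {
		t.Error("expected a cycle to be detected")
	}
}
```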