`timeouts` field is not working as expected. #6137
thanks for filing @VeereshAradhya! The "tasks" timeout for a PipelineRun is for the cumulative time taken by all of its tasks. If you'd like to set a timeout for an individual task, you can use…
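Presumably the field being referred to in the truncated comment above is the per-task `timeout` in the Pipeline spec; a minimal sketch under that assumption (resource names are hypothetical):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: example-pipeline     # hypothetical name
spec:
  tasks:
    - name: build
      # Per-task timeout: applies to this task's TaskRun individually,
      # unlike the cumulative timeouts.tasks budget on the PipelineRun.
      timeout: "30m"
      taskRef:
        name: build-task     # hypothetical Task
```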
The pipeline that I used had only one task. In the example below, I specified the tasks timeout as 2 hours (cumulative), so the TaskRun should have a timeout of 2 hours, right?
I tested the same thing on…
/reopen
@VeereshAradhya: Reopened this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Yes, we changed the behavior of timeouts between the versions you've listed here. We previously applied the time remaining from `timeouts.tasks` as the timeout to each TaskRun. Now, the PipelineRun controller will cancel any running tasks after `timeouts.tasks` has elapsed. (This helped fix a race condition plus issues with retried TaskRuns.) The tasks timeout isn't applied to the individual TaskRun timeout (even if there's only one), but the TaskRuns will still be canceled after the tasks timeout has elapsed.
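To make the distinction concrete, a minimal sketch (names and durations are illustrative; `timeouts.pipeline` and `timeouts.tasks` are the PipelineRun fields discussed here):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: example-run          # hypothetical name
spec:
  pipelineRef:
    name: example-pipeline   # hypothetical Pipeline
  timeouts:
    pipeline: "2h"    # budget for the whole PipelineRun
    tasks: "1h30m"    # cumulative budget shared by all non-finally tasks
# Under the behavior described above, each TaskRun is created with the
# default timeout; the PipelineRun controller cancels any TaskRuns still
# running once the cumulative 1h30m tasks budget has elapsed.
```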
Oh, this is actually confusing now.
Was there any deprecation period or feature flag here (like when we changed the…
The documentation could probably make this clearer, that…
@vdemeester can you give more detail on what workloads were breaking due to this change? It was introduced in #5134
@lbernick On our downstream pipelines, we used to have a given amount for…
Not sure I understand; I would expect the PipelineRun controller to time out the tasks after…
This is what @VeereshAradhya wrote above, to be fair. With…
Ah I see, sorry I didn't quite understand this before! I doubt we'd want to revert these changes; is there anything you'd suggest doing to mitigate the impact? We could definitely edit the release notes to make the impact clearer.
Oh yeah, not really asking to revert the change (yet? 😈). It's more "for next time we need to be more careful". The "naming" is still a bit confusing to me (…
👍🏼 |
@VeereshAradhya PTAL at #6171 and let me know if these updated docs are clearer!
Also, release notes have been updated. |
Expected Behavior

When timeouts for the pipeline and tasks are provided in the `timeouts` section of a PipelineRun, both the PipelineRun and TaskRun timeouts should be updated.

Actual Behavior

When timeouts for the pipeline and tasks are provided in the `timeouts` section of a PipelineRun, only the PipelineRun timeout is updated and the TaskRun gets created with the default timeout.

Steps to Reproduce the Problem
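A minimal reproduction sketch under the assumptions above (the embedded sleep task and all names are hypothetical; the `timeouts` fields are from the PipelineRun API):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: timeout-repro        # hypothetical name
spec:
  pipelineSpec:
    tasks:
      - name: sleep-task
        taskSpec:
          steps:
            - name: sleep
              image: busybox
              script: |
                sleep 30
  timeouts:
    pipeline: "2h"
    tasks: "2h"
# Observed: the PipelineRun gets the 2h timeout, but the TaskRun it creates
# keeps the cluster-default spec.timeout (60 minutes unless reconfigured),
# not 2h.
```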
Additional Info
Kubernetes version:

Output of `kubectl version`:

Tekton Pipeline version:

Output of `tkn version` or `kubectl get pods -n tekton-pipelines -l app=tekton-pipelines-controller -o=jsonpath='{.items[0].metadata.labels.version}'`: