Set TTL for argo workflow #1708
Comments
Are you suggesting a default TTL for all pipelines? See pipelines/sdk/python/kfp/dsl/_pipeline.py, line 84 at 3dc73ba.
Is the purpose to reduce storage size? In that case we could set the default to a large value, so that large pipelines don't fail due to a small TTL.
The TTL seems to kick in after the workflow completes, so we don't need to worry about long-running workflows. I think we should consider setting a low TTL by default to GC the resources promptly after the run is recorded in metadata. It would probably affect the logging UX, and we would need to switch to showing the persisted logs (Stackdriver, etc.) by default instead.
With change #1802, this issue can be closed.
Agreed with @paveldournov: we need to find a better way to persist logs with aggressive GC.
Filed an issue for on-prem logs: #1803
@IronPan: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Argo supports TTL: https://github.com/argoproj/argo/blob/master/examples/gc-ttl.yaml
When the DSL compiles a pipeline to an Argo workflow, we should set this field so that the workflow can be GCed after it finishes. The TTL should leave enough time for the persistence agent to persist the data.
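As a minimal sketch of what the compiler change could look like: the helper below sets Argo's `ttlSecondsAfterFinished` field on a workflow manifest represented as a plain dict. The function name and the dict-based manifest are illustrative assumptions, not the actual KFP compiler code.

```python
def set_workflow_ttl(workflow: dict, ttl_seconds: int) -> dict:
    """Hypothetical helper: set Argo's ttlSecondsAfterFinished so the
    workflow is garbage-collected ttl_seconds after it completes.
    Assumes `workflow` is a dict representation of the manifest."""
    workflow.setdefault("spec", {})["ttlSecondsAfterFinished"] = ttl_seconds
    return workflow

# Example: give the persistence agent an hour before GC kicks in.
wf = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Workflow",
    "spec": {"entrypoint": "main"},
}
set_workflow_ttl(wf, 3600)
```

The TTL value would need to be tuned so it comfortably exceeds the persistence agent's reporting interval.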