[WIP] POC for handling large results. #4838
Conversation
@ScrapCodes: Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected. Please follow our release note process to remove it. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED This pull-request has been approved by: The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Force-pushed from 5e84d54 to 998165a
Force-pushed from 998165a to eb22fa9
The following is the coverage report on the affected files.
A POC for supporting results with values larger than the capacity limit of the termination message.

What it does:
1. When results are larger than a certain threshold, push them to remote storage.
2. Instead of storing large contents in the result (i.e. the termination message), swap them with a remote COS URL.
3. When results are referred to by params or in a command, the entrypointer script prefetches the contents and replaces the link (starting with cos://) with the filename containing the contents of the result, or the actual contents (TBD, since results are large).

An example is provided: examples/v1beta1/pipelineruns/pipelinerun-large-results.yaml

The POC does not do the following, in order to keep the code change minimal and easily comprehensible:
1. Does not make this feature opt-in.
2. Uses the string cos:// as a marker for processing values as links. This could be improved, for example by using a new result type.
3. Uses a cloud object storage directly as the remote store. Ideally it should support a CSI so that a user can plug in storage of their choice.
4. The size threshold is not yet configurable.
5. Misses out on integration tests. This is a TODO (if the community likes the overall approach).
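For readers of this thread, a minimal Go sketch of the flow the commit message describes, not the PR's actual code: `maxResultSize`, `uploadToCOS` and `fetchFromCOS` are hypothetical stand-ins for the real threshold and object-storage client.

```go
// Package largeresults sketches the offload/prefetch idea described above.
package largeresults

import (
	"fmt"
	"strings"
)

// Hypothetical threshold; the PR notes the real one is not yet configurable.
const maxResultSize = 2048

// offloadIfLarge swaps a large result value for a cos:// link before it is
// written to the termination message.
func offloadIfLarge(name, value string) (string, error) {
	if len(value) <= maxResultSize {
		return value, nil
	}
	// Push the real contents to remote object storage and keep only the link.
	url, err := uploadToCOS(name, []byte(value))
	if err != nil {
		return "", err
	}
	return url, nil // e.g. "cos://results-bucket/<name>"
}

// resolveResult is what a consuming step's entrypoint would do: prefetch the
// contents when the value is a cos:// link, otherwise use the value as-is.
func resolveResult(value string) (string, error) {
	if !strings.HasPrefix(value, "cos://") {
		return value, nil
	}
	data, err := fetchFromCOS(value)
	if err != nil {
		return "", err
	}
	return string(data), nil
}

// uploadToCOS and fetchFromCOS stand in for a real object-storage client.
func uploadToCOS(name string, data []byte) (string, error) {
	return fmt.Sprintf("cos://results-bucket/%s", name), nil
}

func fetchFromCOS(url string) ([]byte, error) {
	return nil, fmt.Errorf("not implemented in this sketch")
}
```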
Force-pushed from eb22fa9 to b403fd2
@ScrapCodes: The following tests failed, say /retest to rerun all failed tests:
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
1 similar comment
Thanks @dibyom for pointing me at this today! cc @chitrangpatel @jerop in case you haven't seen this yet - I think this approach addresses some of the permissions concerns with the log based solution we were talking about in tektoncd/community#745. This is using OIDC from the entrypoint flow (similar to what I mentioned), though it's currently interfacing with an external IdP rather than the in-cluster IdP.

@ScrapCodes great start! The main piece of feedback I have is that having the user pod entrypoint interface directly with external OIDC storage buckets is going to make it hard to ensure that every service account has the access and permissions it needs, and doesn't have broad permissions to alter other Task results it shouldn't have access to.

If you haven't seen it already, you might be interested in some of the ideas detailed in TEP-0086 around the dedicated HTTP API - we could auth from the entrypoint with pod OIDC creds via service account token projection, like you have here, to a Tekton-managed API. That API can verify the JWT matches the correct Task pod, then handle the storage and any additional auth. Creds can be centralized in the Tekton control plane (so no need to worry about granting user pods access to external resources), and the API could be swapped out for different impls for external storage like this is doing, or for updating the TaskRun object directly, similar to what @chitrangpatel is doing with the log based proof of concept.
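To make the dedicated-API idea above concrete, here is a rough Go sketch assuming the entrypoint sends its projected service-account token as a bearer token and the server validates it with a Kubernetes TokenReview. The endpoint path, token audience and pluggable Store backend are assumptions, not an agreed-upon API.

```go
// Package resultapi sketches a Tekton-managed result storage API server.
package resultapi

import (
	"context"
	"io"
	"net/http"
	"strings"

	authnv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

type Server struct {
	Client kubernetes.Interface
	// Store is the pluggable backend: COS, GCS, a local volume, etc.
	Store func(ctx context.Context, key string, data []byte) error
}

// ServeHTTP accepts PUT /results/{key} from a Task pod, verifies the caller's
// projected service-account JWT via TokenReview, then hands the payload to the
// configured backend.
func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	token := strings.TrimPrefix(r.Header.Get("Authorization"), "Bearer ")
	review, err := s.Client.AuthenticationV1().TokenReviews().Create(r.Context(),
		&authnv1.TokenReview{Spec: authnv1.TokenReviewSpec{
			Token:     token,
			Audiences: []string{"tekton-results"}, // hypothetical audience for the projected token
		}}, metav1.CreateOptions{})
	if err != nil || !review.Status.Authenticated {
		http.Error(w, "unauthorized", http.StatusUnauthorized)
		return
	}
	// The server could also check review.Status.User against the TaskRun's
	// service account, so a pod can only write its own Task's results.
	body, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	key := strings.TrimPrefix(r.URL.Path, "/results/")
	if err := s.Store(r.Context(), key, body); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.WriteHeader(http.StatusCreated)
}
```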
Thank you @wlynch, I agree on the auth part, it is not perfect. To confirm that I have understood correctly:
Will we be limited to supporting only HTTP-transport-based stores (e.g. COS) as storage? Do you think that is acceptable? On the brighter side, it does not need a sidecar!
The only bit that would need to be HTTP (though it could also be another transport mechanism like protobuf, etc. - it doesn't really matter as long as the API is agreed upon) would be from the user pod to the Tekton-controlled storage API. The Tekton-controlled storage API server can then interface with whatever backing storage it wants once it has authorized the user pod - COS, GCS, its own local volume, etc. If an external service supported the same result storage API and federated OIDC creds from the cluster, then you could skip the internal API and go straight to the external service.
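For completeness, a hedged sketch of what the client side of such an API might look like from the entrypoint, assuming a hypothetical token projection path and service URL rather than anything defined in this PR:

```go
package resultapi

import (
	"bytes"
	"fmt"
	"net/http"
	"os"
)

// PushResult uploads one result value to the Tekton-managed storage API,
// authenticating with the pod's projected service-account token.
func PushResult(apiURL, name string, value []byte) error {
	// Hypothetical projection path for the service-account token.
	token, err := os.ReadFile("/var/run/secrets/tekton/token")
	if err != nil {
		return err
	}
	req, err := http.NewRequest(http.MethodPut,
		fmt.Sprintf("%s/results/%s", apiURL, name), bytes.NewReader(value))
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "Bearer "+string(bytes.TrimSpace(token)))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusCreated {
		return fmt.Errorf("result upload failed: %s", resp.Status)
	}
	return nil
}
```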
@ScrapCodes: PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Issues go stale after 90d of inactivity.
/lifecycle stale
Send feedback to tektoncd/plumbing.