Propagate labels from PipelineRun to TaskRun to Pods #488
Conversation
Previously, labels were propagated from TaskRun to Build, but not from Build to Pod. This PR fixes that, and also propagates labels from PipelineRun to TaskRun, so that users are more easily able to filter, identify, and query Pods created by Build Pipeline.

In all cases, if a user specifies a label whose key overlaps with one of the labels that we set automatically, the user's label is ignored.

As mentioned in taskrun_test.go, the current tests do not fully demonstrate that the TaskRun to Pod propagation works correctly, since they use Build as an intermediate form for comparison. I am happy to try to address that in this PR by introducing a builder for Pod and using that in the tests, but the changes were already getting a bit heavy in taskrun_test.go due to wrapping all of the `BuildSpec`s in a `Build`, so I wanted to submit this first to solicit feedback.

Fixes #458

CC @abayer @jstrachan
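To make the override behavior concrete, here is a minimal sketch of the merge semantics described above (`mergeLabels` is an illustrative name, not the PR's actual code):

```go
// Sketch: merge user-provided labels with automatically managed ones.
// Writing the managed labels after the user's means a user label whose
// key collides with a managed key is overwritten, i.e. effectively ignored.
func mergeLabels(userLabels, managedLabels map[string]string) map[string]string {
	merged := make(map[string]string, len(userLabels)+len(managedLabels))
	for k, v := range userLabels {
		merged[k] = v
	}
	for k, v := range managedLabels {
		merged[k] = v // managed keys take precedence
	}
	return merged
}
```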
Thanks for your pull request. It looks like this may be your first contribution to a Google open source project (if not, look below for help). Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please visit https://cla.developers.google.com/ to sign. Once you've signed (or fixed any issues), please reply here.

What to do if you already signed the CLA:
- Individual signers
- Corporate signers
Working on the CLA 😄
```go
wantBuildSpec buildv1alpha1.BuildSpec
name          string
taskRun       *v1alpha1.TaskRun
wantBuild     *buildv1alpha1.Build
```
This is the reason for most of the changes in this file. I could isolate the specific test I added to avoid most of the changes if desired, or go even farther and move to using a Pod builder as discussed in the TODO I added:
```go
// TODO: Using MakePod means that a diff will not catch issues
// specific to the Build to Pod translation (e.g. if labels are
// not propagated in MakePod). To avoid this issue we should create
// a builder for Pods and use that instead.
```
Any thoughts/feedback would be appreciated.
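For reference, a Pod builder in the style of the existing test builders might look roughly like this (a hypothetical sketch; `Pod`, `PodOp`, and `PodLabel` are illustrative names, not code from this PR):

```go
import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// PodOp is an operation that mutates a Pod under construction.
type PodOp func(*corev1.Pod)

// Pod creates a Pod with the given name and namespace, then applies ops.
func Pod(name, namespace string, ops ...PodOp) *corev1.Pod {
	p := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
	}
	for _, op := range ops {
		op(p)
	}
	return p
}

// PodLabel adds a single label to the Pod's metadata.
func PodLabel(key, value string) PodOp {
	return func(p *corev1.Pod) {
		if p.ObjectMeta.Labels == nil {
			p.ObjectMeta.Labels = map[string]string{}
		}
		p.ObjectMeta.Labels[key] = value
	}
}
```

A test could then build its expected Pod with e.g. `Pod("build-name", "default", PodLabel("pipeline.knative.dev/taskRun", "test-taskrun"))` and diff it against the actual output, so that issues in the Build-to-Pod translation are caught directly.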
Oh, I made a similar change in PR #491 to not use MakePod and instead just write out the expected Container specs, which is a bigger change, but I had to validate the containers' names.
Yeah, that's exactly what I had in mind; these two PRs will definitely have a ton of conflicts with each other in that file. I think it would be good to go with your approach, but to add a builder for creating Pods/Containers similar to the other builders we have.
CLA should be signed now.

CLAs look good, thanks!
```go
// non-existent build.
// TODO(jasonhall): Just set this directly when creating a Pod from a
// TaskRun.
pod.OwnerReferences = []metav1.OwnerReference{
```
I moved this into `MakePod`, but I'm not sure if that's OK or if we are trying to keep the `MakePod` code in sync with upstream for now.
So, the `MakePod` function is not used at all right now. We still rely on the knative/build API to create a Build instead of using the `MakePod` above.
Hmm, maybe I am misunderstanding something. As far as I can tell, we import the resources package containing pod.go from build-pipeline and then use that version here, so as of #326 we aren't actually using knative/build, and in my local testing the changes I made to `MakePod` did affect the output (maybe only the test code is using the local version of `MakePod`?). This commit in #326 about forking pod.go instead of vendoring it seems to support my understanding. Is there some kind of flag enabling different behavior in testing vs production usage, or am I missing some key branch in a code path somewhere?
@tejal29: @imjasonh submitted a PR to remove the dependency on the Build API. Currently there is a code dependency, but TaskRun is not creating Build CRD objects anymore.

@dwnusbaum: There is no need to maintain code similarity with the Build code base. If it makes sense to pull the logic into a separate function, then please feel free to do so.
```diff
@@ -360,15 +360,19 @@ func (c *Reconciler) createTaskRun(logger *zap.SugaredLogger, rprt *resources.Re
 		taskRunTimeout = nil
 	}

+	labels := make(map[string]string, len(pr.ObjectMeta.Labels)+2)
+	for key, val := range pr.ObjectMeta.Labels {
+		labels[key] = val
```
Looks like the keys need to be validated based on my reading of https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/. Will that be handled by Kubernetes itself when the PipelineRun/TaskRun are created, in which case we don't need to do anything since these labels only come from those resources, or do we need to do some validation here?
Yes, I am pretty sure it's handled by the Kubernetes API, and we don't need to do anything here.
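(For reference, if validation were ever needed on our side, `k8s.io/apimachinery` exposes the same syntactic checks the API server applies; a minimal sketch, where `validateLabels` is an illustrative helper, not code from this repo:)

```go
import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation"
)

// validateLabels runs the same syntactic checks the Kubernetes API server
// applies: keys must be qualified names, and values must satisfy the
// label-value character and length rules.
func validateLabels(labels map[string]string) error {
	for k, v := range labels {
		if errs := validation.IsQualifiedName(k); len(errs) > 0 {
			return fmt.Errorf("invalid label key %q: %v", k, errs)
		}
		if errs := validation.IsValidLabelValue(v); len(errs) > 0 {
			return fmt.Errorf("invalid label value %q: %v", v, errs)
		}
	}
	return nil
}
```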
/ok-to-test
test/builder/owner_reference.go (outdated)
```diff
@@ -0,0 +1,28 @@
+/*
+Copyright 2018 The Knative Authors
```
2019
Great PR @dwnusbaum. Thank you for submitting this. From a glance, the PR looks good. I will review in detail tomorrow. 👍
Looks good to me, but will let @shashwathi and @tejal29 finish their review.
```diff
@@ -220,6 +219,14 @@ func MakePod(build *v1alpha1.Build, kubeclient kubernetes.Interface) (*corev1.Po
 	}
 	annotations["sidecar.istio.io/inject"] = "false"

+	labels := map[string]string{}
```
nit: Similar to the PipelineRun pattern of adding labels, this could be `make(map[string]string, len(build.ObjectMeta.Labels)+1)`.
```diff
+	for key, val := range build.ObjectMeta.Labels {
+		labels[key] = val
+	}
+	// TODO: Redundant with TaskRun label set in `taskrun.makeLabels`. Should
```
Previously the `makeLabels` function added an additional label to the existing TaskRun labels, but an additional label is also added on this line. So can we delete the `makeLabels` function and pass the TaskRun labels directly to Build? Does that make sense @dwnusbaum?
`makeLabels` and `MakePod` add a label with different keys, but the same value. The key added in `makeLabels` is `pipeline.knative.dev/taskRun`, and the one added in `MakePod` is `build.knative.dev/buildName`. I wasn't sure if we needed to keep the `buildName` label for compatibility or anything, but if not then yeah I think we should just delete it.

I think we need to keep `makeLabels`, since that passes labels from `TaskRun` to `Build`, so that `MakePod` can pass them from `Build` to `Pod`, but if we delete the extra label added in `MakePod` then we could just set `Pod.ObjectMeta.Labels = Build.ObjectMeta.Labels` so we don't need to create a new map in `MakePod`. Otherwise we could make `MakePod` take `TaskRun` directly instead of `Build` so that we could delete `makeLabels`, but that seems like more work (or we could just add it as an additional parameter, but maybe that is confusing). What do you think?
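For illustration, the simplification being discussed would reduce the label handling in `MakePod` to roughly this (a sketch, assuming the extra `buildName` label is dropped; not the final code):

```go
// Sketch: with the extra build.knative.dev/buildName label gone, the Build's
// labels (which makeLabels already populated from the TaskRun) can be reused
// directly, with no copy into a new map.
pod.ObjectMeta.Labels = build.ObjectMeta.Labels
```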
The follow-up for this is #494.
> I wasn't sure if we needed to keep the `buildName` label for compatibility or anything, but if not then yeah I think we should just delete it.

I think there is no value in adding the `buildName` label anymore.

> I think we need to keep `makeLabels`,

I like this idea.

> then we could just set `Pod.ObjectMeta.Labels = Build.ObjectMeta.Labels`

👍
Awesome work. Great test coverage. Thank you for submitting such a detailed PR 👍 🎆

I have left some minor code cleanup suggestions. I also think this would be a good feature to have e2e test coverage for. Can you please consider updating the Go e2e test of PipelineRun with labels and adding an assertion that TaskRuns/Pods were created with the expected labels? Let me know if you need any help with this. 👍
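For what it's worth, a rough sketch of such an assertion might look like this (`assertPodsHaveLabel` is a hypothetical helper; the `List` call here matches the pre-context-aware client-go signature vendored at the time):

```go
import (
	"testing"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// assertPodsHaveLabel fails the test if no Pods in the namespace match the
// given label selector, e.g. "pipeline.knative.dev/taskRun=<name>".
func assertPodsHaveLabel(t *testing.T, c kubernetes.Interface, namespace, selector string) {
	pods, err := c.CoreV1().Pods(namespace).List(metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		t.Fatalf("failed to list Pods with selector %q: %v", selector, err)
	}
	if len(pods.Items) == 0 {
		t.Errorf("expected at least one Pod with labels %q, found none", selector)
	}
}
```

The helper could be called once with a selector for the custom label set on the PipelineRun and once for the automatically added `pipeline.knative.dev/taskRun` label.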
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dwnusbaum, shashwathi. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
/assign @shashwathi

/lgtm

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@shashwathi Thanks for taking a look! I will try to update the e2e tests to use a label and let you know if I need any help.
Previously, we added a label with key "build.knative.dev/buildName" to Pods. That label was redundant with a different label whose key is "pipeline.knative.dev/taskRun" as of beda1f8. Since we are no longer depending on Knative Build at runtime, it seems best to remove the redundant label. Follow-up to tektoncd#488.
The e2e tests in test/pipelinerun.go now assert that custom labels set on the PipelineRun are propagated through to the TaskRun and Pod, and that the static labels that are added to all TaskRuns and Pods are propagated correctly as well. In addition, documentation has been added to explain that labels are propagated and to mention the specific labels that are added automatically to generated TaskRuns and Pods. Follow-up to tektoncd#488.