Propagate labels from PipelineRun to TaskRun to Pods #488

Merged
12 changes: 8 additions & 4 deletions pkg/reconciler/v1alpha1/pipelinerun/pipelinerun.go
@@ -360,15 +360,19 @@ func (c *Reconciler) createTaskRun(logger *zap.SugaredLogger, rprt *resources.Re
taskRunTimeout = nil
}

+	labels := make(map[string]string, len(pr.ObjectMeta.Labels)+2)
+	for key, val := range pr.ObjectMeta.Labels {
+		labels[key] = val
Comment from @dwnusbaum (Contributor, Author), Feb 11, 2019:

Looks like the keys need to be validated, based on my reading of https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/. Will that be handled by Kubernetes itself when the PipelineRun/TaskRun is created (in which case we don't need to do anything, since these labels only come from those resources), or do we need to do some validation here?

Reply from a Contributor:

Yes, I am pretty sure it is handled by the Kubernetes API, so we don't need to do anything here.
+	}
+	labels[pipeline.GroupName+pipeline.PipelineLabelKey] = pr.Spec.PipelineRef.Name
+	labels[pipeline.GroupName+pipeline.PipelineRunLabelKey] = pr.Name

tr := &v1alpha1.TaskRun{
ObjectMeta: metav1.ObjectMeta{
Name: rprt.TaskRunName,
Namespace: pr.Namespace,
OwnerReferences: pr.GetOwnerReference(),
-			Labels: map[string]string{
-				pipeline.GroupName + pipeline.PipelineLabelKey:    pr.Spec.PipelineRef.Name,
-				pipeline.GroupName + pipeline.PipelineRunLabelKey: pr.Name,
-			},
+			Labels: labels,
},
Spec: v1alpha1.TaskRunSpec{
TaskRef: &v1alpha1.TaskRef{
61 changes: 61 additions & 0 deletions pkg/reconciler/v1alpha1/pipelinerun/pipelinerun_test.go
@@ -586,3 +586,64 @@ func TestReconcileWithTimeout(t *testing.T) {
t.Errorf("TaskRun timeout %s should be less than or equal to PipelineRun timeout %s", actual.Spec.Timeout.Duration.String(), prs[0].Spec.Timeout.Duration.String())
}
}

func TestReconcilePropagateLabels(t *testing.T) {
ps := []*v1alpha1.Pipeline{tb.Pipeline("test-pipeline", "foo", tb.PipelineSpec(
tb.PipelineTask("hello-world-1", "hello-world"),
))}
prs := []*v1alpha1.PipelineRun{tb.PipelineRun("test-pipeline-run-with-labels", "foo",
tb.PipelineRunLabel("PipelineRunLabel", "PipelineRunValue"),
tb.PipelineRunLabel("pipeline.knative.dev/pipeline", "WillNotBeUsed"),
tb.PipelineRunSpec("test-pipeline",
tb.PipelineRunServiceAccount("test-sa"),
),
)}
ts := []*v1alpha1.Task{tb.Task("hello-world", "foo")}

d := test.Data{
PipelineRuns: prs,
Pipelines: ps,
Tasks: ts,
}

// create fake recorder for testing
fr := record.NewFakeRecorder(2)

testAssets := getPipelineRunController(d, fr)
c := testAssets.Controller
clients := testAssets.Clients

err := c.Reconciler.Reconcile(context.Background(), "foo/test-pipeline-run-with-labels")
if err != nil {
t.Errorf("Did not expect to see error when reconciling completed PipelineRun but saw %s", err)
}

// Check that the PipelineRun was reconciled correctly
_, err = clients.Pipeline.Pipeline().PipelineRuns("foo").Get("test-pipeline-run-with-labels", metav1.GetOptions{})
if err != nil {
t.Fatalf("Somehow had error getting completed reconciled run out of fake client: %s", err)
}

// Check that the expected TaskRun was created
actual := clients.Pipeline.Actions()[0].(ktesting.CreateAction).GetObject().(*v1alpha1.TaskRun)
if actual == nil {
t.Errorf("Expected a TaskRun to be created, but it wasn't.")
}
expectedTaskRun := tb.TaskRun("test-pipeline-run-with-labels-hello-world-1", "foo",
tb.TaskRunOwnerReference("PipelineRun", "test-pipeline-run-with-labels",
tb.OwnerReferenceAPIVersion("pipeline.knative.dev/v1alpha1"),
tb.Controller, tb.BlockOwnerDeletion,
),
tb.TaskRunLabel("pipeline.knative.dev/pipeline", "test-pipeline"),
tb.TaskRunLabel("pipeline.knative.dev/pipelineRun", "test-pipeline-run-with-labels"),
tb.TaskRunLabel("PipelineRunLabel", "PipelineRunValue"),
tb.TaskRunSpec(
tb.TaskRunTaskRef("hello-world"),
tb.TaskRunServiceAccount("test-sa"),
),
)

if d := cmp.Diff(actual, expectedTaskRun); d != "" {
t.Errorf("expected to see TaskRun %v created. Diff %s", expectedTaskRun, d)
}
}
23 changes: 11 additions & 12 deletions pkg/reconciler/v1alpha1/taskrun/resources/pod.go
@@ -30,7 +30,6 @@ import (

corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
-	"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/client-go/kubernetes"

"github.com/knative/build-pipeline/pkg/credentials"
@@ -220,6 +219,14 @@ func MakePod(build *v1alpha1.Build, kubeclient kubernetes.Interface) (*corev1.Po
}
annotations["sidecar.istio.io/inject"] = "false"

+	labels := map[string]string{}
Comment from a Contributor:

nit: Similar to the pipelinerun pattern of adding labels, this could be make(map[string]string, len(build.ObjectMeta.Labels)+1).
+	for key, val := range build.ObjectMeta.Labels {
+		labels[key] = val
+	}
+	// TODO: Redundant with TaskRun label set in `taskrun.makeLabels`. Should
Comment from a Contributor:

Previously the makeLabels function added one additional label to the existing TaskRun labels, but that additional label is added on this line. So can we delete the makeLabels function and pass the TaskRun labels directly to the Build? Does that make sense @dwnusbaum?

Reply from @dwnusbaum (Contributor, Author), Feb 12, 2019:

makeLabels and MakePod add labels with different keys but the same value. The key added in makeLabels is pipeline.knative.dev/taskRun, and the one added in MakePod is build.knative.dev/buildName. I wasn't sure if we needed to keep the buildName label for compatibility or anything, but if not, then yes, I think we should just delete it.

I think we need to keep makeLabels, since it passes labels from the TaskRun to the Build so that MakePod can pass them from the Build to the Pod. But if we delete the extra label added in MakePod, then we could just set Pod.ObjectMeta.Labels = Build.ObjectMeta.Labels, so we wouldn't need to create a new map in MakePod. Otherwise, we could make MakePod take a TaskRun directly instead of a Build so that we could delete makeLabels, but that seems like more work (or we could add the TaskRun as an additional parameter, but maybe that would be confusing). What do you think?

Reply from @dwnusbaum (Contributor, Author):

The follow-up for this is #494.

Reply from a Contributor:

> I wasn't sure if we needed to keep the buildName label for compatibility or anything, but if not then yeah I think we should just delete it.

I think there is no value in adding the buildName label anymore.

> I think we need to keep makeLabels

I like this idea.

> then we could just set Pod.ObjectMeta.Labels = Build.ObjectMeta.Labels

👍
+	// probably be removed once we translate from TaskRun to Pod directly.
+	labels[buildNameLabelKey] = build.Name

cred, secrets, err := makeCredentialInitializer(build, kubeclient)
if err != nil {
return nil, err
@@ -298,18 +305,10 @@ func MakePod(build *v1alpha1.Build, kubeclient kubernetes.Interface) (*corev1.Po
// is deleted and re-created with the same name.
// We don't use GenerateName here because k8s fakes don't support it.
Name: fmt.Sprintf("%s-pod-%s", build.Name, gibberish),
-			// If our parent Build is deleted, then we should be as well.
-			OwnerReferences: []metav1.OwnerReference{
-				*metav1.NewControllerRef(build, schema.GroupVersionKind{
-					Group:   v1alpha1.SchemeGroupVersion.Group,
-					Version: v1alpha1.SchemeGroupVersion.Version,
-					Kind:    "Build",
-				}),
-			},
+			// If our parent TaskRun is deleted, then we should be as well.
+			OwnerReferences: build.OwnerReferences,
Annotations: annotations,
-			Labels: map[string]string{
-				buildNameLabelKey: build.Name,
-			},
+			Labels: labels,
},
Spec: corev1.PodSpec{
// If the build fails, don't restart it.
10 changes: 1 addition & 9 deletions pkg/reconciler/v1alpha1/taskrun/taskrun.go
@@ -430,14 +430,6 @@ func (c *Reconciler) createBuildPod(ctx context.Context, tr *v1alpha1.TaskRun, t
return nil, fmt.Errorf("translating Build to Pod: %v", err)
}

-	// Rewrite the pod's OwnerRef to point the TaskRun, instead of a
-	// non-existent build.
-	// TODO(jasonhall): Just set this directly when creating a Pod from a
-	// TaskRun.
-	pod.OwnerReferences = []metav1.OwnerReference{
Comment from @dwnusbaum (Contributor, Author), Feb 11, 2019:

I moved this into MakePod, but I'm not sure if that's OK, or if we are trying to keep the MakePod code in sync with upstream for now.

Reply from a Contributor:

So, the MakePod function is not used at all right now. We still rely on the knative/build API to create a Build instead of using the MakePod above.

Reply from @dwnusbaum (Contributor, Author), Feb 11, 2019:

Hmm, maybe I am misunderstanding something. As far as I can tell, we import the resources package containing pod.go from build-pipeline and use that version here, so as of #326 we aren't actually using knative/build, and in my local testing the changes I made to MakePod did affect the output (maybe only the test code is using the local version of MakePod?). The commit in #326 about forking pod.go instead of vendoring it seems to support my understanding. Is there some kind of flag enabling different behavior in testing vs. production usage, or am I missing a key branch in a code path somewhere?

Reply from a Contributor:

@tejal29: @imjasonh submitted a PR to remove the dependency on the Build API. There is still a code dependency, but TaskRun is not creating a Build CRD object anymore.

@dwnusbaum: There is no need to maintain code similarity with the Build code base. If it makes sense to pull the logic into a separate function, please feel free to do so.
-		*metav1.NewControllerRef(tr, groupVersionKind),
-	}

return c.KubeClientSet.CoreV1().Pods(tr.Namespace).Create(pod)
}

@@ -488,10 +480,10 @@ func createRedirectedBuild(ctx context.Context, bs *buildv1alpha1.BuildSpec, tr
// makeLabels constructs the labels we will apply to TaskRun resources.
func makeLabels(s *v1alpha1.TaskRun) map[string]string {
labels := make(map[string]string, len(s.ObjectMeta.Labels)+1)
-	labels[pipeline.GroupName+pipeline.TaskRunLabelKey] = s.Name
for k, v := range s.ObjectMeta.Labels {
labels[k] = v
}
+	labels[pipeline.GroupName+pipeline.TaskRunLabelKey] = s.Name
return labels
}
