Propagate labels from Pipeline/Task to PipelineRun/TaskRun
With this change, labels are propagated from Pipeline and Task to
PipelineRun and TaskRun, respectively, giving us full label propagation
along both chains: Pipeline -> PipelineRun -> TaskRun -> Pod, and
Task -> TaskRun -> Pod.

This commit also adds a label with key tekton.dev/task, containing the
name of the referenced Task, to all TaskRuns that refer to a Task via a
TaskRef (the label is not added to TaskRuns that use an embedded
TaskSpec).

Documentation related to labels was moved to labels.md in order
to avoid duplicating similar content across four other pages.

Fixes #501
dwnusbaum authored and knative-prow-robot committed Feb 20, 2019
1 parent d0e426a commit 69ade03
Showing 11 changed files with 281 additions and 119 deletions.
4 changes: 4 additions & 0 deletions docs/README.md
@@ -46,6 +46,10 @@ components:
- [`PipelineRun`](pipelineruns.md)
- [`PipelineResource`](resources.md)

Additional reference topics not related to a specific component:

- [Labels](labels.md)

## Try it out

- Follow along with [the tutorial](tutorial.md)
68 changes: 68 additions & 0 deletions docs/labels.md
@@ -0,0 +1,68 @@
# Labels

To make it easier to identify objects that are part of the same conceptual
pipeline, custom [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/)
set on resources used by Tekton Pipelines are propagated from more general to
more specific resources, and a few labels are added automatically to capture
the relationships between those resources.

---

- [Propagation Details](#propagation-details)
- [Automatically Added Labels](#automatically-added-labels)
- [Examples](#examples)

---

## Propagation Details

For `Pipelines` executed using a `PipelineRun`, labels are propagated
automatically from `Pipelines` to `PipelineRuns` to `TaskRuns` and then to
`Pods`. Additionally, labels from the `Tasks` referenced by `TaskRuns` are
propagated to the corresponding `TaskRuns` and then to `Pods`.

For `TaskRuns` executed directly, not as part of a `Pipeline`, labels are
propagated from the referenced `Task` (if one exists, see the [Specifying a `Task`](taskruns.md#specifying-a-task)
section of the `TaskRun` documentation) to the corresponding `TaskRun` and then
to the `Pod`.
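
For example, a custom label set on a `Pipeline` should end up on every `Pod`
created while executing it. A minimal sketch, assuming a hypothetical label
`myapp.example.com/release: v1` in the `Pipeline` metadata:

```shell
# The Pipeline's metadata carries a custom label:
#   labels:
#     myapp.example.com/release: v1
# After a PipelineRun executes it, the label propagates down to the Pods:
kubectl get pods -l myapp.example.com/release=v1
```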

## Automatically Added Labels

The following labels are added to resources automatically (a combined example
follows this list):

- `tekton.dev/pipeline` is added to `PipelineRuns` (and propagated to
`TaskRuns` and `Pods`), and contains the name of the `Pipeline` that the
`PipelineRun` references.
- `tekton.dev/pipelineRun` is added to `TaskRuns` (and propagated to `Pods`)
that are created automatically during the execution of a `PipelineRun`, and
contains the name of the `PipelineRun` that triggered the creation of the
`TaskRun`.
- `tekton.dev/task` is added to `TaskRuns` (and propagated to `Pods`) that
reference an existing `Task` (see the [Specifying a `Task`](taskruns.md#specifying-a-task)
section of the `TaskRun` documentation), and contains the name of the `Task`
that the `TaskRun` references.
- `tekton.dev/taskRun` is added to `Pods`, and contains the name of the
`TaskRun` that created the `Pod`.
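
Taken together, a `Pod` created while executing a `PipelineRun` whose `Tasks`
are referenced by name carries all four of these labels. As a sketch (with
`<pod-name>` standing in for a real `Pod` name), you could list them with:

```shell
# Print the Pod's labels, including the tekton.dev/* keys described above.
kubectl get pod <pod-name> --show-labels
```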

## Examples

- [Finding Pods for a Specific PipelineRun](#finding-pods-for-a-specific-pipelinerun)
- [Finding TaskRuns for a Specific Task](#finding-taskruns-for-a-specific-task)

### Finding Pods for a Specific PipelineRun

To find all `Pods` created by a `PipelineRun` named `test-pipelinerun`, you
could use the following command:

```shell
kubectl get pods --all-namespaces -l tekton.dev/pipelineRun=test-pipelinerun
```

### Finding TaskRuns for a Specific Task

To find all `TaskRuns` that reference a `Task` named `test-task`, you
could use the following command:

```shell
kubectl get taskruns --all-namespaces -l tekton.dev/task=test-task
```
21 changes: 0 additions & 21 deletions docs/pipelineruns.md
@@ -15,7 +15,6 @@ Creation of a `PipelineRun` will trigger the creation of
- [Syntax](#syntax)
- [Resources](#resources)
- [Service account](#service-account)
- [Labels](#labels)
- [Cancelling a PipelineRun](#cancelling-a-pipelinerun)
- [Examples](#examples)

@@ -98,26 +97,6 @@ of the `TaskRun` resource object.
For examples and more information about specifying service accounts, see the
[`ServiceAccount`](./auth.md) reference topic.

## Labels

Any labels specified in the metadata field of a `PipelineRun` will be propagated
to the `TaskRuns` created automatically for each `Task` in the `Pipeline` and
then to the `Pods` created for those `TaskRuns`. In addition, the following
labels will be added automatically:

- `tekton.dev/pipeline` will contain the name of the `Pipeline`
- `tekton.dev/pipelineRun` will contain the name of the `PipelineRun`

These labels make it easier to find the resources that are associated with a
given pipeline.

For example, to find all `Pods` created by a `Pipeline` named test-pipeline, you
could use the following command:

```shell
kubectl get pods --all-namespaces -l tekton.dev/pipeline=test-pipeline
```

## Cancelling a PipelineRun

In order to cancel a running pipeline (`PipelineRun`), you need to update its
25 changes: 0 additions & 25 deletions docs/taskruns.md
@@ -16,7 +16,6 @@ A `TaskRun` runs until all `steps` have completed or until a failure occurs.
- [Providing resources](#providing-resources)
- [Overriding where resources are copied from](#overriding-where-resources-are-copied-from)
- [Service Account](#service-account)
- [Labels](#labels)
- [Cancelling a TaskRun](#cancelling-a-taskrun)
- [Examples](#examples)

@@ -224,30 +223,6 @@ spec:
emptyDir: {}
```

## Labels

Any labels specified in the metadata field of a `TaskRun` will be propagated to
the `Pod` created to execute the `Task`. In addition, the following label will
be added automatically:

- `tekton.dev/taskRun` will contain the name of the `TaskRun`

If the `TaskRun` was created automatically by a `PipelineRun`, then the
following two labels will also be added to the `TaskRun` and `Pod`:

- `tekton.dev/pipeline` will contain the name of the `Pipeline`
- `tekton.dev/pipelineRun` will contain the name of the `PipelineRun`

These labels make it easier to find the resources that are associated with a
given `TaskRun`.

For example, to find all `Pods` created by a `TaskRun` named test-taskrun, you
could use the following command:

```shell
kubectl get pods --all-namespaces -l tekton.dev/taskRun=test-taskrun
```

## Cancelling a TaskRun

In order to cancel a running task (`TaskRun`), you need to update its spec to
1 change: 1 addition & 0 deletions pkg/apis/pipeline/register.go
@@ -19,6 +19,7 @@ package pipeline
// GroupName is the Kubernetes resource group name for Pipeline types.
const (
GroupName = "tekton.dev"
TaskLabelKey = "/task"
TaskRunLabelKey = "/taskRun"
PipelineLabelKey = "/pipeline"
PipelineRunLabelKey = "/pipelineRun"
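
These suffixes are joined with `GroupName` at their call sites, so
`pipeline.GroupName+pipeline.TaskLabelKey` yields the `tekton.dev/task` key
documented in labels.md. As a hedged illustration, the new label could be
surfaced as a column when listing `TaskRuns`:

```shell
# Show each TaskRun with the value of its tekton.dev/task label, if set.
kubectl get taskruns --all-namespaces -L tekton.dev/task
```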
36 changes: 33 additions & 3 deletions pkg/reconciler/v1alpha1/pipelinerun/pipelinerun.go
@@ -171,7 +171,7 @@ func (c *Reconciler) Reconcile(ctx context.Context, key string) error {
return err
}

// Reconcile this copy of the task run and then write back any status
// Reconcile this copy of the pipeline run and then write back any status or label
// updates regardless of whether the reconciliation errored out.
err = c.reconcile(ctx, pr)
if equality.Semantic.DeepEqual(original.Status, pr.Status) {
@@ -184,6 +184,15 @@ func (c *Reconciler) Reconcile(ctx context.Context, key string) error {
c.Recorder.Event(pr, corev1.EventTypeWarning, eventReasonFailed, "PipelineRun failed to update")
return err
}
// Since we are using the status subresource, it is not possible to update
// the status and labels simultaneously.
if !reflect.DeepEqual(original.ObjectMeta.Labels, pr.ObjectMeta.Labels) {
if _, err := c.updateLabels(pr); err != nil {
c.Logger.Warn("Failed to update PipelineRun labels", zap.Error(err))
c.Recorder.Event(pr, corev1.EventTypeWarning, eventReasonFailed, "PipelineRun failed to update labels")
return err
}
}

if err == nil {
c.Recorder.Event(pr, corev1.EventTypeNormal, eventReasonSucceeded, "PipelineRun reconciled successfully")
@@ -225,6 +234,15 @@ func (c *Reconciler) reconcile(ctx context.Context, pr *v1alpha1.PipelineRun) er
// Apply parameter templating from the PipelineRun
p = resources.ApplyParameters(p, pr)

// Propagate labels from Pipeline to PipelineRun.
if pr.ObjectMeta.Labels == nil {
pr.ObjectMeta.Labels = make(map[string]string, len(p.ObjectMeta.Labels)+1)
}
for key, value := range p.ObjectMeta.Labels {
pr.ObjectMeta.Labels[key] = value
}
pr.ObjectMeta.Labels[pipeline.GroupName+pipeline.PipelineLabelKey] = p.Name

pipelineState, err := resources.ResolvePipelineRun(
*pr,
func(name string) (v1alpha1.TaskInterface, error) {
@@ -360,11 +378,11 @@ func (c *Reconciler) createTaskRun(logger *zap.SugaredLogger, rprt *resources.Re
taskRunTimeout = nil
}

labels := make(map[string]string, len(pr.ObjectMeta.Labels)+2)
// Propagate labels from PipelineRun to TaskRun.
labels := make(map[string]string, len(pr.ObjectMeta.Labels)+1)
for key, val := range pr.ObjectMeta.Labels {
labels[key] = val
}
labels[pipeline.GroupName+pipeline.PipelineLabelKey] = pr.Spec.PipelineRef.Name
labels[pipeline.GroupName+pipeline.PipelineRunLabelKey] = pr.Name

tr := &v1alpha1.TaskRun{
@@ -404,6 +422,18 @@ func (c *Reconciler) updateStatus(pr *v1alpha1.PipelineRun) (*v1alpha1.PipelineR
return newPr, nil
}

func (c *Reconciler) updateLabels(pr *v1alpha1.PipelineRun) (*v1alpha1.PipelineRun, error) {
newPr, err := c.pipelineRunLister.PipelineRuns(pr.Namespace).Get(pr.Name)
if err != nil {
return nil, fmt.Errorf("Error getting PipelineRun %s when updating labels: %s", pr.Name, err)
}
if !reflect.DeepEqual(pr.ObjectMeta.Labels, newPr.ObjectMeta.Labels) {
newPr.ObjectMeta.Labels = pr.ObjectMeta.Labels
return c.PipelineClientSet.TektonV1alpha1().PipelineRuns(pr.Namespace).Update(newPr)
}
return newPr, nil
}

// isDone returns true if the PipelineRun's status indicates the build is done.
func isDone(status *v1alpha1.PipelineRunStatus) bool {
return !status.GetCondition(duckv1alpha1.ConditionSucceeded).IsUnknown()
28 changes: 15 additions & 13 deletions pkg/reconciler/v1alpha1/taskrun/resources/taskspec.go
@@ -20,6 +20,7 @@ import (
"fmt"

"github.com/knative/build-pipeline/pkg/apis/pipeline/v1alpha1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// GetTask is a function used to retrieve Tasks.
@@ -28,24 +29,25 @@ type GetTask func(string) (v1alpha1.TaskInterface, error)
// GetClusterTask is a function that will retrieve the Task from name and namespace.
type GetClusterTask func(name string) (v1alpha1.TaskInterface, error)

// GetTaskSpec will retrieve the Task Spec associated with the provided TaskRun. This can come from a
// reference Task or from an embedded Task spec.
func GetTaskSpec(taskRunSpec *v1alpha1.TaskRunSpec, taskRunName string, getTask GetTask) (*v1alpha1.TaskSpec, string, error) {
// GetTaskData will retrieve the Task metadata and Spec associated with the
// provided TaskRun. This can come from a reference Task or from the TaskRun's
// metadata and embedded TaskSpec.
func GetTaskData(taskRun *v1alpha1.TaskRun, getTask GetTask) (*metav1.ObjectMeta, *v1alpha1.TaskSpec, error) {
taskMeta := metav1.ObjectMeta{}
taskSpec := v1alpha1.TaskSpec{}
taskName := ""
if taskRunSpec.TaskRef != nil && taskRunSpec.TaskRef.Name != "" {
if taskRun.Spec.TaskRef != nil && taskRun.Spec.TaskRef.Name != "" {
// Get related task for taskrun
t, err := getTask(taskRunSpec.TaskRef.Name)
t, err := getTask(taskRun.Spec.TaskRef.Name)
if err != nil {
return nil, taskName, fmt.Errorf("error when listing tasks for taskRun %s %v", taskRunName, err)
return nil, nil, fmt.Errorf("error when listing tasks for taskRun %s %v", taskRun.Name, err)
}
taskMeta = t.TaskMetadata()
taskSpec = t.TaskSpec()
taskName = t.TaskMetadata().Name
} else if taskRunSpec.TaskSpec != nil {
taskSpec = *taskRunSpec.TaskSpec
taskName = taskRunName
} else if taskRun.Spec.TaskSpec != nil {
taskMeta = taskRun.ObjectMeta
taskSpec = *taskRun.Spec.TaskSpec
} else {
return &taskSpec, taskName, fmt.Errorf("TaskRun %s not providing TaskRef or TaskSpec", taskRunName)
return nil, nil, fmt.Errorf("TaskRun %s not providing TaskRef or TaskSpec", taskRun.Name)
}
return &taskSpec, taskName, nil
return &taskMeta, &taskSpec, nil
}
62 changes: 41 additions & 21 deletions pkg/reconciler/v1alpha1/taskrun/resources/taskspec_test.go
@@ -36,20 +36,25 @@ func TestGetTaskSpec_Ref(t *testing.T) {
}},
},
}
spec := &v1alpha1.TaskRunSpec{
TaskRef: &v1alpha1.TaskRef{
Name: "orchestrate",
tr := &v1alpha1.TaskRun{
ObjectMeta: metav1.ObjectMeta{
Name: "mytaskrun",
},
Spec: v1alpha1.TaskRunSpec{
TaskRef: &v1alpha1.TaskRef{
Name: "orchestrate",
},
},
}
gt := func(n string) (v1alpha1.TaskInterface, error) { return task, nil }
taskSpec, name, err := GetTaskSpec(spec, "mytaskrun", gt)
taskMeta, taskSpec, err := GetTaskData(tr, gt)

if err != nil {
t.Fatalf("Did not expect error getting task spec but got: %s", err)
}

if name != "orchestrate" {
t.Errorf("Expected task name to be `orchestrate` but was %q", name)
if taskMeta.Name != "orchestrate" {
t.Errorf("Expected task name to be `orchestrate` but was %q", taskMeta.Name)
}

if len(taskSpec.Steps) != 1 || taskSpec.Steps[0].Name != "step1" {
@@ -58,21 +63,27 @@ }
}

func TestGetTaskSpec_Embedded(t *testing.T) {
spec := &v1alpha1.TaskRunSpec{
TaskSpec: &v1alpha1.TaskSpec{
Steps: []corev1.Container{{
Name: "step1",
}},
}}
tr := &v1alpha1.TaskRun{
ObjectMeta: metav1.ObjectMeta{
Name: "mytaskrun",
},
Spec: v1alpha1.TaskRunSpec{
TaskSpec: &v1alpha1.TaskSpec{
Steps: []corev1.Container{{
Name: "step1",
}},
},
},
}
gt := func(n string) (v1alpha1.TaskInterface, error) { return nil, fmt.Errorf("shouldn't be called") }
taskSpec, name, err := GetTaskSpec(spec, "mytaskrun", gt)
taskMeta, taskSpec, err := GetTaskData(tr, gt)

if err != nil {
t.Fatalf("Did not expect error getting task spec but got: %s", err)
}

if name != "mytaskrun" {
t.Errorf("Expected task name for embedded task to default to name of task run but was %q", name)
if taskMeta.Name != "mytaskrun" {
t.Errorf("Expected task name for embedded task to default to name of task run but was %q", taskMeta.Name)
}

if len(taskSpec.Steps) != 1 || taskSpec.Steps[0].Name != "step1" {
@@ -81,22 +92,31 @@ func TestGetTaskSpec_Embedded(t *testing.T) {
}

func TestGetTaskSpec_Invalid(t *testing.T) {
spec := &v1alpha1.TaskRunSpec{}
tr := &v1alpha1.TaskRun{
ObjectMeta: metav1.ObjectMeta{
Name: "mytaskrun",
},
}
gt := func(n string) (v1alpha1.TaskInterface, error) { return nil, fmt.Errorf("shouldn't be called") }
_, _, err := GetTaskSpec(spec, "mytaskrun", gt)
_, _, err := GetTaskData(tr, gt)
if err == nil {
t.Fatalf("Expected error resolving spec with no embedded or referenced task spec but didn't get error")
}
}

func TestGetTaskSpec_Error(t *testing.T) {
spec := &v1alpha1.TaskRunSpec{
TaskRef: &v1alpha1.TaskRef{
Name: "orchestrate",
tr := &v1alpha1.TaskRun{
ObjectMeta: metav1.ObjectMeta{
Name: "mytaskrun",
},
Spec: v1alpha1.TaskRunSpec{
TaskRef: &v1alpha1.TaskRef{
Name: "orchestrate",
},
},
}
gt := func(n string) (v1alpha1.TaskInterface, error) { return nil, fmt.Errorf("something went wrong") }
_, _, err := GetTaskSpec(spec, "mytaskrun", gt)
_, _, err := GetTaskData(tr, gt)
if err == nil {
t.Fatalf("Expected error when unable to find referenced Task but got none")
}