diff --git a/README.md b/README.md index d90f75417a7..b471e9e4a61 100644 --- a/README.md +++ b/README.md @@ -1,8 +1,8 @@ -# ![pipe](./docs/images/pipe.png) Pipeline CRD +# ![pipe](./pipe.png) Pipelines [![Go Report Card](https://goreportcard.com/badge/knative/build-pipeline)](https://goreportcard.com/report/knative/build-pipeline) -The Pipeline CRD provides k8s-style resources for declaring CI/CD-style +The Pipeline project provides k8s-style resources for declaring CI/CD-style pipelines. Pipelines are **Cloud Native**: @@ -27,8 +27,7 @@ Pipelines are **Typed**: ## Want to start using Pipelines? - Jump in with [the tutorial!](docs/tutorial.md) -- [Learn about the Concepts](/docs/Concepts.md) -- [Read about how to use it](/docs/using.md) +- [Read about it](/docs/README.md) - Look at [some examples](/examples) ## Want to contribute? diff --git a/docs/Concepts.md b/docs/Concepts.md deleted file mode 100644 index ba6ec374c67..00000000000 --- a/docs/Concepts.md +++ /dev/null @@ -1,187 +0,0 @@ -# Pipeline CRDs - -Pipeline CRDs is an open source implementation to configure and run CI/CD style -pipelines for your Kubernetes application. - -Pipeline CRDs creates -[Custom Resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) -as building blocks to declare pipelines. - -A custom resource is an extension of Kubernetes API which can create a custom -[Kubernetes Object](https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#understanding-kubernetes-objects). -Once a custom resource is installed, users can create and access its objects -with kubectl, just as they do for built-in resources like pods, deployments etc. -These resources run on-cluster and are implemented by -[Kubernetes Custom Resource Definition (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions). - -High level details of this design: - -- [Pipelines](#pipeline) do not know what will trigger them, they can be - triggered by events or by manually creating [PipelineRuns](#pipelinerun) -- [Tasks](#task) can exist and be invoked completely independently of - [Pipelines](#pipeline); they are highly cohesive and loosely coupled -- [Tasks](#task) can depend on artifacts, output and parameters created by other - tasks. -- [PipelineResources](#pipelineresources) are the artifacts used as inputs and - outputs of Tasks. - -## Building Blocks of Pipeline CRDs - -Below diagram lists the main custom resources created by Pipeline CRDs: - -- [`Task`](#task) -- [`ClusterTask`](#clustertask) -- [`Pipeline`](#pipeline) -- [Runs](#runs) - - [`PipelineRun`](#pipelinerun) - - [`TaskRun`](#taskrun) -- [`PipelineResources`](#pipelineresources) - -![Building Blocks](./images/building-blocks.png) - -### Task - -A `Task` is a collection of sequential steps you would want to run as part of -your continuous integration flow. A task will run inside a container on your -cluster. - -A `Task` declares: - -- [Inputs](#inputs) -- [Outputs](#outputs) -- [Steps](#steps) - -#### Inputs - -Declare the inputs the `Task` needs. Every `Task` input resource should provide -name and type (like git, image). - -#### Outputs - -Outputs declare the outputs `Task` will produce. - -#### Steps - -Steps is a sequence of steps to execute. Each step is -[a container image](./using.md#image-contract). - -Here is an example simple `Task` definition which echoes "hello world". The -`hello-world` task does not define any inputs or outputs. 
- -It only has one step named `echo`. The step uses the builder image `busybox` -whose entrypoint set to `/bin/sh`. - -```yaml -apiVersion: pipeline.knative.dev/v1alpha1 -kind: Task -metadata: - name: hello-world - namespace: default -spec: - steps: - - name: echo - image: busybox - command: - - echo - args: - - "hello world!" -``` - -Examples of `Task` definitions with inputs and outputs are [here](../examples) - -#### Cluster Task - -A `ClusterTask` is similar to `Task` but with a cluster-wide scope. Cluster -Tasks are available in all namespaces, typically used to conveniently provide -commonly used tasks to users. - -#### Pipeline - -A `Pipeline` describes a graph of [Tasks](#Task) to execute. - -Below, is a simple pipeline which runs `hello-world-task` twice one after the -other. - -In this `echo-hello-twice` pipeline, there are two named tasks; -`hello-world-first` and `hello-world-again`. - -Both the tasks, refer to [`Task`](#Task) `hello-world` mentioned in `taskRef` -config. - -```yaml -apiVersion: pipeline.knative.dev/v1alpha1 -kind: Pipeline -metadata: - name: echo-hello-twice - namespace: default -spec: - tasks: - - name: hello-world-first - taskRef: - name: hello-world - - name: hello-world-again - taskRef: - name: hello-world -``` - -Examples of more complex `Pipelines` are [in our examples dir](../examples/). - -#### PipelineResources - -`PipelinesResources` in a pipeline are the set of objects that are going to be -used as inputs to a [`Task`](#Task) and can be output by a [`Task`](#Task). - -A [`Task`] can have multiple inputs and outputs. - -For example: - -- A Task's input could be a GitHub source which contains your application code. -- A Task's output can be your application container image which can be then - deployed in a cluster. -- A Task's output can be a jar file to be uploaded to a storage bucket. - -Read more on PipelineResources and their types -[here](./using.md#creating-pipelineresources). - -`PipelineResources` in a Pipeline are the set of objects that are going to be -used as inputs and outputs of a `Task`. - -#### Runs - -To invoke a [`Pipeline`](#pipeline) or a [`Task`](#task), you must create a -corresponding `Run` object: - -- [TaskRun](#taskrun) -- [PipelineRun](#pipelinerun) - -##### TaskRun - -Creating a `TaskRun` invokes a [Task](#task), running all of the steps until -completion or failure. Creating a `TaskRun` requires satisfying all of the input -requirements of the `Task`. - -`TaskRun` definition includes `inputs`, `outputs` for `Task` referred in spec. - -Input and output resources include the PipelineResource's name in the `Task` -spec and a reference to the actual `PipelineResource` that should be used. - -`TaskRuns` can be created directly by a user or by a -[PipelineRun](#pipelinerun). - -##### PipelineRun - -Creating a `PipelineRun` invokes the pipeline, creating [TaskRuns](#taskrun) for -each task in the pipeline. - -A `PipelineRun` ties together: - -- A [Pipeline](#pipeline) -- The [PipelineResources](#pipelineresources) to use for each [Task](#task) -- Which **serviceAccount** to use (provided to all tasks) -- Where **results** are stored (e.g. in GCS) - -A `PipelineRun` could be created: - -- By a user manually -- In response to an event (e.g. 
in response to a GitHub event, possibly - processed via [Knative eventing](https://github.com/knative/eventing)) diff --git a/docs/README.md b/docs/README.md new file mode 100644 index 00000000000..4b7b08f39c7 --- /dev/null +++ b/docs/README.md @@ -0,0 +1,64 @@ +# Knative Pipelines + +Pipelines is an open source implementation to configure and run CI/CD style +pipelines for your Kubernetes application. + +Pipeline creates +[Custom Resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) +as building blocks to declare pipelines. + +A custom resource is an extension of Kubernetes API which can create a custom +[Kubernetes Object](https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#understanding-kubernetes-objects). +Once a custom resource is installed, users can create and access its objects +with kubectl, just as they do for built-in resources like pods, deployments etc. +These resources run on-cluster and are implemented by +[Kubernetes Custom Resource Definition (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions). + +High level details of this design: + +- [Pipelines](pipelines.md) do not know what will trigger them, they can be + triggered by events or by manually creating [PipelineRuns](pipelineruns.md) +- [Tasks](tasks.md) can exist and be invoked completely independently of + [Pipelines](pipelines.md); they are highly cohesive and loosely coupled +- [Tasks](tasks.md) can depend on artifacts, output and parameters created by other + tasks. +- [Tasks](tasks.md) can be invoked via [TaskRuns](taskruns.md) +- [PipelineResources](#pipelineresources) are the artifacts used as inputs and + outputs of Tasks. + +## Usage + +- [How do I create a new Pipeline?](pipelines.md) +- [How do I make a Task?](tasks.md) +- [How do I make Resources?](resources.md) +- [How do I control auth?](auth.md) +- [How do I run a Pipeline?](pipelineruns.md) +- [How do I run a Task on its own?](taskruns.md) + +## Learn more + +See the following reference topics for information about each of the build +components: + +- [`Task`](tasks.md) +- [`TaskRun`](taskrun.md) +- [`Pipeline`](https://github.com/knative/docs/blob/master/pipeline/pipeline.md) +- [`PipelineRun`](https://github.com/knative/docs/blob/master/pipeline/pipelinerun.md) +- [`PipelineResource`](https://github.com/knative/docs/blob/master/pipeline/pipelineresource.md) + +## Try it out + +* Follow along with [the tutorial](tutorial.md) +* Look at [the examples](https://github.com/knative/build-pipeline/tree/master/examples) + +## Related info + +If you are interested in contributing to the Knative Build project, see the +[Knative Pipeline code repository](https://github.com/knative/build-pipeline). + +--- + +Except as otherwise noted, the content of this page is licensed under the +[Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/), +and code samples are licensed under the +[Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0). diff --git a/docs/auth.md b/docs/auth.md new file mode 100644 index 00000000000..1d6467402aa --- /dev/null +++ b/docs/auth.md @@ -0,0 +1,382 @@ +# Authentication + +This document defines how authentication is provided during execution of a +`TaskRun` or a `PipelineRun` (referred to as `Runs` in this document). 
+ +The build system supports two types of authentication, using Kubernetes' +first-class `Secret` types: + +- `kubernetes.io/basic-auth` +- `kubernetes.io/ssh-auth` + +Secrets of these types can be made available to the `Run` by attaching them to +the `ServiceAccount` as which it runs. + +### Exposing credentials + +In their native form, these secrets are unsuitable for consumption by Git and +Docker. For Git, they need to be turned into (some form of) `.gitconfig`. For +Docker, they need to be turned into a `~/.docker/config.json` file. Also, while +each of these supports has multiple credentials for multiple domains, those +credentials typically need to be blended into a single canonical keyring. + +To solve this, before any `PipelineResources` are retrieved, all `pods` execute +a credential initialization process that accesses each of its secrets and +aggregates them into their respective files in `$HOME`. + +## SSH authentication (Git) + +1. Define a `Secret` containing your SSH private key (in `secret.yaml`): + + ```yaml + apiVersion: v1 + kind: Secret + metadata: + name: ssh-key + annotations: + pipeline.knative.dev/git-0: https://github.com # Described below + type: kubernetes.io/ssh-auth + data: + ssh-privatekey: + # This is non-standard, but its use is encouraged to make this more secure. + known_hosts: + ``` + `pipeline.knative.dev/git-0` in the example above specifies which web address + these credentials belong to. See + [Guiding Credential Selection](#guiding-credential-selection) below for + more information. + +1. Generate the value of `ssh-privatekey` by copying the value of (for example) + `cat ~/.ssh/id_rsa | base64`. + +1. Copy the value of `cat ~/.ssh/known_hosts | base64` to the `known_hosts` + field. + +1. Next, direct a `ServiceAccount` to use this `Secret` (in `serviceaccount.yaml`): + + ```yaml + apiVersion: v1 + kind: ServiceAccount + metadata: + name: build-bot + secrets: + - name: ssh-key + ``` + +1. Then use that `ServiceAccount` in your `TaskRun` (in `run.yaml`): + + ```yaml + apiVersion: pipeline.knative.dev/v1alpha1 + kind: TaskRun + metadata: + name: build-push-task-run-2 + spec: + serviceAccount: buid-bot + taskRef: + name: build-push + ``` + +1. Or use that `ServiceAccount` in your `PipelineRun` (in `run.yaml`): + + ```yaml + apiVersion: pipeline.knative.dev/v1alpha1 + kind: PipelineRun + metadata: + name: demo-pipeline + namespace: default + spec: + serviceAccount: build-bot + pipelineRef: + name: demo-pipeline + ``` + +1. Execute the `Run`: + + ```shell + kubectl apply --filename secret.yaml serviceaccount.yaml run.yaml + ``` + +When the `Run` executes, before steps execute, a `~/.ssh/config` will be +generated containing the key configured in the `Secret`. This key is then used +to authenticate when retrieving any `PipelineResources`. + +## Basic authentication (Git) + +1. Define a `Secret` containing the username and password that the `Run` should + use to authenticate to a Git repository (in `secret.yaml`): + + ```yaml + apiVersion: v1 + kind: Secret + metadata: + name: basic-user-pass + annotations: + pipeline.knative.dev/git-0: https://github.com # Described below + type: kubernetes.io/basic-auth + stringData: + username: + password: + ``` + `pipeline.knative.dev/git-0` in the example above specifies which web address + these credentials belong to. See + [Guiding Credential Selection](#guiding-credential-selection) below for + more information. + +1. 
Next, direct a `ServiceAccount` to use this `Secret` (in `serviceaccount.yaml`): + + ```yaml + apiVersion: v1 + kind: ServiceAccount + metadata: + name: build-bot + secrets: + - name: basic-user-pass + ``` + +1. Then use that `ServiceAccount` in your `TaskRun` (in `run.yaml`): + + ```yaml + apiVersion: pipeline.knative.dev/v1alpha1 + kind: TaskRun + metadata: + name: build-push-task-run-2 + spec: + serviceAccount: buid-bot + taskRef: + name: build-push + ``` + +1. Or use that `ServiceAccount` in your `PipelineRun` (in `run.yaml`): + + ```yaml + apiVersion: pipeline.knative.dev/v1alpha1 + kind: PipelineRun + metadata: + name: demo-pipeline + namespace: default + spec: + serviceAccount: build-bot + pipelineRef: + name: demo-pipeline + ``` + +1. Execute the `Run`: + + ```shell + kubectl apply --filename secret.yaml serviceaccount.yaml run.yaml + ``` + +When this `Run` executes, before steps execute, a `~/.gitconfig` will be +generated containing the credentials configured in the `Secret`, and these +credentials are then used to authenticate when retrieving any `PipelineResources`. + +## Basic authentication (Docker) + +1. Define a `Secret` containing the username and password that the build should + use to authenticate to a Docker registry (in `secret.yaml`): + + ```yaml + apiVersion: v1 + kind: Secret + metadata: + name: basic-user-pass + annotations: + pipeline.knative.dev/docker-0: https://gcr.io # Described below + type: kubernetes.io/basic-auth + stringData: + username: + password: + ``` + `pipeline.knative.dev/docker-0` in the example above specifies which web + address these credentials belong to. See + [Guiding Credential Selection](#guiding-credential-selection) below for + more information. + +1. Next, direct a `ServiceAccount` to use this `Secret` (in `serviceaccount.yaml`): + + ```yaml + apiVersion: v1 + kind: ServiceAccount + metadata: + name: build-bot + secrets: + - name: basic-user-pass + ``` + +1. Then use that `ServiceAccount` in your `TaskRun` (in `run.yaml`): + + ```yaml + apiVersion: pipeline.knative.dev/v1alpha1 + kind: TaskRun + metadata: + name: build-push-task-run-2 + spec: + serviceAccount: buid-bot + taskRef: + name: build-push + ``` + +1. Or use that `ServiceAccount` in your `PipelineRun` (in `run.yaml`): + + ```yaml + apiVersion: pipeline.knative.dev/v1alpha1 + kind: PipelineRun + metadata: + name: demo-pipeline + namespace: default + spec: + serviceAccount: build-bot + pipelineRef: + name: demo-pipeline + ``` + +1. Execute the `Run`: + + ```shell + kubectl apply --filename secret.yaml serviceaccount.yaml run.yaml + ``` + +When the `Run` executes, before steps execute, a `~/.docker/config.json` will +be generated containing the credentials configured in the `Secret`, and these +credentials are then used to authenticate when retrieving any `PipelineResources`. + +### Guiding credential selection + +A `Run` might require many different types of authentication. For instance, a +`Run` might require access to multiple private Git repositories, and access to +many private Docker repositories. 
You can use annotations to guide which secret +to use to authenticate to different resources, for example: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + annotations: + pipeline.knative.dev/git-0: https://github.com + pipeline.knative.dev/git-1: https://gitlab.com + pipeline.knative.dev/docker-0: https://gcr.io +type: kubernetes.io/basic-auth +stringData: + username: + password: +``` + +This describes a "Basic Auth" (username and password) secret that should be used +to access Git repos at github.com and gitlab.com, as well as Docker repositories +at gcr.io. + +Similarly, for SSH: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + annotations: + piepline.knative.dev/git-0: github.com +type: kubernetes.io/ssh-auth +data: + ssh-privatekey: + # This is non-standard, but its use is encouraged to make this more secure. + # Omitting this results in the use of ssh-keyscan (see below). + known_hosts: +``` + +This describes an SSH key secret that should be used to access Git repos at +github.com only. + +Credential annotation keys must begin with `pipeline.knative.dev/docker-` or +`pipeline.knative.dev/git-`, and the value describes the URL of the host with which +to use the credential. + +## Implementation details + +### Docker `basic-auth` + +Given URLs, usernames, and passwords of the form: `https://url{n}.com`, +`user{n}`, and `pass{n}`, generate the following for Docker: + +```json +=== ~/.docker/config.json === +{ + "auths": { + "https://url1.com": { + "auth": "$(echo -n user1:pass1 | base64)", + "email": "not@val.id", + }, + "https://url2.com": { + "auth": "$(echo -n user2:pass2 | base64)", + "email": "not@val.id", + }, + ... + } +} +``` + +Docker doesn't support `kubernetes.io/ssh-auth`, so annotations on these types +are ignored. + +### Git `basic-auth` + +Given URLs, usernames, and passwords of the form: `https://url{n}.com`, +`user{n}`, and `pass{n}`, generate the following for Git: + +``` +=== ~/.gitconfig === +[credential] + helper = store +[credential "https://url1.com"] + username = "user1" +[credential "https://url2.com"] + username = "user2" +... +=== ~/.git-credentials === +https://user1:pass1@url1.com +https://user2:pass2@url2.com +... +``` + +### Git `ssh-auth` + +Given hostnames, private keys, and `known_hosts` of the form: `url{n}.com`, +`key{n}`, and `known_hosts{n}`, generate the following for Git: + +``` +=== ~/.ssh/id_key1 === +{contents of key1} +=== ~/.ssh/id_key2 === +{contents of key2} +... +=== ~/.ssh/config === +Host url1.com + HostName url1.com + IdentityFile ~/.ssh/id_key1 +Host url2.com + HostName url2.com + IdentityFile ~/.ssh/id_key2 +... +=== ~/.ssh/known_hosts === +{contents of known_hosts1} +{contents of known_hosts2} +... +``` + +Note: Because `known_hosts` is a non-standard extension of +`kubernetes.io/ssh-auth`, when it is not present this will be generated through +`ssh-keygen url{n}.com` instead. + +### Least privilege + +The secrets as outlined here will be stored into `$HOME` (by convention the +volume: `/builder/home`), and will be available to `Source` and all `Steps`. + +For sensitive credentials that should not be made available to some steps, do +not use the mechanisms outlined here. Instead, the user should declare an +explicit `Volume` from the `Secret` and manually `VolumeMount` it into the +`Step`. 
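
For illustration, here is a minimal sketch of that approach: a `Task` that mounts a
`Secret` into one step through an explicit volume, so the credential is only visible
at that step's mount path rather than being aggregated into `$HOME`. The `Secret`
name (`deploy-secret`), its key (`deploy-token`), and the mount path are hypothetical
placeholders, not part of any convention.

```yaml
apiVersion: pipeline.knative.dev/v1alpha1
kind: Task
metadata:
  name: scoped-secret-example
spec:
  steps:
    # Only this step mounts the volume, so only it can read the credential.
    - name: deploy
      image: ubuntu
      command: ["/bin/bash"]
      args: ["-c", "cat /var/secret/deploy-token"]
      volumeMounts:
        - name: deploy-creds
          mountPath: /var/secret
    # This step declares no volumeMount and never sees the secret.
    - name: report
      image: ubuntu
      command: ["/bin/bash"]
      args: ["-c", "echo deployment reported"]
  volumes:
    - name: deploy-creds
      secret:
        secretName: deploy-secret # assumed to exist; not attached to the ServiceAccount
```

Because `deploy-secret` is not listed in the `ServiceAccount`'s `secrets`, the
credential initialization process described above never copies it into `$HOME`.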
+ +--- + +Except as otherwise noted, the content of this page is licensed under the +[Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/), +and code samples are licensed under the +[Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0). diff --git a/docs/container-contract.md b/docs/container-contract.md new file mode 100644 index 00000000000..54fb0158187 --- /dev/null +++ b/docs/container-contract.md @@ -0,0 +1,68 @@ +# Container Contract + +Each container image used as a step in a [`Task`](task.md) must comply with a +specific contract. + +## Entrypoint + +When containers are run in a `Task`, the `entrypoint` of the container will be +overwritten with a custom binary that redirects the logs to a separate location +for aggregating the log output. As such, it is always recommended to explicitly +specify a command. + +When `command` is not explicitly set, the controller will attempt to lookup the +entrypoint from the remote registry. + +Due to this metadata lookup, if you use a private image as a step inside a +`Task`, the build-pipeline controller needs to be able to access that registry. +The simplest way to accomplish this is to add a `.docker/config.json` at +`$HOME/.docker/config.json`, which will then be used by the controller when +performing the lookup + +For example, in the following Task with the images, +`gcr.io/cloud-builders/gcloud` and `gcr.io/cloud-builders/docker`, the +entrypoint would be resolved from the registry, resulting in the tasks running +`gcloud` and `docker` respectively. + +```yaml +spec: + steps: + - image: gcr.io/cloud-builders/gcloud + command: [gcloud] + - image: gcr.io/cloud-builders/docker + command: [docker] +``` + +However, if the steps specified a custom `command`, that is what would be used. + +```yaml +spec: + steps: + - image: gcr.io/cloud-builders/gcloud + command: + - bash + - -c + - echo "Hello!" +``` + +You can also provide `args` to the image's `command`: + +```yaml +steps: + - image: ubuntu + command: ["/bin/bash"] + args: ["-c", "echo hello $FOO"] + env: + - name: "FOO" + value: "world" +``` + +_See [the installation guide](installing.md) if you would like to +[configure the entrypoint image](installing.md#configure-entrypoint-image)._ + +--- + +Except as otherwise noted, the content of this page is licensed under the +[Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/), +and code samples are licensed under the +[Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0). diff --git a/docs/images/building-blocks.png b/docs/images/building-blocks.png deleted file mode 100644 index 8e5c800a1dc..00000000000 Binary files a/docs/images/building-blocks.png and /dev/null differ diff --git a/docs/pipelineruns.md b/docs/pipelineruns.md new file mode 100644 index 00000000000..9ad6eb4d42d --- /dev/null +++ b/docs/pipelineruns.md @@ -0,0 +1,107 @@ +# PipelineRuns + +This document defines `PipelineRuns` and their capabilities. + +On its own, a [`Pipeline`](pipelines.md) declares what [`Tasks`](tasks.md) to +run, and dependencies between [`Task`](tasks.md) inputs and outputs via +[`from`](pipelines.md#from). To execute the `Tasks` in the `Pipeline`, you +must create a `PipelineRun`. + +Creation of a `PipelineRun` will trigger the creation of +[`TaskRuns`](taskruns.md) for each `Task` in your pipeline. 
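
For orientation, a minimal `PipelineRun` might look like the sketch below. It assumes
a `Pipeline` named `demo-pipeline` already exists in the namespace; the `resources`
and `serviceAccount` fields described later in this document are omitted for brevity.

```yaml
apiVersion: pipeline.knative.dev/v1alpha1
kind: PipelineRun
metadata:
  name: demo-pipeline-run-1
spec:
  pipelineRef:
    name: demo-pipeline # the Pipeline to execute (assumed to exist)
  trigger:
    type: manual # this run was created by hand rather than by an event
```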
+ +--- + +- [Syntax](#syntax) + - [Resources](#resources) + - [Service account](#service-account) +- [Cancelling a PipelineRun](#cancelling-a-pipelinerun) +- [Examples](#examples) + +## Syntax + +To define a configuration file for a `PipelineRun` resource, you can specify the +following fields: + +- Required: + - [`apiVersion`][kubernetes-overview] - Specifies the API version, for example + `pipeline.knative.dev/v1alpha1`. + - [`kind`][kubernetes-overview] - Specify the `PipelineRun` resource object. + - [`metadata`][kubernetes-overview] - Specifies data to uniquely identify the + `PipelineRun` resource object, for example a `name`. + - [`spec`][kubernetes-overview] - Specifies the configuration information for + your `PipelineRun` resource object. + - `pipelineRef` or `taskSpec`- Specifies the [`Pipeline`](pipelines.md) you want + to run. + - `trigger` - Provides data about what created this `PipelineRun`. The only type + at this time is `manual`. +- Optional: + - [`resources`](#resources) - Specifies which [`PipelineResources`](resources.md) + to use for this `PipelineRun`. + - [`serviceAccount`](#service-account) - Specifies a `ServiceAccount` + resource object that enables your build to run with the defined + authentication information. + +[kubernetes-overview]: + https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields + + +### Resources + +When running a [`Pipeline`](pipelines.md), you will need to specify the +[`PipelineResources`](resources.md) to use with it. One `Pipeline` may +need to be run with different `PipelineResources` in cases such as: + +- When triggering the run of a `Pipeline` against a pull request, the triggering + system must specify the commitish of a git `PipelineResource` to use +- When invoking a `Pipeline` manually against one's own setup, one will need to + ensure that one's own GitHub fork (via the git `PipelineResource`), image + registry (via the image `PipelineResource`) and Kubernetes cluster (via the + cluster `PipelineResource`). + +Specify the `PipelineResources` in the PipelineRun using the `resources` section +in the `PipelineRun` spec, for example: + +```yaml +spec: + resources: + - name: source-repo + resourceRef: + name: skaffold-git + - name: web-image + resourceRef: + name: skaffold-image-leeroy-web + - name: app-image + resourceRef: + name: skaffold-image-leeroy-app +``` + +### Service Account + +Specifies the `name` of a `ServiceAccount` resource object. Use the +`serviceAccount` field to run your `Pipeline` with the privileges of the +specified service account. If no `serviceAccount` field is specified, your +resulting `TaskRuns` run using the +[`default` service account](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server) +that is in the +[namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) +of the `TaskRun` resource object. + +For examples and more information about specifying service accounts, see the +[`ServiceAccount`](./auth.md) reference topic. + +## Cancelling a PipelineRun + +In order to cancel a running pipeline (`PipelineRun`), you need to update its +spec to mark it as cancelled. Related `TaskRun` instances will be marked as +cancelled and running Pods will be deleted. 
+ +```yaml +apiVersion: pipeline.knative.dev/v1alpha1 +kind: PipelineRun +metadata: + name: go-example-git +spec: + # […] + status: "PipelineRunCancelled" +``` \ No newline at end of file diff --git a/docs/pipelines.md b/docs/pipelines.md new file mode 100644 index 00000000000..0dc5ce2bbbb --- /dev/null +++ b/docs/pipelines.md @@ -0,0 +1,149 @@ +# Pipelines + +This document defines `Pipelines` and their capabilities. + +--- + +- [Syntax](#syntax) + - [Declared resources](#declared-resources) + - [Pipeline Tasks](#pipeline-tasks) + - [From](#from) +- [Examples](#examples) + +## Syntax + +To define a configuration file for a `Pipeline` resource, you can specify the +following fields: + +- Required: + - [`apiVersion`][kubernetes-overview] - Specifies the API version, for example + `pipeline.knative.dev/v1alpha1`. + - [`kind`][kubernetes-overview] - Specify the `Pipeline` resource object. + - [`metadata`][kubernetes-overview] - Specifies data to uniquely identify the + `Pipeline` resource object, for example a `name`. + - [`spec`][kubernetes-overview] - Specifies the configuration information for + your `Pipeline` resource object. In order for a `Pipeline` to do anything, the + spec must include: + - [`tasks`](#pipeline-tasks) - Specifies which `Tasks` to run and how to run them +- Optional: + - [`resources`](#declared-resources) - Specifies which [`PipelineResources`](resources.md) + of which types the `Pipeline` will be using in its [Tasks](#pipeline-tasks) + - [`timeout`](#timeout) - Specifies timeout after which the `Pipeline` will fail. + +[kubernetes-overview]: + https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields + +### Declared resources + +In order for a `Pipeline` to interact with the outside world, it will probably need +[`PipelineResources`](#creating-pipelineresources) which will be given to +`Tasks` as inputs and outputs. + +Your `Pipeline` must declare the `PipelineResources` it needs in a `resources` +section in the `spec`, giving each a name which will be used to refer to these +`PipelineResources` in the `Tasks`. + +For example: + +```yaml +spec: + resources: + - name: my-repo + type: git + - name: my-image + type: image +``` + +### Pipeline Tasks + +A `Pipeline` will execute a sequence of [`Tasks`](tasks.md) in the order they are declared in. +At a minimum, this declaration must include a reference to the `Task`: + +```yaml + tasks: + - name: build-the-image + taskRef: + name: build-push +``` + +[Declared `PipelineResources`](#declared-resources) can be given to `Task`s in the `Pipeline` as +inputs and outputs, for example: + +```yaml +spec: + tasks: + - name: build-the-image + taskRef: + name: build-push + resources: + inputs: + - name: workspace + resource: my-repo + outputs: + - name: image + resource: my-image +``` + +The resource `my-image` is expected to be given to the `deploy-app` `Task` from +the `build-app` `Task`. This means that the `PipelineResource` `my-image` must +also be declared as an output of `build-app`. + +[Parameters](tasks.md#parameters) can also be provided: + +```yaml +spec: + tasks: + - name: build-skaffold-web + taskRef: + name: build-push + params: + - name: pathToDockerFile + value: Dockerfile + - name: pathToContext + value: /workspace/examples/microservices/leeroy-web +``` + +#### from + +Sometimes you will have `Tasks` that need to take as input the output of a +previous `Task`, for example, an image built by a previous `Task`. 
+ +Express this dependency by adding `from` on `Resources` that your `Tasks` need. + +- The (optional) `from` key on an `input source` defines a set of previous + `PipelineTasks` (i.e. the named instance of a `Task`) in the `Pipeline` +- When the `from` key is specified on an input source, the version of the + resource that is from the defined list of tasks is used +- `from` can support fan in and fan out +- The name of the `PipelineResource` must correspond to a `PipelineResource` + from the `Task` that the referenced `PipelineTask` gives as an output + +For example see this `Pipeline` spec: + +```yaml +- name: build-app + taskRef: + name: build-push + resources: + outputs: + - name: image + resource: my-image +- name: deploy-app + taskRef: + name: deploy-kubectl + resources: + inputs: + - name: my-image + from: + - build-app +``` + +The resource `my-image` is expected to be given to the `deploy-app` `Task` from +the `build-app` `Task`. This means that the `PipelineResource` `my-image` must +also be declared as an output of `build-app`. + +For implementation details, see [the developer docs](docs/developers/README.md). + +## Examples + +For complete examples, see [the examples folder](https://github.com/knative/build-pipeline/tree/master/examples). \ No newline at end of file diff --git a/docs/resources.md b/docs/resources.md new file mode 100644 index 00000000000..44d27d1fe41 --- /dev/null +++ b/docs/resources.md @@ -0,0 +1,360 @@ +# PipelineResources + +`PipelinesResources` in a pipeline are the set of objects that are going to be +used as inputs to a [`Task`](task.md) and can be output by a `Task`. + +A `Task` can have multiple inputs and outputs. + +For example: + +- A `Task`'s input could be a GitHub source which contains your application code. +- A `Task`'s output can be your application container image which can be then + deployed in a cluster. +- A `Task`'s output can be a jar file to be uploaded to a storage bucket. + +--- + +- [Syntax](#syntax) + - [Declared resources](#declared-resources) + - [Pipeline Tasks](#pipeline-tasks) + - [From](#from) +- [Examples](#examples) + +## Syntax + +To define a configuration file for a `PipelineResource`, you can specify the +following fields: + +- Required: + - [`apiVersion`][kubernetes-overview] - Specifies the API version, for example + `pipeline.knative.dev/v1alpha1`. + - [`kind`][kubernetes-overview] - Specify the `PipelineResource` resource object. + - [`metadata`][kubernetes-overview] - Specifies data to uniquely identify the + `PipelineResource` object, for example a `name`. + - [`spec`][kubernetes-overview] - Specifies the configuration information for + your `PipelineResource` resource object. + - [`type`](#resource-types) - Specifies the `type` of the `PipelineResource` +- Optional: + - [`params`](#resource-types) - Parameters which are specific to each type of `PipelineResource` + +[kubernetes-overview]: + https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields + +### Resource Types + +The following `PipelineResources` are currently supported: + +- [Git resource](#git-resource) +- [Image resource](#image-resource) +- [Cluster resource](#cluster-resource) +- [Storage resource](#storage-resource) + +#### Git Resource + +Git resource represents a [git](https://git-scm.com/) repository, that contains +the source code to be built by the pipeline. 
Adding the git resource as an input +to a Task will clone this repository and allow the Task to perform the required +actions on the contents of the repo. + +To create a git resource using the `PipelineResource` CRD: + +```yaml +apiVersion: pipeline.knative.dev/v1alpha1 +kind: PipelineResource +metadata: + name: wizzbang-git + namespace: default +spec: + type: git + params: + - name: url + value: https://github.com/wizzbangcorp/wizzbang.git + - name: revision + value: master +``` + +Params that can be added are the following: + +1. `url`: represents the location of the git repository, you can use this to + change the repo, e.g. [to use a fork](#using-a-fork) +1. `revision`: Git + [revision](https://git-scm.com/docs/gitrevisions#_specifying_revisions) + (branch, tag, commit SHA or ref) to clone. You can use this to control what + commit [or branch](#using-a-branch) is used. _If no revision is specified, + the resource will default to `latest` from `master`._ + +##### Using a fork + +The `Url` parameter can be used to point at any git repository, for example to +use a GitHub fork at master: + +```yaml +spec: + type: git + params: + - name: url + value: https://github.com/bobcatfish/wizzbang.git +``` + +##### Using a branch + +The `revision` can be any +[git commit-ish (revision)](https://git-scm.com/docs/gitrevisions#_specifying_revisions). +You can use this to create a git `PipelineResource` that points at a branch, for +example: + +```yaml +spec: + type: git + params: + - name: url + value: https://github.com/wizzbangcorp/wizzbang.git + - name: revision + value: some_awesome_feature +``` + +To point at a pull request, you can use +[the pull requests's branch](https://help.github.com/articles/checking-out-pull-requests-locally/): + +```yaml +spec: + type: git + params: + - name: url + value: https://github.com/wizzbangcorp/wizzbang.git + - name: revision + value: refs/pull/52525/head +``` + +#### Image Resource + +An Image resource represents an image that lives in a remote repository. It is +usually used as [a `Task` `output`](concepts.md#task) for `Tasks` that build +images. This allows the same `Tasks` to be used to generically push to any +registry. + +Params that can be added are the following: + +1. `url`: The complete path to the image, including the registry and the image + tag +2. `digest`: The + [image digest](https://success.docker.com/article/images-tagging-vs-digests) + which uniquely identifies a particular build of an image with a particular + tag. + +For example: + +```yaml +apiVersion: pipeline.knative.dev/v1alpha1 +kind: PipelineResource +metadata: + name: kritis-resources-image + namespace: default +spec: + type: image + params: + - name: url + value: gcr.io/staging-images/kritis +``` + +#### Cluster Resource + +Cluster Resource represents a Kubernetes cluster other than the current cluster +the pipeline CRD is running on. A common use case for this resource is to deploy +your application/function on different clusters. + +The resource will use the provided parameters to create a +[kubeconfig](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) +file that can be used by other steps in the pipeline Task to access the target +cluster. 
The kubeconfig will be placed in +`/workspace//kubeconfig` on your Task container + +The Cluster resource has the following parameters: + +- Name: The name of the Resource is also given to cluster, will be used in the + kubeconfig and also as part of the path to the kubeconfig file +- URL (required): Host url of the master node +- Username (required): the user with access to the cluster +- Password: to be used for clusters with basic auth +- Token: to be used for authentication, if present will be used ahead of the + password +- Insecure: to indicate server should be accessed without verifying the TLS + certificate. +- CAData (required): holds PEM-encoded bytes (typically read from a root + certificates bundle). + +Note: Since only one authentication technique is allowed per user, either a +token or a password should be provided, if both are provided, the password will +be ignored. + +The following example shows the syntax and structure of a Cluster Resource: + +```yaml +apiVersion: pipeline.knative.dev/v1alpha1 +kind: PipelineResource +metadata: + name: test-cluster +spec: + type: cluster + params: + - name: url + value: https://10.10.10.10 # url to the cluster master node + - name: cadata + value: LS0tLS1CRUdJTiBDRVJ..... + - name: token + value: ZXlKaGJHY2lPaU.... +``` + +For added security, you can add the sensitive information in a Kubernetes +[Secret](https://kubernetes.io/docs/concepts/configuration/secret/) and populate +the kubeconfig from them. + +For example, create a secret like the following example: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: target-cluster-secrets +data: + cadatakey: LS0tLS1CRUdJTiBDRVJUSUZ......tLQo= + tokenkey: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbX....M2ZiCg== +``` + +and then apply secrets to the cluster resource + +```yaml +apiVersion: pipeline.knative.dev/v1alpha1 +kind: PipelineResource +metadata: + name: test-cluster +spec: + type: cluster + params: + - name: url + value: https://10.10.10.10 + - name: username + value: admin + secrets: + - fieldName: token + secretKey: tokenKey + secretName: target-cluster-secrets + - fieldName: cadata + secretKey: cadataKey + secretName: target-cluster-secrets +``` + +Example usage of the cluster resource in a Task: + +```yaml +apiVersion: pipeline.knative.dev/v1alpha1 +kind: Task +metadata: + name: deploy-image + namespace: default +spec: + inputs: + resources: + - name: workspace + type: git + - name: dockerimage + type: image + - name: testcluster + type: cluster + steps: + - name: deploy + image: image-wtih-kubectl + command: ["bash"] + args: + - "-c" + - kubectl --kubeconfig + /workspace/${inputs.resources.testCluster.Name}/kubeconfig --context + ${inputs.resources.testCluster.Name} apply -f /workspace/service.yaml' +``` + +#### Storage Resource + +Storage resource represents blob storage, that contains either an object or +directory. Adding the storage resource as an input to a Task will download the +blob and allow the Task to perform the required actions on the contents of the +blob. Blob storage type +[Google Cloud Storage](https://cloud.google.com/storage/)(gcs) is supported as +of now. + +##### GCS Storage Resource + +GCS Storage resource points to +[Google Cloud Storage](https://cloud.google.com/storage/) blob. 
+ +To create a GCS type of storage resource using the `PipelineResource` CRD: + +```yaml +apiVersion: pipeline.knative.dev/v1alpha1 +kind: PipelineResource +metadata: + name: wizzbang-storage + namespace: default +spec: + type: storage + params: + - name: type + value: gcs + - name: location + value: gs://some-bucket +``` + +Params that can be added are the following: + +1. `location`: represents the location of the blob storage. +2. `type`: represents the type of blob storage. Currently there is + implementation for only `gcs`. +3. `dir`: represents whether the blob storage is a directory or not. By default + storage artifact is considered not a directory. + - If artifact is a directory then `-r`(recursive) flag is used to copy all + files under source directory to GCS bucket. Eg: + `gsutil cp -r source_dir gs://some-bucket` + - If artifact is a single file like zip, tar files then copy will be only 1 + level deep(no recursive). It will not trigger copy of sub directories in + source directory. Eg: `gsutil cp source.tar gs://some-bucket.tar`. + +Private buckets can also be configured as storage resources. To access GCS +private buckets, service accounts are required with correct permissions. +The `secrets` field on the storage resource is used for configuring this +information. +Below is an example on how to create a storage resource with service account. + +1. Refer to + [official documentation](https://cloud.google.com/compute/docs/access/service-accounts) + on how to create service accounts and configuring IAM permissions to access + bucket. +2. Create a Kubernetes secret from downloaded service account json key + + ```bash + $ kubectl create secret generic bucket-sa --from-file=./service_account.json + ``` + +3. To access GCS private bucket environment variable + [`GOOGLE_APPLICATION_CREDENTIALS`](https://cloud.google.com/docs/authentication/production) + should be set so apply above created secret to the GCS storage resource under + `fieldName` key. + + ```yaml + apiVersion: pipeline.knative.dev/v1alpha1 + kind: PipelineResource + metadata: + name: wizzbang-storage + namespace: default + spec: + type: storage + params: + - name: type + value: gcs + - name: location + value: gs://some-private-bucket + - name: dir + value: "directory" + secrets: + - fieldName: GOOGLE_APPLICATION_CREDENTIALS + secretName: bucket-sa + secretKey: service_account.json + ``` diff --git a/docs/task-parameters.md b/docs/task-parameters.md deleted file mode 100644 index e5681c1ef0d..00000000000 --- a/docs/task-parameters.md +++ /dev/null @@ -1,53 +0,0 @@ -## Task Parameters - -Tasks can declare input parameters that must be supplied to the task during a -TaskRun. Some example use-cases of this include: - -- A Task that needs to know what compilation flags to use when building an - application. -- A Task that needs to know what to name a built artifact. -- A Task that supports several different strategies, and leaves the choice up to - the other. - -### Usage - -The following example shows how Tasks can be parameterized, and these parameters -can be passed to the `Task` from a `TaskRun`. - -Input parameters in the form of `${inputs.params.foo}` are replaced inside of -the build Steps. - -The following `Task` declares an input parameter called 'flags', and uses it in -the `steps.args` list. 
- -```yaml -apiVersion: pipeline.knative.dev/v1alpha1 -kind: Task -metadata: - name: task-with-parameters -spec: - inputs: - params: - - name: flags - value: string - steps: - - name: build - image: my-builder - args: ["build", "--flags=${inputs.params.flags}"] -``` - -The following `TaskRun` supplies a value for `flags`: - -```yaml -apiVersion: pipeline.knative.dev/v1alpha1 -kind: TaskRun -metadata: - name: run-with-parameters -spec: - taskRef: - name: task-with-parameters - inputs: - params: - - name: "flags" - value: "foo=bar,baz=bat" -``` diff --git a/docs/taskruns.md b/docs/taskruns.md new file mode 100644 index 00000000000..f0221b803d7 --- /dev/null +++ b/docs/taskruns.md @@ -0,0 +1,533 @@ +# TaskRuns + +Use the `TaskRun` resource object to create and run on-cluster processes to +completion. + +To create a `TaskRun` in Knative, you must first create a [`Task`](tasks.md) which +specifies one or more container images that you have implemented to perform and +complete a task. + +A `TaskRun` runs until all `steps` have completed or until a failure occurs. + +--- + +- [Syntax](#syntax) + - [Specifying a `Task`](#specifying-a-task) + - [Input parameters](#input-parameters) + - [Providing resources](#providing-resources) + - [Overriding where resources are copied from](#overriding-where-resources-are-copied-from) + - [Service Account](#service-account) +- [Cancelling a TaskRun](#cancelling-a-taskrun) +- [Examples](#examples) + +--- + +## Syntax + +To define a configuration file for a `TaskRun` resource, you can specify the +following fields: + +- Required: + - [`apiVersion`][kubernetes-overview] - Specifies the API version, for example + `pipeline.knative.dev/v1alpha1`. + - [`kind`][kubernetes-overview] - Specify the `TaskRun` resource object. + - [`metadata`][kubernetes-overview] - Specifies data to uniquely identify the + `TaskRun` resource object, for example a `name`. + - [`spec`][kubernetes-overview] - Specifies the configuration information for + your `TaskRun` resource object. + - [`taskRef` or `taskSpec`](#specifying-a-task) - Specifies the details of the + [`Task`](tasks.md) you want to run + - `trigger` - Provides data about what created this `TaskRun`. Can be `manual` + if you are creating this manually, or has a value of `PipelineRun` if it is + created as part of a [`PipelineRun`](pipelineruns.md) +- Optional: + - [`serviceAccount`](#service-account) - Specifies a `ServiceAccount` + resource object that enables your build to run with the defined + authentication information. + - [`inputs`] - Specifies [input parameters](#input-parameters) and + [input resources](#providing-resources) + - [`outputs`] - Specifies [output resources](#providing-resources) + +[kubernetes-overview]: + https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields + +### Specifying a task + +Since a `TaskRun` is an invocation of a [`Task`](tasks.md), you must specify what +`Task` to invoke. 
+ +You can do this by providing a reference to an existing `Task`: + +```yaml +spec: + taskRef: + name: read-task +``` + +Or you can embed the spec of the `Task` directly in the `TaskRun`: + +```yaml +spec: + taskSpec: + inputs: + resources: + - name: workspace + type: git + steps: + - name: build-and-push + image: gcr.io/kaniko-project/executor + command: + - /kaniko/executor + args: + - --destination=gcr.io/my-project/gohelloworld +``` + +### Input parameters + +If a `Task` has [`parameters`](tasks.md#parameters), you can specify values for them +using the `input` section: + +```yaml +spec: + inputs: + params: + - name: flags + value: -someflag +``` + +If a parameter does not have a default value, it must be specified. + +### Providing resources + +If a `Task` requires [input resources](tasks.md#input-resources) or +[output resources](tasks.md#output-resources), they must be provided +to run the `Task`. + +They can be provided via references to existing [`PipelineResources`](resources.md): + +```yaml +spec: + inputs: + resources: + - name: workspace + resourceRef: + name: java-git-resource +``` + +Or by embedding the specs of the resources directly: + +```yaml +spec: + inputs: + resources: + - name: workspace + resourceSpec: + type: git + params: + - name: url + value: https://github.com/pivotal-nader-ziada/gohelloworld +``` + +### Service Account + +Specifies the `name` of a `ServiceAccount` resource object. Use the +`serviceAccount` field to run your `Task` with the privileges of the +specified service account. If no `serviceAccount` field is specified, your +`Task` runs using the +[`default` service account](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server) +that is in the +[namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) +of the `TaskRun` resource object. + +For examples and more information about specifying service accounts, see the +[`ServiceAccount`](./auth.md) reference topic. + +## Cancelling a TaskRun + +In order to cancel a running task (`TaskRun`), you need to update its spec to +mark it as cancelled. Running Pods will be deleted. + +```yaml +apiVersion: pipeline.knative.dev/v1alpha1 +kind: TaskRun +metadata: + name: go-example-git +spec: + # […] + status: "TaskRunCancelled" +``` + +### Overriding where resources are copied from + +When specifying input and output `PipelineResources`, you can optionally specify +`paths` for each resource. `paths` will be used by `TaskRun` as the resource's +new source paths i.e., copy the resource from specified list of paths. `TaskRun` +expects the folder and contents to be already present in specified paths. +`paths` feature could be used to provide extra files or altered version of +existing resource before execution of steps. + +Output resource includes name and reference to pipeline resource and optionally +`paths`. `paths` will be used by `TaskRun` as the resource's new destination +paths i.e., copy the resource entirely to specified paths. `TaskRun` will be +responsible for creating required directories and copying contents over. `paths` +feature could be used to inspect the results of taskrun after execution of +steps. + +`paths` feature for input and output resource is heavily used to pass same +version of resources across tasks in context of pipelinerun. + +In the following example, task and taskrun are defined with input resource, +output resource and step which builds war artifact. 
After execution of +taskrun(`volume-taskrun`), `custom` volume will have entire resource +`java-git-resource` (including the war artifact) copied to the destination path +`/custom/workspace/`. + +```yaml +apiVersion: pipeline.knative.dev/v1alpha1 +kind: Task +metadata: + name: volume-task + namespace: default +spec: + generation: 1 + inputs: + resources: + - name: workspace + type: git + steps: + - name: build-war + image: objectuser/run-java-jar #https://hub.docker.com/r/objectuser/run-java-jar/ + command: jar + args: ["-cvf", "projectname.war", "*"] + volumeMounts: + - name: custom-volume + mountPath: /custom +``` + +```yaml +apiVersion: pipeline.knative.dev/v1alpha1 +kind: TaskRun +metadata: + name: volume-taskrun + namespace: default +spec: + taskRef: + name: volume-task + inputs: + resources: + - name: workspace + resourceRef: + name: java-git-resource + outputs: + resources: + - name: workspace + paths: + - /custom/workspace/ + resourceRef: + name: java-git-resource + volumes: + - name: custom-volume + emptyDir: {} +``` + +## Examples + +- [Example TaskRun](#example-taskrun) +- [Example TaskRun with embedded specs](#example-with-embedded-specs) +- [Example Task reuse](#example-task-reuse) + +### Example TaskRun + +To run a `Task`, create a new `TaskRun` which defines all inputs, outputs that +the `Task` needs to run. Below is an example where Task `read-task` is run by +creating `read-repo-run`. Task `read-task` has git input resource and TaskRun +`read-repo-run` includes reference to `go-example-git`. + +```yaml +apiVersion: pipeline.knative.dev/v1alpha1 +kind: TaskRun +metadata: + name: read-repo-run +spec: + taskRef: + name: read-task + trigger: + type: manual + inputs: + resources: + - name: workspace + resourceRef: + name: go-example-git +--- +apiVersion: pipeline.knative.dev/v1alpha1 +kind: PipelineResource +metadata: + name: go-example-git +spec: + type: git + params: + - name: url + value: https://github.com/pivotal-nader-ziada/gohelloworld +--- +apiVersion: pipeline.knative.dev/v1alpha1 +kind: Task +metadata: + name: read-task +spec: + inputs: + resources: + - name: workspace + type: git + steps: + - name: readme + image: ubuntu + command: + - /bin/bash + args: + - "cat README.md" +``` + +### Example with embedded specs + +Another way of running a Task is embedding the TaskSpec in the taskRun yaml. +This can be useful for "one-shot" style runs, or debugging. +TaskRun resource can include either Task reference or TaskSpec but not both. +Below is an example where `build-push-task-run-2` includes `TaskSpec` and no +reference to Task. + +```yaml +apiVersion: pipeline.knative.dev/v1alpha1 +kind: PipelineResource +metadata: + name: go-example-git +spec: + type: git + params: + - name: url + value: https://github.com/pivotal-nader-ziada/gohelloworld +--- +apiVersion: pipeline.knative.dev/v1alpha1 +kind: TaskRun +metadata: + name: build-push-task-run-2 +spec: + trigger: + type: manual + inputs: + resources: + - name: workspace + resourceRef: + name: go-example-git + taskSpec: + inputs: + resources: + - name: workspace + type: git + steps: + - name: build-and-push + image: gcr.io/kaniko-project/executor + command: + - /kaniko/executor + args: + - --destination=gcr.io/my-project/gohelloworld +``` + +Input and output resources can also be embedded without creating Pipeline +Resources. TaskRun resource can include either a Pipeline Resource reference or +a Pipeline Resource Spec but not both. 
Below is an example where Git Pipeline +Resource Spec is provided as input for TaskRun `read-repo`. + +```yaml +apiVersion: pipeline.knative.dev/v1alpha1 +kind: TaskRun +metadata: + name: read-repo +spec: + taskRef: + name: read-task + trigger: + type: manual + inputs: + resources: + - name: workspace + resourceSpec: + type: git + params: + - name: url + value: https://github.com/pivotal-nader-ziada/gohelloworld +``` + +**Note**: TaskRun can embed both TaskSpec and resource spec at the same time. +The `TaskRun` will also serve as a record of the history of the invocations of the +`Task`. + +### Example Task Reuse + +For the sake of illustrating re-use, here are several example [`TaskRuns`](taskrun.md) +(including referenced [`PipelineResources`](resource.md)) instantiating the [`Task` +(`dockerfile-build-and-push`) in the `Task` example docs](tasks.md#example-task). + +Build `mchmarny/rester-tester`: + +```yaml +# The PipelineResource +metadata: + name: mchmarny-repo +spec: + type: git + params: + - name: url + value: https://github.com/mchmarny/rester-tester.git +``` + +```yaml +# The TaskRun +spec: + taskRef: + name: dockerfile-build-and-push + inputs: + resources: + - name: workspace + resourceRef: + name: mchmarny-repo + params: + - name: IMAGE + value: gcr.io/my-project/rester-tester +``` + +Build `googlecloudplatform/cloud-builder`'s `wget` builder: + +```yaml +# The PipelineResource +metadata: + name: cloud-builder-repo +spec: + type: git + params: + - name: url + value: https://github.com/googlecloudplatform/cloud-builders.git +``` + +```yaml +# The TaskRun +spec: + taskRef: + name: dockerfile-build-and-push + inputs: + resources: + - name: workspace + resourceRef: + name: cloud-builder-repo + params: + - name: IMAGE + value: gcr.io/my-project/wget + # Optional override to specify the subdirectory containing the Dockerfile + - name: DIRECTORY + value: /workspace/wget +``` + +Build `googlecloudplatform/cloud-builder`'s `docker` builder with `17.06.1`: + +```yaml +# The PipelineResource +metadata: + name: cloud-builder-repo +spec: + type: git + params: + - name: url + value: https://github.com/googlecloudplatform/cloud-builders.git +``` + +```yaml +# The TaskRun +spec: + taskRef: + name: dockerfile-build-and-push + inputs: + resources: + - name: workspace + resourceRef: + name: cloud-builder-repo + params: + - name: IMAGE + value: gcr.io/my-project/docker + # Optional overrides + - name: DIRECTORY + value: /workspace/docker + - name: DOCKERFILE_NAME + value: Dockerfile-17.06.1 +``` + +#### Using a `ServiceAccount` + +Specifying a `ServiceAccount` to access a private `git` repository: + +```yaml +apiVersion: pipeline.knative.dev/v1alpha1 +kind: TaskRun +metadata: + name: test-task-with-serviceaccount-git-ssh +spec: + serviceAccount: test-task-robot-git-ssh + inputs: + resources: + - name: workspace + type: git + steps: + - name: config + image: ubuntu + command: ["/bin/bash"] + args: ["-c", "cat README.md"] +``` + +Where `serviceAccount: test-build-robot-git-ssh` references the following +`ServiceAccount`: + +```yaml +apiVersion: v1 +kind: ServiceAccount +metadata: + name: test-task-robot-git-ssh +secrets: + - name: test-git-ssh +``` + +And `name: test-git-ssh`, references the following `Secret`: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: test-git-ssh + annotations: + pipeline.knative.dev/git-0: github.com +type: kubernetes.io/ssh-auth +data: + # Generated by: + # cat id_rsa | base64 -w 0 + ssh-privatekey: 
LS0tLS1CRUdJTiBSU0EgUFJJVk.....[example]
+  # Generated by:
+  # ssh-keyscan github.com | base64 -w 0
+  known_hosts: Z2l0aHViLmNvbSBzc2g.....[example]
+```
+
+Specifies the `name` of a `ServiceAccount` resource object. Use the
+`serviceAccount` field to run your `Task` with the privileges of the
+specified service account. If no `serviceAccount` field is specified, your
+`Task` runs using the
+[`default` service account](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server)
+that is in the
+[namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/)
+of the `Task` resource object.
+
+For examples and more information about specifying service accounts, see the
+[`ServiceAccount`](./auth.md) reference topic.
+
+---
+
+Except as otherwise noted, the content of this page is licensed under the
+[Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/),
+and code samples are licensed under the
+[Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
diff --git a/docs/tasks.md b/docs/tasks.md
new file mode 100644
index 00000000000..b075755e9bb
--- /dev/null
+++ b/docs/tasks.md
@@ -0,0 +1,447 @@
+# Tasks
+
+A `Task` (or a [`ClusterTask`](#clustertask)) is a collection of sequential steps you would want to run as part of
+your continuous integration flow. A `Task` will run inside a container on your
+cluster.
+
+A `Task` declares:
+
+- [Inputs](#inputs)
+- [Outputs](#outputs)
+- [Steps](#steps)
+
+A `Task` is available within a namespace, while a `ClusterTask` is available across the entire Kubernetes cluster.
+
+---
+
+- [ClusterTasks](#clustertask)
+- [Syntax](#syntax)
+  - [Steps](#steps)
+  - [Inputs](#inputs)
+  - [Outputs](#outputs)
+  - [Controlling where resources are mounted](#controlling-where-resources-are-mounted)
+  - [Volumes](#volumes)
+  - [Templating](#templating)
+- [Examples](#examples)
+
+## ClusterTask
+
+A `ClusterTask` is similar to a `Task`, but with a cluster-wide scope.
+
+To use a `ClusterTask`, a `kind` should be added to the `taskRef`. The default
+kind is `Task`, which represents a namespaced `Task`.
+
+```yaml
+apiVersion: pipeline.knative.dev/v1alpha1
+kind: Pipeline
+metadata:
+  name: demo-pipeline
+  namespace: default
+spec:
+  tasks:
+    - name: build-skaffold-web
+      taskRef:
+        name: build-push
+        kind: ClusterTask
+      params: ....
+```
+
+A `ClusterTask` functions exactly like a `Task`, and as such all references to `Task` below also describe `ClusterTask`.
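+
+For illustration, here is a minimal sketch of what a `ClusterTask` definition itself
+might look like (the `echo-hello` name and step are hypothetical); note that, unlike
+a `Task`, its `metadata` carries no `namespace`:
+
+```yaml
+apiVersion: pipeline.knative.dev/v1alpha1
+kind: ClusterTask
+metadata:
+  name: echo-hello
+spec:
+  steps:
+    # A single step that prints a message; any container image can be used here.
+    - name: echo
+      image: busybox
+      command: ["echo"]
+      args: ["hello world!"]
+```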
+
+## Syntax
+
+To define a configuration file for a `Task` resource, you can specify the
+following fields:
+
+- Required:
+  - [`apiVersion`][kubernetes-overview] - Specifies the API version, for example
+    `pipeline.knative.dev/v1alpha1`.
+  - [`kind`][kubernetes-overview] - Specifies the `Task` resource object.
+  - [`metadata`][kubernetes-overview] - Specifies data to uniquely identify the
+    `Task` resource object, for example a `name`.
+  - [`spec`][kubernetes-overview] - Specifies the configuration information for
+    your `Task` resource object. `Task` steps must be defined through the
+    following field:
+    - [`steps`](#steps) - Specifies one or more container images that you want
+      to run in your `Task`.
+- Optional:
+  - [`inputs`](#inputs) - Specifies parameters and [`PipelineResources`](resources.md)
+    needed by your `Task`.
+  - [`outputs`](#outputs) - Specifies [`PipelineResources`](resources.md) output by your `Task`.
+  - [`serviceAccount`](#service-account) - Specifies a `ServiceAccount`
+    resource object that enables your `Task` to run with the defined
+    authentication information.
+  - [`volumes`](#volumes) - Specifies one or more volumes that you want to make
+    available to your `Task`.
+  - [`timeout`](#timeout) - Specifies the timeout after which the `TaskRun` will fail.
+  - [`nodeSelector`] - A selector that must match a node's labels for the pod to
+    be scheduled on that node.
+    More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
+  - [`affinity`] - The pod's scheduling constraints. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
+
+[kubernetes-overview]:
+  https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields
+
+The following example is a non-working sample where most of the possible
+configuration fields are used:
+
+```yaml
+apiVersion: pipeline.knative.dev/v1alpha1
+kind: Task
+metadata:
+  name: example-task-name
+spec:
+  serviceAccount: task-auth-example
+  source:
+    git:
+      url: https://github.com/example/build-example.git
+      revision: master
+  inputs:
+    resources:
+      - name: workspace
+        type: git
+    params:
+      - name: pathToDockerFile
+        description: The path to the dockerfile to build
+        default: /workspace/workspace/Dockerfile
+  outputs:
+    resources:
+      - name: builtImage
+        type: image
+  steps:
+    - name: ubuntu-example
+      image: ubuntu
+      args: ["ubuntu-build-example", "SECRETS-example.md"]
+    - image: gcr.io/example-builders/build-example
+      args: ['echo', '${inputs.params.pathToDockerFile}']
+    - name: dockerfile-pushexample
+      image: gcr.io/example-builders/push-example
+      args: ["push", "${outputs.resources.builtImage.url}"]
+      volumeMounts:
+        - name: docker-socket-example
+          mountPath: /var/run/docker.sock
+  volumes:
+    - name: example-volume
+      emptyDir: {}
+```
+
+### Steps
+
+The `steps` field is required. You define one or more `steps` fields to define
+the body of a `Task`.
+
+Each step in a `Task` must specify a container image that adheres to the
+[container contract](./container-contract.md). For each of the `steps` fields,
+or container images that you define:
+
+- The container images are run and evaluated in order, starting
+  from the top of the configuration file.
+- Each container image runs until completion or until the first failure is
+  detected.
+
+### Inputs
+
+A `Task` can declare the inputs it needs, which can be either or both of:
+
+- [`parameters`](#parameters)
+- [`input resources`](#input-resources)
+
+#### Parameters
+
+Tasks can declare input parameters that must be supplied to the task during a
+TaskRun. Some example use-cases of this include:
+
+- A Task that needs to know what compilation flags to use when building an
+  application.
+- A Task that needs to know what to name a built artifact.
+- A Task that supports several different strategies, and leaves the choice up to
+  the `TaskRun` that invokes it.
+
+##### Usage
+
+The following example shows how Tasks can be parameterized, and these parameters
+can be passed to the `Task` from a `TaskRun`.
+
+Input parameters in the form of `${inputs.params.foo}` are replaced inside of
+the [`steps`](#steps) (see also [templating](#templating)).
+
+The following `Task` declares an input parameter called 'flags', and uses it in
+the `steps.args` list.
+
+```yaml
+apiVersion: pipeline.knative.dev/v1alpha1
+kind: Task
+metadata:
+  name: task-with-parameters
+spec:
+  inputs:
+    params:
+      - name: flags
+        default: -someflag
+  steps:
+    - name: build
+      image: my-builder
+      args: ["build", "--flags=${inputs.params.flags}"]
+```
+
+The following `TaskRun` supplies a value for `flags`:
+
+```yaml
+apiVersion: pipeline.knative.dev/v1alpha1
+kind: TaskRun
+metadata:
+  name: run-with-parameters
+spec:
+  taskRef:
+    name: task-with-parameters
+  inputs:
+    params:
+      - name: "flags"
+        value: "foo=bar,baz=bat"
+```
+
+#### Input resources
+
+Use the input [`PipelineResources`](resources.md) field to provide your
+`Task` with data or context that it needs.
+
+Input resources, like source code (git) or artifacts, are dumped at the path
+`/workspace/task_resource_name` within a mounted
+[volume](https://kubernetes.io/docs/concepts/storage/volumes/)
+and are available to all [`steps`](#steps) of your `Task`. The path that the
+resources are mounted at can be overridden with the `targetPath` value.
+
+### Outputs
+
+`Task` definitions can include both input and output [`PipelineResource`](resources.md)
+declarations. If a resource is only declared as an output, then the copy of the
+resource to be uploaded or shared with the next Task is expected to be placed
+under the path `/workspace/output/resource_name/`.
+
+```yaml
+resources:
+  outputs:
+    name: storage-gcs
+    type: gcs
+steps:
+  - image: objectuser/run-java-jar #https://hub.docker.com/r/objectuser/run-java-jar/
+    command: [jar]
+    args:
+      ["-cvf", "-o", "/workspace/output/storage-gcs/", "projectname.war", "*"]
+    env:
+      - name: "FOO"
+        value: "world"
+```
+
+**Note**: If the Task is relying on output resource functionality then the containers
+in the Task `steps` field cannot mount anything in the path `/workspace/output`.
+
+In the following example, the `tar-artifact` resource is used as both an input
+and an output of the Task, so the input resource is downloaded into the
+directory `customworkspace` (as specified by
+[`targetPath`](#controlling-where-resources-are-mounted)). The `untar` step
+extracts the tar file into the `tar-scratch-space` directory, `edit-tar` adds a
+new file, and the last step, `tar-it-up`, creates a new tar file and places it
+in the `/workspace/customworkspace/` directory. After the Task steps finish
+executing, the (new) tar file in the `/workspace/customworkspace` directory will
+be uploaded to the bucket defined in the `tar-artifact` resource definition.
+
+```yaml
+resources:
+  inputs:
+    name: tar-artifact
+    targetPath: customworkspace
+  outputs:
+    name: tar-artifact
+steps:
+ - name: untar
+   image: ubuntu
+   command: ["/bin/bash"]
+   args: ['-c', 'mkdir -p /workspace/tar-scratch-space/ && tar -xvf /workspace/customworkspace/rules_docker-master.tar -C /workspace/tar-scratch-space/']
+ - name: edit-tar
+   image: ubuntu
+   command: ["/bin/bash"]
+   args: ['-c', 'echo crazy > /workspace/tar-scratch-space/rules_docker-master/crazy.txt']
+ - name: tar-it-up
+   image: ubuntu
+   command: ["/bin/bash"]
+   args: ['-c', 'cd /workspace/tar-scratch-space/ && tar -cvf /workspace/customworkspace/rules_docker-master.tar rules_docker-master']
+```
+
+### Controlling where resources are mounted
+
+Tasks can optionally provide a `targetPath` to initialize a resource in a specific
+directory. 
If `targetPath` is set, the resource will be initialized under
+`/workspace/targetPath`. If `targetPath` is not specified, the resource will be
+initialized under `/workspace`. The following example demonstrates how a git input
+repository could be initialized in `$GOPATH` to run tests:
+
+```yaml
+apiVersion: pipeline.knative.dev/v1alpha1
+kind: Task
+metadata:
+  name: task-with-input
+  namespace: default
+spec:
+  inputs:
+    resources:
+      - name: workspace
+        type: git
+        targetPath: go/src/github.com/knative/build-pipeline
+  steps:
+    - name: unit-tests
+      image: golang
+      command: ["go"]
+      args:
+        - "test"
+        - "./..."
+      workingDir: "/workspace/go/src/github.com/knative/build-pipeline"
+      env:
+        - name: GOPATH
+          value: /workspace/go
+```
+
+### Volumes
+
+Specifies one or more
+[volumes](https://kubernetes.io/docs/concepts/storage/volumes/) that you want to
+make available to your `Task`, including all the [`steps`](#steps). Add volumes to
+complement the volumes that are implicitly created for [input resources](#input-resources)
+and [output resources](#outputs).
+
+For example, use volumes to accomplish one of the following common tasks:
+
+- [Mount a Kubernetes secret](./auth.md).
+- Create an `emptyDir` volume to act as a cache for use across multiple build
+  steps. Consider using a persistent volume for inter-build caching.
+- Mount a host's Docker socket to use a `Dockerfile` for container image builds.
+  **Note:** Building a container image using `docker build` on-cluster is _very
+  unsafe_. Use [kaniko](https://github.com/GoogleContainerTools/kaniko) instead.
+  This is used only for the purposes of demonstration.
+
+### Templating
+
+`Tasks` support templating using values from all [`inputs`](#inputs) and [`outputs`](#outputs).
+
+[`PipelineResources`](resources.md) can be referenced in a `Task` spec like this, where `NAME` is the
+Resource Name and `KEY` is one of the resource's `params`:
+
+```shell
+${inputs.resources.NAME.KEY}
+```
+
+Or for an output resource:
+
+```shell
+${outputs.resources.NAME.KEY}
+```
+
+To access an input parameter, replace `resources` with `params` as below:
+
+```shell
+${inputs.params.NAME}
+```
+
+## Examples
+
+Use these code snippets to help you understand how to define your `Tasks`.
+
+- [Example of image building and pushing](#example-task)
+- [Mounting extra volumes](#using-an-extra-volume)
+- [Authenticating with `ServiceAccount`](taskruns.md#using-a-serviceaccount)
+
+_Tip: See the collection of simple
+[examples](https://github.com/knative/build-pipeline/tree/master/examples) for additional
+code samples._
+
+### Example Task
+
+For example, a `Task` to encapsulate a `Dockerfile` build might look
+something like this:
+
+**Note:** Building a container image using `docker build` on-cluster is _very
+unsafe_. Use [kaniko](https://github.com/GoogleContainerTools/kaniko) instead.
+This is used only for the purposes of demonstration.
+
+```yaml
+spec:
+  inputs:
+    resources:
+      - name: workspace
+        type: git
+    params:
+      # These may be overridden, but provide sensible defaults.
+      - name: directory
+        description: The directory containing the build context.
+ default: /workspace + - name: dockerfileName + description: The name of the Dockerfile + default: Dockerfile + outputs: + resources: + - name: builtImage + type: image + steps: + - name: dockerfile-build + image: gcr.io/cloud-builders/docker + workingDir: "${inputs.params.directory}" + args: + [ + "build", + "--no-cache", + "--tag", + "${outputs.resources.image}", + "--file", + "${inputs.params.dockerfileName}", + ".", + ] + volumeMounts: + - name: docker-socket + mountPath: /var/run/docker.sock + + - name: dockerfile-push + image: gcr.io/cloud-builders/docker + args: ["push", "${outputs.resources.image}"] + volumeMounts: + - name: docker-socket + mountPath: /var/run/docker.sock + + # As an implementation detail, this template mounts the host's daemon socket. + volumes: + - name: docker-socket + hostPath: + path: /var/run/docker.sock + type: Socket +``` + +#### Using an extra volume + +Mounting multiple volumes: + +```yaml +spec: + steps: + - image: ubuntu + entrypoint: ["bash"] + args: ["-c", "curl https://foo.com > /var/my-volume"] + volumeMounts: + - name: my-volume + mountPath: /var/my-volume + + - image: ubuntu + args: ["cat", "/etc/my-volume"] + volumeMounts: + - name: my-volume + mountPath: /etc/my-volume + + volumes: + - name: my-volume + emptyDir: {} +``` +--- + +Except as otherwise noted, the content of this page is licensed under the +[Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/), +and code samples are licensed under the +[Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0). diff --git a/docs/tutorial.md b/docs/tutorial.md index 090a9e9ba08..3aa3235d316 100644 --- a/docs/tutorial.md +++ b/docs/tutorial.md @@ -119,12 +119,12 @@ In more common scenarios, a Task needs multiple steps with input and output resources to process. For example a Task could fetch source code from a GitHub repository and build a Docker image from it. -[`PipelinesResources`](concepts.md#pipelineresources) are used to define the +[`PipelinesResources`](resources.md) are used to define the artifacts that can be passed in and out of a task. There are a few system defined resource types ready to use, and the following are two examples of the resources commonly needed. -The [`git` resource](using.md#git-resource) represents a git repository with a +The [`git` resource](resources.md#git-resource) represents a git repository with a specific revision: ```yaml @@ -141,7 +141,7 @@ spec: value: https://github.com/GoogleContainerTools/skaffold ``` -The [`image` resource](using.md#image-resource) represents the image to be built +The [`image` resource](resources.md#image-resource) represents the image to be built by the task: ```yaml @@ -332,7 +332,7 @@ resource definition. A [`Pipeline`](concepts.md#pipelines) defines a list of tasks to execute in order, while also indicating if any outputs should be used as inputs of a -following task by using [the `from` field](using.md#from). The same templating +following task by using [the `from` field](pipelines.md#from). The same templating you used in tasks is also available in pipeline. 
For example: diff --git a/docs/using.md b/docs/using.md deleted file mode 100644 index 3b53a1fcbfe..00000000000 --- a/docs/using.md +++ /dev/null @@ -1,1048 +0,0 @@ -# How to use the Pipeline CRD - -- [How do I create a new Pipeline?](#creating-a-pipeline) -- [How do I make a Task?](#creating-a-task) -- [How do I make Resources?](#creating-resources) -- [How do I run a Pipeline?](#running-a-pipeline) -- [How do I run a Task on its own?](#running-a-task) -- [How do I ensure a Pipeline or Task stops if it runs for too long?](#timing-out-pipelines-and-tasks) -- [How do I troubleshoot a PipelineRun?](#troubleshooting) -- [How do I follow logs?](../test/logs/README.md) - -## Creating a Pipeline - -1. Create or copy [Task definitions](#creating-a-task) for the tasks you’d like - to run. Some can be generic and reused (e.g. building with - [Kaniko](https://github.com/GoogleContainerTools/kaniko)) and others will be - specific to your project (e.g. running your particular set of unit tests). -2. Create a `Pipeline` which expresses the Tasks you would like to run and what - [PipelineResources](#resources-in-a-pipeline) the Tasks need. Use - [`from`](#from) to express when the input of a `Task` should come from the - output of a previous `Task`. - -See [the example Pipeline](../examples/pipeline.yaml). - -### PipelineResources in a Pipeline - -In order for a `Pipeline` to interact with the outside world, it will probably need -[`PipelineResources`](#creating-pipelineresources) which will be given to -`Tasks` as inputs and outputs. - -Your `Pipeline` must declare the `PipelineResources` it needs in a `resources` -section in the `spec`, giving each a name which will be used to refer to these -`PipelineResources` in the `Tasks`. - -For example: - -```yaml -spec: - resources: - - name: my-repo - type: git - - name: my-image - type: image -``` - -These `PipelineResources` can then be given to `Task`s in the `Pipeline` as -inputs and outputs, for example: - -```yaml -spec: - #... - tasks: - - name: build-the-image - taskRef: - name: build-push - resources: - inputs: - - name: workspace - resource: my-repo - outputs: - - name: image - resource: my-image -``` - -### From - -Sometimes you will have `Tasks` that need to take as input the output of a -previous `Task`, for example, an image built by a previous `Task`. - -Express this dependency by adding `from` on `Resources` that your `Tasks` need. - -- The (optional) `from` key on an `input source` defines a set of previous - `PipelineTasks` (i.e. the named instance of a `Task`) in the `Pipeline` -- When the `from` key is specified on an input source, the version of the - resource that is from the defined list of tasks is used -- `from` can support fan in and fan out -- The name of the `PipelineResource` must correspond to a `PipelineResource` - from the `Task` that the referenced `PipelineTask` gives as an output - -For example see this `Pipeline` spec: - -```yaml -- name: build-app - taskRef: - name: build-push - resources: - outputs: - - name: image - resource: my-image -- name: deploy-app - taskRef: - name: deploy-kubectl - resources: - inputs: - - name: my-image - from: - - build-app -``` - -The resource `my-image` is expected to be given to the `deploy-app` `Task` from -the `build-app` `Task`. This means that the `PipelineResource` `my-image` must -also be declared as an output of `build-app`. - -For implementation details, see [the developer docs](docs/developers/README.md). 
- -## Creating a Task - -To create a Task, you must: - -- Define [parameters](task-parameters.md) (i.e. string inputs) for your `Task` -- Define the inputs and outputs of the `Task` as - [`Resources`](./Concepts.md#pipelineresources) -- Create a `Step` for each action you want to take in the `Task` - -`Steps` are images which comply with the -[container contract](#container-contract). - -### Container Contract - -Each container image used as a step in a [`Task`](#task) must comply with a -specific contract. - -#### Entrypoint - -When containers are run in a `Task`, the `entrypoint` of the container will be -overwritten with a custom binary. The plan is to use this custom binary for -controlling the execution of step containers ([#224](https://github.com/knative/build-pipeline/issues/224)) and log streaming -[#107](https://github.com/knative/build-pipeline/issues/107), though currently -it will write logs only to an [`emptyDir`](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) -(which cannot be read from after the pod has finished executing, so logs must be obtained -[via k8s logs](https://kubernetes.io/docs/concepts/cluster-administration/logging/), -using a tool such as [test/logs/README.md](../test/logs/README.md), -or setting up an external system to consume logs). - -When `command` is not explicitly set, the controller will attempt to lookup the -entrypoint from the remote registry. - -Due to this metadata lookup, if you use a private image as a step inside a -`Task`, the build-pipeline controller needs to be able to access that registry. -The simplest way to accomplish this is to add a `.docker/config.json` at -`$HOME/.docker/config.json`, which will then be used by the controller when -performing the lookup - -For example, in the following Task with the images, -`gcr.io/cloud-builders/gcloud` and `gcr.io/cloud-builders/docker`, the -entrypoint would be resolved from the registry, resulting in the tasks running -`gcloud` and `docker` respectively. - -```yaml -spec: - steps: - - image: gcr.io/cloud-builders/gcloud - command: [gcloud] - - image: gcr.io/cloud-builders/docker - command: [docker] -``` - -However, if the steps specified a custom `command`, that is what would be used. - -```yaml -spec: - steps: - - image: gcr.io/cloud-builders/gcloud - command: - - bash - - -c - - echo "Hello!" -``` - -You can also provide `args` to the image's `command`: - -```yaml -steps: - - image: ubuntu - command: ["/bin/bash"] - args: ["-c", "echo hello $FOO"] - env: - - name: "FOO" - value: "world" -``` - -##### Configure Entrypoint image - -To run a step, the `pod` will need to pull an `Entrypoint` image. Maybe the -image is hard to pull in your environment, so we provide a way for you to -configure that by edit the `image`'s value in a configmap named -[`config-entrypoint`](./../config/config-entrypoint.yaml). - -### Resource sharing between tasks - -Pipeline `Tasks` are allowed to pass resources from previous `Tasks` via the -[`from`](#from) field. This feature is implemented using the two -following alternatives: - -- Persistent Volume Claims under the hood but however has an implication - that tasks cannot have any volume mounted under path `/pvc`. - -- [GCS storage bucket](https://cloud.google.com/storage/docs/json_api/v1/buckets) - A storage bucket can be configured using a ConfigMap named [`config-artifact-bucket`](./../config/config-artifact-bucket.yaml). 
- with the following attributes: -- `location`: the address of the bucket (for example gs://mybucket) -- `bucket.service.account.secret.name`: the name of the secret that will contain the credentials for the service account - with access to the bucket -- `bucket.service.account.secret.key`: the key in the secret with the required service account json -The bucket is configured with a retention policy of 24 hours after which files will be deleted - -### Outputs - -`Task` definitions can include inputs and outputs resource declaration. If -specific set of resources are only declared in output then a copy of resource to -be uploaded or shared for next Task is expected to be present under the path -`/workspace/output/resource_name/`. - -```yaml -resources: - outputs: - name: storage-gcs - type: gcs -steps: - - image: objectuser/run-java-jar #https://hub.docker.com/r/objectuser/run-java-jar/ - command: [jar] - args: - ["-cvf", "-o", "/workspace/output/storage-gcs/", "projectname.war", "*"] - env: - - name: "FOO" - value: "world" -``` - -**Note**: If the Task is relying on output resource functionality then the containers -in the Task `steps` field cannot mount anything in the path `/workspace/output`. - -If resource is declared in both input and output then input resource, then -destination path of input resource is used instead of -`/workspace/output/resource_name`. - -In the following example Task `tar-artifact` resource is used both as input and -output so input resource is downloaded into directory `customworkspace`(as -specified in [`targetPath`](#targetpath)). Step `untar` extracts tar file into -`tar-scratch-space` directory , `edit-tar` adds a new file and last step -`tar-it-up` creates new tar file and places in `/workspace/customworkspace/` -directory. After execution of the Task steps, (new) tar file in directory -`/workspace/customworkspace` will be uploaded to the bucket defined in -`tar-artifact` resource definition. - -```yaml -resources: - inputs: - name: tar-artifact - targetPath: customworkspace - outputs: - name: tar-artifact -steps: - - name: untar - image: ubuntu - command: ["/bin/bash"] - args: ['-c', 'mkdir -p /workspace/tar-scratch-space/ && tar -xvf /workspace/customworkspace/rules_docker-master.tar -C /workspace/tar-scratch-space/'] - - name: edit-tar - image: ubuntu - command: ["/bin/bash"] - args: ['-c', 'echo crazy > /workspace/tar-scratch-space/rules_docker-master/crazy.txt'] - - name: tar-it-up - image: ubuntu - command: ["/bin/bash"] - args: ['-c', 'cd /workspace/tar-scratch-space/ && tar -cvf /workspace/customworkspace/rules_docker-master.tar rules_docker-master'] -``` - -#### targetPath - -Tasks can opitionally provide `targetPath` to initialize resource in specific -directory. If `targetPath` is set then resource will be initialized under -`/workspace/targetPath`. If `targetPath` is not specified then resource will be -initialized under `/workspace`. Following example demonstrates how git input -repository could be initialized in `$GOPATH` to run tests: - -```yaml -apiVersion: pipeline.knative.dev/v1alpha1 -kind: Task -metadata: - name: task-with-input - namespace: default -spec: - inputs: - resources: - - name: workspace - type: git - targetPath: go/src/github.com/knative/build-pipeline - steps: - - name: unit-tests - image: golang - command: ["go"] - args: - - "test" - - "./..." 
- workingDir: "/workspace/go/src/github.com/knative/build-pipeline" - env: - - name: GOPATH - value: /workspace/go -``` - -### Conventions - -- `/workspace/`: - [`PipelineResources` are made available in this mounted dir](#creating-resources) -- `/builder/home`: This volume is exposed to steps via `$HOME`. -- Credentials attached to the Build's service account may be exposed as Git or - Docker credentials as outlined - [in the auth docs](https://github.com/knative/docs/blob/master/build/auth.md#authentication). - -### Templating - -Tasks support templating using values from all `inputs` and `outputs`. Both -`Resources` and `Params` can be used inside the `Spec` of a `Task`. - -`Resources` can be referenced in a `Task` spec like this, where `NAME` is the -Resource Name and `KEY` is one of `name`, `url`, `type` or `revision`: - -```shell -${inputs.resources.NAME.KEY} -``` - -To access a `Param`, replace `resources` with `params` as below: - -```shell -${inputs.params.NAME} -``` - -## Cluster Task - -Similar to Task, but with a cluster scope. - -In case of using a ClusterTask, the `TaskRef` kind should be added. The default -kind is Task which represents a namespaced Task - -```yaml -apiVersion: pipeline.knative.dev/v1alpha1 -kind: Pipeline -metadata: - name: demo-pipeline - namespace: default -spec: - tasks: - - name: build-skaffold-web - taskRef: - name: build-push - kind: ClusterTask - params: .... -``` - -## Running a Pipeline - -In order to run a Pipeline, you will need to provide: - -1. A Pipeline to run (see [creating a Pipeline](#creating-a-pipeline)) -2. The `PipelineResources` to use with this Pipeline. - -On its own, a `Pipeline` declares what `Tasks` to run, and dependencies between -`Task` inputs and outputs via [`from`](#from). When running a `Pipeline`, you -will need to specify the `PipelineResources` to use with it. One `Pipeline` may -need to be run with different `PipelineResources` in cases such as: - -- When triggering the run of a `Pipeline` against a pull request, the triggering - system must specify the commitish of a git `PipelineResource` to use -- When invoking a `Pipeline` manually against one's own setup, one will need to - ensure that one's own GitHub fork (via the git `PipelineResource`), image - registry (via the image `PipelineResource`) and Kubernetes cluster (via the - cluster `PipelineResource`). - -Specify the `PipelineResources` in the PipelineRun using the `resources` section -in the `PipelineRun` spec, for example: - -```yaml -spec: - resources: - - name: source-repo - resourceRef: - name: skaffold-git - - name: web-image - resourceRef: - name: skaffold-image-leeroy-web - - name: app-image - resourceRef: - name: skaffold-image-leeroy-app -``` - -Creation of a `PipelineRun` will trigger the creation of -[`TaskRuns`](#running-a-task) for each `Task` in your pipeline. - -See [the example PipelineRun](../examples/runs/pipeline-run.yaml). - -### Using a ServiceAccount - -In order to access to private resources, you may need to provide a -`ServiceAccount` to the build-pipeline objects. - -```yaml -apiVersion: pipeline.knative.dev/v1alpha1 -kind: Pipeline -metadata: - name: demo-pipeline - namespace: default -spec: - serviceAccount: test-build-robot-git-ssh - tasks: - - name: build-skaffold-web - taskRef: - name: build-push - kind: ClusterTask - params: .... 
-``` - -Where `serviceAccount: test-build-robot-git-ssh` references to the following -`ServiceAccount`: - -```yaml -apiVersion: v1 -kind: ServiceAccount -metadata: - name: test-build-robot-git-ssh -secrets: - - name: test-git-ssh -``` - -### Cancelling a PipelineRun - -In order to cancel a running pipeline (`PipelineRun`), you need to update its -spec to mark it as cancelled. Related `TaskRun` instances will be marked as -cancelled and running Pods will be deleted. - -```yaml -apiVersion: pipeline.knative.dev/v1alpha1 -kind: PipelineRun -metadata: - name: go-example-git -spec: - # […] - status: "PipelineRunCancelled" -``` - -## Running a Task - -### TaskRun with references - -To run a `Task`, create a new `TaskRun` which defines all inputs, outputs that -the `Task` needs to run. Below is an example where Task `read-task` is run by -creating `read-repo-run`. Task `read-task` has git input resource and TaskRun -`read-repo-run` includes reference to `go-example-git`. - -```yaml -apiVersion: pipeline.knative.dev/v1alpha1 -kind: TaskRun -metadata: - name: read-repo-run -spec: - taskRef: - name: read-task - trigger: - type: manual - inputs: - resources: - - name: workspace - resourceRef: - name: go-example-git ---- -apiVersion: pipeline.knative.dev/v1alpha1 -kind: PipelineResource -metadata: - name: go-example-git -spec: - type: git - params: - - name: url - value: https://github.com/pivotal-nader-ziada/gohelloworld ---- -apiVersion: pipeline.knative.dev/v1alpha1 -kind: Task -metadata: - name: read-task -spec: - inputs: - resources: - - name: workspace - type: git - steps: - - name: readme - image: ubuntu - command: - - /bin/bash - args: - - "cat README.md" -``` - -### Taskrun with embedded definitions - -Another way of running a Task is embedding the TaskSpec in the taskRun yaml. -This can be useful for "one-shot" style runs, or debugging. -TaskRun resource can include either Task reference or TaskSpec but not both. -Below is an example where `build-push-task-run-2` includes `TaskSpec` and no -reference to Task. - -```yaml -apiVersion: pipeline.knative.dev/v1alpha1 -kind: PipelineResource -metadata: - name: go-example-git -spec: - type: git - params: - - name: url - value: https://github.com/pivotal-nader-ziada/gohelloworld ---- -apiVersion: pipeline.knative.dev/v1alpha1 -kind: TaskRun -metadata: - name: build-push-task-run-2 -spec: - trigger: - type: manual - inputs: - resources: - - name: workspace - resourceRef: - name: go-example-git - taskSpec: - inputs: - resources: - - name: workspace - type: git - steps: - - name: build-and-push - image: gcr.io/kaniko-project/executor - command: - - /kaniko/executor - args: - - --destination=gcr.io/my-project/gohelloworld -``` - -Input and output resources can also be embedded without creating Pipeline -Resources. TaskRun resource can include either a Pipeline Resource reference or -a Pipeline Resource Spec but not both. Below is an example where Git Pipeline -Resource Spec is provided as input for TaskRun `read-repo`. - -```yaml -apiVersion: pipeline.knative.dev/v1alpha1 -kind: TaskRun -metadata: - name: read-repo -spec: - taskRef: - name: read-task - trigger: - type: manual - inputs: - resources: - - name: workspace - resourceSpec: - type: git - params: - - name: url - value: https://github.com/pivotal-nader-ziada/gohelloworld -``` - -**Note**: TaskRun can embed both TaskSpec and resource spec at the same time. -See [example](../examples/run/task-run-resource-spec.yaml) TaskRun. 
The -`TaskRun` will also serve as a record of the history of the invocations of the -`Task`. - -For more sample taskruns check out [example folder](../examples/run/). - -### Using a ServiceAccount - -In order to access to private resources, you may need to provide a -`ServiceAccount` to the build-pipeline objects. - -```yaml -apiVersion: pipeline.knative.dev/v1alpha1 -kind: TaskRun -metadata: - name: build-push-task-run-2 -spec: - serviceAccount: test-build-robot-git-ssh - trigger: - type: manual - inputs: - resources: - - name: workspace - resourceRef: - name: go-example-git -``` - -Where `serviceAccount: test-build-robot-git-ssh` references to the following -`ServiceAccount`: - -```yaml -apiVersion: v1 -kind: ServiceAccount -metadata: - name: test-build-robot-git-ssh -secrets: - - name: test-git-ssh -``` - -### Cancelling a TaskRun - -In order to cancel a running task (`TaskRun`), you need to update its spec to -mark it as cancelled. Running Pods will be deleted. - -```yaml -apiVersion: pipeline.knative.dev/v1alpha1 -kind: TaskRun -metadata: - name: go-example-git -spec: - # […] - status: "TaskRunCancelled" -``` - -### Using custom paths - -When specifying input and output `PipelineResources`, you can optionally specify -`paths` for each resource. `paths` will be used by `TaskRun` as the resource's -new source paths i.e., copy the resource from specified list of paths. `TaskRun` -expects the folder and contents to be already present in specified paths. -`paths` feature could be used to provide extra files or altered version of -existing resource before execution of steps. - -Output resource includes name and reference to pipeline resource and optionally -`paths`. `paths` will be used by `TaskRun` as the resource's new destination -paths i.e., copy the resource entirely to specified paths. `TaskRun` will be -responsible for creating required directories and copying contents over. `paths` -feature could be used to inspect the results of taskrun after execution of -steps. - -`paths` feature for input and output resource is heavily used to pass same -version of resources across tasks in context of pipelinerun. - -In the following example, task and taskrun are defined with input resource, -output resource and step which builds war artifact. After execution of -taskrun(`volume-taskrun`), `custom` volume will have entire resource -`java-git-resource` (including the war artifact) copied to the destination path -`/custom/workspace/`. 
- -```yaml -apiVersion: pipeline.knative.dev/v1alpha1 -kind: Task -metadata: - name: volume-task - namespace: default -spec: - generation: 1 - inputs: - resources: - - name: workspace - type: git - steps: - - name: build-war - image: objectuser/run-java-jar #https://hub.docker.com/r/objectuser/run-java-jar/ - command: jar - args: ["-cvf", "projectname.war", "*"] - volumeMounts: - - name: custom-volume - mountPath: /custom -``` - -```yaml -apiVersion: pipeline.knative.dev/v1alpha1 -kind: TaskRun -metadata: - name: volume-taskrun - namespace: default -spec: - taskRef: - name: volume-task - inputs: - resources: - - name: workspace - resourceRef: - name: java-git-resource - outputs: - resources: - - name: workspace - paths: - - /custom/workspace/ - resourceRef: - name: java-git-resource - volumes: - - name: custom-volume - emptyDir: {} -``` - -## Creating PipelineResources - -The following `PipelineResources` are currently supported: - -- [Git resource](#git-resource) -- [Image resource](#image-resource) -- [Cluster resource](#cluster-resource) -- [Storage resource](#storage-resource) - -When used as inputs, these resources will be made available in a mounted -directory called `/workspace` at the path `/workspace/`. - -### Git Resource - -Git resource represents a [git](https://git-scm.com/) repository, that contains -the source code to be built by the pipeline. Adding the git resource as an input -to a Task will clone this repository and allow the Task to perform the required -actions on the contents of the repo. - -To create a git resource using the `PipelineResource` CRD: - -```yaml -apiVersion: pipeline.knative.dev/v1alpha1 -kind: PipelineResource -metadata: - name: wizzbang-git - namespace: default -spec: - type: git - params: - - name: url - value: https://github.com/wizzbangcorp/wizzbang.git - - name: revision - value: master -``` - -Params that can be added are the following: - -1. `url`: represents the location of the git repository, you can use this to - change the repo, e.g. [to use a fork](#using-a-fork) -1. `revision`: Git - [revision](https://git-scm.com/docs/gitrevisions#_specifying_revisions) - (branch, tag, commit SHA or ref) to clone. You can use this to control what - commit [or branch](#using-a-branch) is used. _If no revision is specified, - the resource will default to `latest` from `master`._ - -#### Using a fork - -The `Url` parameter can be used to point at any git repository, for example to -use a GitHub fork at master: - -```yaml -spec: - type: git - params: - - name: url - value: https://github.com/bobcatfish/wizzbang.git -``` - -#### Using a branch - -The `revision` can be any -[git commit-ish (revision)](https://git-scm.com/docs/gitrevisions#_specifying_revisions). -You can use this to create a git `PipelineResource` that points at a branch, for -example: - -```yaml -spec: - type: git - params: - - name: url - value: https://github.com/wizzbangcorp/wizzbang.git - - name: revision - value: some_awesome_feature -``` - -To point at a pull request, you can use -[the pull requests's branch](https://help.github.com/articles/checking-out-pull-requests-locally/): - -```yaml -spec: - type: git - params: - - name: url - value: https://github.com/wizzbangcorp/wizzbang.git - - name: revision - value: refs/pull/52525/head -``` - -### Image Resource - -An Image resource represents an image that lives in a remote repository. It is -usually used as [a `Task` `output`](concepts.md#task) for `Tasks` that build -images. 
This allows the same `Tasks` to be used to generically push to any -registry. - -Params that can be added are the following: - -1. `url`: The complete path to the image, including the registry and the image - tag -2. `digest`: The - [image digest](https://success.docker.com/article/images-tagging-vs-digests) - which uniquely identifies a particular build of an image with a particular - tag. - -For example: - -```yaml -apiVersion: pipeline.knative.dev/v1alpha1 -kind: PipelineResource -metadata: - name: kritis-resources-image - namespace: default -spec: - type: image - params: - - name: url - value: gcr.io/staging-images/kritis -``` - -### Cluster Resource - -Cluster Resource represents a Kubernetes cluster other than the current cluster -the pipeline CRD is running on. A common use case for this resource is to deploy -your application/function on different clusters. - -The resource will use the provided parameters to create a -[kubeconfig](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) -file that can be used by other steps in the pipeline Task to access the target -cluster. The kubeconfig will be placed in -`/workspace//kubeconfig` on your Task container - -The Cluster resource has the following parameters: - -- Name: The name of the Resource is also given to cluster, will be used in the - kubeconfig and also as part of the path to the kubeconfig file -- URL (required): Host url of the master node -- Username (required): the user with access to the cluster -- Password: to be used for clusters with basic auth -- Token: to be used for authentication, if present will be used ahead of the - password -- Insecure: to indicate server should be accessed without verifying the TLS - certificate. -- CAData (required): holds PEM-encoded bytes (typically read from a root - certificates bundle). - -Note: Since only one authentication technique is allowed per user, either a -token or a password should be provided, if both are provided, the password will -be ignored. - -The following example shows the syntax and structure of a Cluster Resource: - -```yaml -apiVersion: pipeline.knative.dev/v1alpha1 -kind: PipelineResource -metadata: - name: test-cluster -spec: - type: cluster - params: - - name: url - value: https://10.10.10.10 # url to the cluster master node - - name: cadata - value: LS0tLS1CRUdJTiBDRVJ..... - - name: token - value: ZXlKaGJHY2lPaU.... -``` - -For added security, you can add the sensitive information in a Kubernetes -[Secret](https://kubernetes.io/docs/concepts/configuration/secret/) and populate -the kubeconfig from them. 
- -For example, create a secret like the following example: - -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: target-cluster-secrets -data: - cadatakey: LS0tLS1CRUdJTiBDRVJUSUZ......tLQo= - tokenkey: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbX....M2ZiCg== -``` - -and then apply secrets to the cluster resource - -```yaml -apiVersion: pipeline.knative.dev/v1alpha1 -kind: PipelineResource -metadata: - name: test-cluster -spec: - type: cluster - params: - - name: url - value: https://10.10.10.10 - - name: username - value: admin - secrets: - - fieldName: token - secretKey: tokenKey - secretName: target-cluster-secrets - - fieldName: cadata - secretKey: cadataKey - secretName: target-cluster-secrets -``` - -Example usage of the cluster resource in a Task: - -```yaml -apiVersion: pipeline.knative.dev/v1alpha1 -kind: Task -metadata: - name: deploy-image - namespace: default -spec: - inputs: - resources: - - name: workspace - type: git - - name: dockerimage - type: image - - name: testcluster - type: cluster - steps: - - name: deploy - image: image-wtih-kubectl - command: ["bash"] - args: - - "-c" - - kubectl --kubeconfig - /workspace/${inputs.resources.testCluster.Name}/kubeconfig --context - ${inputs.resources.testCluster.Name} apply -f /workspace/service.yaml' -``` - -### Storage Resource - -Storage resource represents blob storage, that contains either an object or -directory. Adding the storage resource as an input to a Task will download the -blob and allow the Task to perform the required actions on the contents of the -blob. Blob storage type -[Google Cloud Storage](https://cloud.google.com/storage/)(gcs) is supported as -of now. - -#### GCS Storage Resource - -GCS Storage resource points to -[Google Cloud Storage](https://cloud.google.com/storage/) blob. - -To create a GCS type of storage resource using the `PipelineResource` CRD: - -```yaml -apiVersion: pipeline.knative.dev/v1alpha1 -kind: PipelineResource -metadata: - name: wizzbang-storage - namespace: default -spec: - type: storage - params: - - name: type - value: gcs - - name: location - value: gs://some-bucket -``` - -Params that can be added are the following: - -1. `location`: represents the location of the blob storage. -2. `type`: represents the type of blob storage. Currently there is - implementation for only `gcs`. -3. `dir`: represents whether the blob storage is a directory or not. By default - storage artifact is considered not a directory. - - If artifact is a directory then `-r`(recursive) flag is used to copy all - files under source directory to GCS bucket. Eg: - `gsutil cp -r source_dir gs://some-bucket` - - If artifact is a single file like zip, tar files then copy will be only 1 - level deep(no recursive). It will not trigger copy of sub directories in - source directory. Eg: `gsutil cp source.tar gs://some-bucket.tar`. - -Private buckets can also be configured as storage resources. To access GCS -private buckets, service accounts are required with correct permissions. -The `secrets` field on the storage resource is used for configuring this -information. -Below is an example on how to create a storage resource with service account. - -1. Refer to - [official documentation](https://cloud.google.com/compute/docs/access/service-accounts) - on how to create service accounts and configuring IAM permissions to access - bucket. -2. Create a Kubernetes secret from downloaded service account json key - - ```bash - $ kubectl create secret generic bucket-sa --from-file=./service_account.json - ``` - -3. 
To access GCS private bucket environment variable - [`GOOGLE_APPLICATION_CREDENTIALS`](https://cloud.google.com/docs/authentication/production) - should be set so apply above created secret to the GCS storage resource under - `fieldName` key. - - ```yaml - apiVersion: pipeline.knative.dev/v1alpha1 - kind: PipelineResource - metadata: - name: wizzbang-storage - namespace: default - spec: - type: storage - params: - - name: type - value: gcs - - name: location - value: gs://some-private-bucket - - name: dir - value: "directory" - secrets: - - fieldName: GOOGLE_APPLICATION_CREDENTIALS - secretName: bucket-sa - secretKey: service_account.json - ``` - -## Timing Out Pipelines and Tasks - -If you want to ensure that your `Pipeline` or `Task` will be stopped if it runs -past a certain duration, you can use the `Timeout` field on either `Pipeline` -or `Task`. In both cases, add the following to the `spec`: - -```yaml -spec: - timeout: 5m -``` - -## Troubleshooting - -All objects created by the build-pipeline controller show the lineage of where -that object came from through labels, all the way down to the individual build. - -There are a common set of labels that are set on objects. For `TaskRun` objects, -it will receive two labels: - -- `pipeline.knative.dev/pipeline`, which will be set to the name of the owning - pipeline -- `pipeline.knative.dev/pipelineRun`, which will be set to the name of the - PipelineRun - -When the underlying `Build` is created, it will receive each of the `pipeline` -and `pipelineRun` labels, as well as `pipeline.knative.dev/taskRun` which will -contain the `TaskRun` which caused the `Build` to be created. - -In the end, this allows you to easily find the `Builds` and `TaskRuns` that are -associated with a given pipeline. - -For example, to find all `Builds` created by a `Pipeline` named "build-image", -you could use the following command: - -```shell -kubectl get builds --all-namespaces -l pipeline.knative.dev/pipeline=build-image -``` diff --git a/examples/README.md b/examples/README.md index badf6b5f1a3..4a16f63b45f 100644 --- a/examples/README.md +++ b/examples/README.md @@ -47,7 +47,7 @@ of the `default` namespace with #### Simple Tasks -The [Tasks](../docs/Concepts.md#task) used by the simple examples are: +The [Tasks](../docs/tasks.md) used by the simple examples are: - [build-task.yaml](build-task.yaml): Builds an image via [kaniko](https://github.com/GoogleContainerTools/kaniko) and pushes it to @@ -58,8 +58,8 @@ The [Tasks](../docs/Concepts.md#task) used by the simple examples are: #### Simple Runs The [run](./run/) directory contains an example -[TaskRun](../docs/Concepts.md#taskrun) and an example -[PipelineRun](../docs/Concepts.md#pipelinerun): +[TaskRun](../docs/taskruns.md) and an example +[PipelineRun](../docs/pipelineruns.md): - [task-run.yaml](./run/task-run.yaml) shows an example of how to manually run the `build-push` task @@ -77,13 +77,13 @@ demonstrates how the outputs of a `Task` can be given as inputs to the next 1. Running a `Task` that writes to a `PipelineResource` 2. Running a `Task` that reads the written value from the `PipelineResource` -The [`Output`](../docs/Concepts.md#outputs) of the first `Task` is given as an -[`Input`](../docs/Concepts.md#inputs) to the next `Task` thanks to the -[`from`](../docs/using.md#from) clause. +The [`Output`](../docs/tasks.md#outputs) of the first `Task` is given as an +[`Input`](../docs/tasks.md#inputs) to the next `Task` thanks to the +[`from`](../docs/pipelines.md#from) clause. 
#### Output Tasks -The two [Tasks](../docs/Concepts.md#task) used by the output Pipeline are in +The two [Tasks](../docs/tasks.md) used by the output Pipeline are in [output-tasks.yaml](output-tasks.yaml): - `create-file`: Writes "some stuff" to a predefined path in the `workspace` @@ -92,12 +92,12 @@ The two [Tasks](../docs/Concepts.md#task) used by the output Pipeline are in `workspace` `git` `PipelineResource` These work together when combined in a `Pipeline` because the git resource used -as an [`Output`](../docs/Concepts.md#outputs) of the `create-file` `Task` can be -an [`Input`](../docs/Concepts.md#inputs) of the `check-stuff-file-exists` +as an [`Output`](../docs/tasks.md#outputs) of the `create-file` `Task` can be +an [`Input`](../docs/tasks.md#inputs) of the `check-stuff-file-exists` `Task`. #### Output Runs The [run](./run/) directory contains an example -[PipelineRun](../docs/Concepts.md#pipelinerun) that invokes this `Pipeline` in +[PipelineRun](../docs/pipelineruns.md) that invokes this `Pipeline` in [`run/output-pipeline-run.yaml`](./run/output-pipeline-run.yaml). diff --git a/hack/release.md b/hack/release.md index 9a4460e06a5..a5c86c9c681 100644 --- a/hack/release.md +++ b/hack/release.md @@ -50,6 +50,28 @@ Examples: _Note: only Knative admins can create versioned releases._ +Creating and releasing a versioned release has two steps: + +1. [Update the published docs](#update-the-published-docs) +2. [Cut the release](#cut-the-release) + +### Update the published docs + +The official docs for the latest release of `build-pipelines` live in +[the knative docs repo](https://github.com/knative/docs) at +[`knative/docs/pipeline`](https://github.com/knative/docs/tree/master/pipeline). + +These docs correspond to the most recent release of `build-pipeline`. There is +a living version of these docs in this repo, which correspond to the functionality +at `HEAD`. Part of creating a release involves copying the living version of these +files to `knative/docs`. + +Specifically copy all of the docs in the first level `docs/` folder (i.e. not a +recursive copy) to [`knative/docs/pipeline`](https://github.com/knative/docs/tree/master/pipeline) +and open a PR for review there. + +### Cut the release + To specify a versioned release to be cut, you must use the `--version` flag. Versioned releases are usually built against a branch in the Knative Build Pipeline repository, specified by the `--branch` flag. diff --git a/docs/images/pipe.png b/pipe.png similarity index 100% rename from docs/images/pipe.png rename to pipe.png