diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index c185badfbbd..d6d6611b503 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -46,8 +46,9 @@ This means that most PRs should include both:
 
 Our contributors are made up of:
 
-* A core group of OWNERS (defined in [OWNERS](OWNERS)) who can [approve PRs](#getting-sign-off)
-* Any and all other contributors!
+- A core group of OWNERS (defined in [OWNERS](OWNERS)) who can
+  [approve PRs](#getting-sign-off)
+- Any and all other contributors!
 
 If you are interested in becoming an OWNER, take a look at the
 [approver requirements](https://github.com/knative/docs/blob/master/community/ROLES.md#approver)
@@ -55,28 +56,32 @@ and follow up with an existing OWNER [on slack](https://knative.slack.com/)).
 
 ### OWNER review process
 
-Reviewers will be auto-assigned by [Prow](#pull-request-process) from the [OWNERS file](OWNERS),
-which acts as suggestions for which `OWNERS` should review the PR. Your review requests can
-be viewed at [https://github.com/pulls/review-requested](https://github.com/pulls/review-requested).
+Reviewers will be auto-assigned by [Prow](#pull-request-process) from the
+[OWNERS file](OWNERS), which acts as suggestions for which `OWNERS` should
+review the PR. Your review requests can be viewed at
+[https://github.com/pulls/review-requested](https://github.com/pulls/review-requested).
 
-`OWNERS` who prepared to give the final `/approve` and `/lgtm` for a PR should use the `assign`
-button to indicate they are willing to own the review of that PR.
+`OWNERS` who are prepared to give the final `/approve` and `/lgtm` for a PR
+should use the `assign` button to indicate they are willing to own the review
+of that PR.
 
 ### Project stuff
 
-As the project progresses we define [milestones](https://help.github.com/articles/about-milestones/)
-to indicate what functionality the OWNERS are focusing on.
+As the project progresses we define +[milestones](https://help.github.com/articles/about-milestones/) to indicate +what functionality the OWNERS are focusing on. -If you are interested in contributing but not an OWNER, feel free to take on something from the -milestone but [be aware of the contributor SLO](#contributor-slo). +If you are interested in contributing but not an OWNER, feel free to take on +something from the milestone but +[be aware of the contributor SLO](#contributor-slo). You can see more details (including a burndown, issues in epics, etc.) on our [zenhub board](https://app.zenhub.com/workspaces/pipelines-5bc61a054b5806bc2bed4fb2/boards?repos=146641150). To see this board, you must: - Ask [an OWNER](OWNER) via [slack](https://knative.slack.com) for an invitation -- Add [the zenhub browser extension](https://www.zenhub.com/extension) to see new info via GitHub - (or just use zenhub.com directly) +- Add [the zenhub browser extension](https://www.zenhub.com/extension) to see + new info via GitHub (or just use zenhub.com directly) ## Pull Request Process @@ -202,12 +207,13 @@ like to work on it** and we will consider it assigned to you. If you declare your intention to work on an issue: -- If it becomes urgent that the issue be resolved (e.g. critical bug or nearing the - end of [a milestone](#project-stuff)), someone else may take over (apologies if this happens!!) -- If you do not respond to queries on an issue within about 3 days and someone else - wants to work on your issue, we will assume you are no longer interested in working - on it and it is fair game to assign to someone else (no worries at all if this - happens, we don't mind!) +- If it becomes urgent that the issue be resolved (e.g. critical bug or nearing + the end of [a milestone](#project-stuff)), someone else may take over + (apologies if this happens!!) 
+- If you do not respond to queries on an issue within about 3 days and someone
+  else wants to work on your issue, we will assume you are no longer interested
+  in working on it and it is fair game to assign to someone else (no worries at
+  all if this happens, we don't mind!)
 
 ## Roadmap
 
@@ -215,8 +221,8 @@ The project's roadmap for 2019 is published [here](./roadmap-2019.md).
 
 ## API compatibility policy
 
-The API compatibility policy (i.e. the policy for making backwards incompatible API changes)
-can be found [here](api_compatibility_policy.md).
+The API compatibility policy (i.e. the policy for making backwards incompatible
+API changes) can be found [here](api_compatibility_policy.md).
 
 ## Contact
 
diff --git a/DEVELOPMENT.md b/DEVELOPMENT.md
index 0e96d58a3c9..40c97b0cc2b 100644
--- a/DEVELOPMENT.md
+++ b/DEVELOPMENT.md
@@ -52,12 +52,15 @@ You must install these tools:
    For development.
 1. [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/): For
    interacting with your kube cluster
-
-Your [`$GOPATH`] setting is critical for `ko apply` to function properly: a successful run will typically involve building pushing images instead of only configuring Kubernetes resources.
+
+Your [`$GOPATH`] setting is critical for `ko apply` to function properly: a
+successful run will typically involve building and pushing images instead of
+only configuring Kubernetes resources.
 
 ## Kubernetes cluster
 
-Docker for Desktop using an edge version has been proven to work for both developing and running Knative. Your Kubernetes version must be 1.11 or later.
+Docker for Desktop using an edge version has been proven to work for both
+developing and running Knative. Your Kubernetes version must be 1.11 or later.
 
 To setup a cluster with GKE:
 
@@ -82,7 +85,9 @@ environment variables (we recommend adding them to your `.bashrc`):
 1. `$GOPATH/bin` on `PATH`: This is so that tooling installed via `go get` will
    work properly.
 1. 
`KO_DOCKER_REPO`: The docker repository to which developer images should be - pushed (e.g. `gcr.io/[gcloud-project]`). You can also run a local registry and set `KO_DOCKER_REPO` to reference the registry (e.g. at `localhost:5000/myknativeimages`). + pushed (e.g. `gcr.io/[gcloud-project]`). You can also run a local registry + and set `KO_DOCKER_REPO` to reference the registry (e.g. at + `localhost:5000/myknativeimages`). `.bashrc` example: diff --git a/README.md b/README.md index 1cc5bb02cbe..3cbf928e90a 100644 --- a/README.md +++ b/README.md @@ -30,8 +30,8 @@ Pipelines are **Typed**: - [Read about it](/docs/README.md) - Look at [some examples](/examples) -_See [our API compatibility policy](api_compatibility_policy.md) for info on -the stability level of the API._ +_See [our API compatibility policy](api_compatibility_policy.md) for info on the +stability level of the API._ ## Want to contribute? diff --git a/api_compatibility_policy.md b/api_compatibility_policy.md index a7dbcc31e0f..d24095b99b3 100644 --- a/api_compatibility_policy.md +++ b/api_compatibility_policy.md @@ -1,25 +1,28 @@ # API compatibility policy -This document proposes a policy regarding making API updates to the CRDs in this repo. -Users should be able to build on the APIs in these projects with a clear idea of what -they can rely on and what should be considered in progress and therefore likely to change. +This document proposes a policy regarding making API updates to the CRDs in this +repo. Users should be able to build on the APIs in these projects with a clear +idea of what they can rely on and what should be considered in progress and +therefore likely to change. 
For these purposes the CRDs are divided into three groups: -* [`Build` and `BuildTemplate`] - from https://github.com/knative/build -* [`TaskRun`, `Task`, and `ClusterTask`] - "more stable" -* [`PipelineRun`, `Pipeline` and `PipelineResource`] - "less stable" +- [`Build` and `BuildTemplate`] - from https://github.com/knative/build +- [`TaskRun`, `Task`, and `ClusterTask`] - "more stable" +- [`PipelineRun`, `Pipeline` and `PipelineResource`] - "less stable" -The use of `alpha`, `beta` and `GA` in this document is meant to correspond roughly to +The use of `alpha`, `beta` and `GA` in this document is meant to correspond +roughly to [the kubernetes API deprecation policies](https://kubernetes.io/docs/reference/using-api/deprecation-policy/#deprecating-a-flag-or-cli). ## What does compatibility mean here? -This policy is about changes to the APIs of the [CRDs](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/), +This policy is about changes to the APIs of the +[CRDs](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/), aka the spec of the CRD objects themselves. -A backwards incompatible change would be a change that requires a user to update existing -instances of these CRDs in clusters where they are deployed (after +A backwards incompatible change would be a change that requires a user to update +existing instances of these CRDs in clusters where they are deployed (after [automatic conversion is available](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning/#webhook-conversion) this process may become less painful). @@ -31,55 +34,65 @@ The current process would look something like: 4. Update the backups with the new spec 5. Deploy the updated backups -_This policy does not yet cover other functionality which could be considered part of the API, -but isn’t part of the CRD definition (e.g. a contract re. 
files expected to be written in -certain locations by a resulting pod)._ +_This policy does not yet cover other functionality which could be considered +part of the API, but isn’t part of the CRD definition (e.g. a contract re. files +expected to be written in certain locations by a resulting pod)._ ## `Build` and `BuildTemplate` -The CRD types [`Build`](https://github.com/knative/docs/blob/master/build/builds.md) and +The CRD types +[`Build`](https://github.com/knative/docs/blob/master/build/builds.md) and [`BuildTemplate`](https://github.com/knative/docs/blob/master/build/build-templates.md) should be considered frozen at beta and only additive changes should be allowed. -Support will continue for the `Build` type for the foreseeable future, particularly to support -embedding of Build resources within [`knative/serving`](https://github.com/knative/serving) objects. +Support will continue for the `Build` type for the foreseeable future, +particularly to support embedding of Build resources within +[`knative/serving`](https://github.com/knative/serving) objects. ## `TaskRun`, `Task`, and `ClusterTask` -The CRD types [`Task`](https://github.com/knative/build-pipeline/blob/master/docs/Concepts.md#task), +The CRD types +[`Task`](https://github.com/knative/build-pipeline/blob/master/docs/Concepts.md#task), [`ClusterTask`](https://github.com/knative/build-pipeline/blob/master/docs/Concepts.md#clustertask), -and [`TaskRun`](https://github.com/knative/build-pipeline/blob/master/docs/Concepts.md#taskrun) -should be considered `alpha`, however these types are more stable than `Pipeline`, `PipelineRun`, -and `PipelineResource`. +and +[`TaskRun`](https://github.com/knative/build-pipeline/blob/master/docs/Concepts.md#taskrun) +should be considered `alpha`, however these types are more stable than +`Pipeline`, `PipelineRun`, and `PipelineResource`. 
### Possibly `beta` in Knative 0.6 -The status of these types will be revisited ~2 releases (i.e. Knative 0.6) and see if they can be -promoted to `beta`. +The status of these types will be revisited ~2 releases (i.e. Knative 0.6) and +see if they can be promoted to `beta`. -Once these types are promoted to `beta`, any backwards incompatible changes must be introduced in -a backwards compatible manner first, with a deprecation warning in the release notes, for at least one -full release before the backward incompatible change is made. +Once these types are promoted to `beta`, any backwards incompatible changes must +be introduced in a backwards compatible manner first, with a deprecation warning +in the release notes, for at least one full release before the backward +incompatible change is made. There are two reasons for this: -- `Task` and `TaskRun` are considered upgraded versions of `Build`, meaning that the APIs benefit - from a significant amount of user feedback and iteration -- Going forward users should use `TaskRun` and `Task` instead of `Build` and `BuildTemplate`, - those users should not expect the API to be changed on them without warning -The exception to this is that `PipelineResource` definitions can be embedded in `TaskRuns`, -and since the `PipelineResource` definitions are considered less stable, changes to the spec of -the embedded `PipelineResource` can be introduced between releases. 
+- `Task` and `TaskRun` are considered upgraded versions of `Build`, meaning that + the APIs benefit from a significant amount of user feedback and iteration +- Going forward users should use `TaskRun` and `Task` instead of `Build` and + `BuildTemplate`, those users should not expect the API to be changed on them + without warning + +The exception to this is that `PipelineResource` definitions can be embedded in +`TaskRuns`, and since the `PipelineResource` definitions are considered less +stable, changes to the spec of the embedded `PipelineResource` can be introduced +between releases. ## `PipelineRun`, `Pipeline` and `PipelineResource` -The CRD types [`Pipeline`](https://github.com/knative/build-pipeline/blob/master/docs/Concepts.md#pipeline), +The CRD types +[`Pipeline`](https://github.com/knative/build-pipeline/blob/master/docs/Concepts.md#pipeline), [`PipelineRun`](https://github.com/knative/build-pipeline/blob/master/docs/Concepts.md#pipelinerun) -and [`PipelineResource`](https://github.com/knative/build-pipeline/blob/master/docs/Concepts.md#pipelineresources) -should be considered `alpha`, i.e. the API should be considered unstable. Backwards incompatible -changes can be introduced between releases, however they must include a backwards incompatibility -warning in the release notes. - -The reason for this is not yet having enough user feedback to commit to the APIs as -they currently exist. Once significant user input has been given into the API design, we can -upgrade these CRDs to `beta`. +and +[`PipelineResource`](https://github.com/knative/build-pipeline/blob/master/docs/Concepts.md#pipelineresources) +should be considered `alpha`, i.e. the API should be considered unstable. +Backwards incompatible changes can be introduced between releases, however they +must include a backwards incompatibility warning in the release notes. 
+
+The reason for this is that we do not yet have enough user feedback to commit
+to the APIs as they currently exist. Once significant user input has informed
+the API design, we can upgrade these CRDs to `beta`.
diff --git a/docs/README.md b/docs/README.md
index 4b7b08f39c7..be4d10be3ae 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -20,8 +20,8 @@ High level details of this design:
   triggered by events or by manually creating [PipelineRuns](pipelineruns.md)
 - [Tasks](tasks.md) can exist and be invoked completely independently of
   [Pipelines](pipelines.md); they are highly cohesive and loosely coupled
-- [Tasks](tasks.md) can depend on artifacts, output and parameters created by other
-  tasks.
+- [Tasks](tasks.md) can depend on artifacts, output and parameters created by
+  other tasks.
 - [Tasks](tasks.md) can be invoked via [TaskRuns](taskruns.md)
 - [PipelineResources](#pipelineresources) are the artifacts used as inputs and
   outputs of Tasks.
@@ -48,8 +48,9 @@ components:
 
 ## Try it out
 
-* Follow along with [the tutorial](tutorial.md)
-* Look at [the examples](https://github.com/knative/build-pipeline/tree/master/examples)
+- Follow along with [the tutorial](tutorial.md)
+- Look at
+  [the examples](https://github.com/knative/build-pipeline/tree/master/examples)
 
 ## Related info
 
diff --git a/docs/auth.md b/docs/auth.md
index 1d6467402aa..36b4b6f597d 100644
--- a/docs/auth.md
+++ b/docs/auth.md
@@ -41,10 +41,11 @@ aggregates them into their respective files in `$HOME`.
      # This is non-standard, but its use is encouraged to make this more secure.
      known_hosts:
    ```
-  `pipeline.knative.dev/git-0` in the example above specifies which web address
-  these credentials belong to. See
-  [Guiding Credential Selection](#guiding-credential-selection) below for
-  more information.
+
+  `pipeline.knative.dev/git-0` in the example above specifies which web
+  address these credentials belong to. 
See + [Guiding Credential Selection](#guiding-credential-selection) below for more + information. 1. Generate the value of `ssh-privatekey` by copying the value of (for example) `cat ~/.ssh/id_rsa | base64`. @@ -52,7 +53,8 @@ aggregates them into their respective files in `$HOME`. 1. Copy the value of `cat ~/.ssh/known_hosts | base64` to the `known_hosts` field. -1. Next, direct a `ServiceAccount` to use this `Secret` (in `serviceaccount.yaml`): +1. Next, direct a `ServiceAccount` to use this `Secret` (in + `serviceaccount.yaml`): ```yaml apiVersion: v1 @@ -65,16 +67,16 @@ aggregates them into their respective files in `$HOME`. 1. Then use that `ServiceAccount` in your `TaskRun` (in `run.yaml`): - ```yaml - apiVersion: pipeline.knative.dev/v1alpha1 - kind: TaskRun - metadata: - name: build-push-task-run-2 - spec: - serviceAccount: buid-bot - taskRef: - name: build-push - ``` +```yaml +apiVersion: pipeline.knative.dev/v1alpha1 +kind: TaskRun +metadata: + name: build-push-task-run-2 +spec: + serviceAccount: buid-bot + taskRef: + name: build-push +``` 1. Or use that `ServiceAccount` in your `PipelineRun` (in `run.yaml`): @@ -117,12 +119,14 @@ to authenticate when retrieving any `PipelineResources`. username: password: ``` - `pipeline.knative.dev/git-0` in the example above specifies which web address - these credentials belong to. See - [Guiding Credential Selection](#guiding-credential-selection) below for - more information. -1. Next, direct a `ServiceAccount` to use this `Secret` (in `serviceaccount.yaml`): + `pipeline.knative.dev/git-0` in the example above specifies which web + address these credentials belong to. See + [Guiding Credential Selection](#guiding-credential-selection) below for more + information. + +1. Next, direct a `ServiceAccount` to use this `Secret` (in + `serviceaccount.yaml`): ```yaml apiVersion: v1 @@ -135,16 +139,16 @@ to authenticate when retrieving any `PipelineResources`. 1. 
Then use that `ServiceAccount` in your `TaskRun` (in `run.yaml`): - ```yaml - apiVersion: pipeline.knative.dev/v1alpha1 - kind: TaskRun - metadata: - name: build-push-task-run-2 - spec: - serviceAccount: buid-bot - taskRef: - name: build-push - ``` +```yaml +apiVersion: pipeline.knative.dev/v1alpha1 +kind: TaskRun +metadata: + name: build-push-task-run-2 +spec: + serviceAccount: buid-bot + taskRef: + name: build-push +``` 1. Or use that `ServiceAccount` in your `PipelineRun` (in `run.yaml`): @@ -168,7 +172,8 @@ to authenticate when retrieving any `PipelineResources`. When this `Run` executes, before steps execute, a `~/.gitconfig` will be generated containing the credentials configured in the `Secret`, and these -credentials are then used to authenticate when retrieving any `PipelineResources`. +credentials are then used to authenticate when retrieving any +`PipelineResources`. ## Basic authentication (Docker) @@ -187,12 +192,14 @@ credentials are then used to authenticate when retrieving any `PipelineResources username: password: ``` + `pipeline.knative.dev/docker-0` in the example above specifies which web address these credentials belong to. See - [Guiding Credential Selection](#guiding-credential-selection) below for - more information. + [Guiding Credential Selection](#guiding-credential-selection) below for more + information. -1. Next, direct a `ServiceAccount` to use this `Secret` (in `serviceaccount.yaml`): +1. Next, direct a `ServiceAccount` to use this `Secret` (in + `serviceaccount.yaml`): ```yaml apiVersion: v1 @@ -205,16 +212,16 @@ credentials are then used to authenticate when retrieving any `PipelineResources 1. 
Then use that `ServiceAccount` in your `TaskRun` (in `run.yaml`): - ```yaml - apiVersion: pipeline.knative.dev/v1alpha1 - kind: TaskRun - metadata: - name: build-push-task-run-2 - spec: - serviceAccount: buid-bot - taskRef: - name: build-push - ``` +```yaml +apiVersion: pipeline.knative.dev/v1alpha1 +kind: TaskRun +metadata: + name: build-push-task-run-2 +spec: + serviceAccount: buid-bot + taskRef: + name: build-push +``` 1. Or use that `ServiceAccount` in your `PipelineRun` (in `run.yaml`): @@ -236,9 +243,10 @@ credentials are then used to authenticate when retrieving any `PipelineResources kubectl apply --filename secret.yaml serviceaccount.yaml run.yaml ``` -When the `Run` executes, before steps execute, a `~/.docker/config.json` will -be generated containing the credentials configured in the `Secret`, and these -credentials are then used to authenticate when retrieving any `PipelineResources`. +When the `Run` executes, before steps execute, a `~/.docker/config.json` will be +generated containing the credentials configured in the `Secret`, and these +credentials are then used to authenticate when retrieving any +`PipelineResources`. ### Guiding credential selection @@ -285,8 +293,8 @@ This describes an SSH key secret that should be used to access Git repos at github.com only. Credential annotation keys must begin with `pipeline.knative.dev/docker-` or -`pipeline.knative.dev/git-`, and the value describes the URL of the host with which -to use the credential. +`pipeline.knative.dev/git-`, and the value describes the URL of the host with +which to use the credential. ## Implementation details diff --git a/docs/developers/README.md b/docs/developers/README.md index d1394fa96fb..897974b4a46 100644 --- a/docs/developers/README.md +++ b/docs/developers/README.md @@ -16,18 +16,21 @@ on path `/pvc` by PipelineRun. adds a step to copy from PVC to directory path `/pvc/previous_task/resource_name`. 
-Another alternatives is to use a GCS storage bucket to share the artifacts. This can
-be configured using a ConfigMap with the name `config-artifact-bucket` with the following attributes:
+Another alternative is to use a GCS storage bucket to share the artifacts. This
+can be configured using a ConfigMap with the name `config-artifact-bucket` with
+the following attributes:
 
 - location: the address of the bucket (for example gs://mybucket)
-- bucket.service.account.secret.name: the name of the secret that will contain the credentials for the service account
-  with access to the bucket
-- bucket.service.account.secret.key: the key in the secret with the required service account json.
-  The bucket is recommended to be configured with a retention policy after which files will be deleted.
-
-Both options provide the same functionality to the pipeline. The choice is based on the infrastructure used,
-for example in some Kubernetes platforms, the creation of a persistent volume could be slower than
-uploading/downloading files to a bucket, or if the the cluster is running in multiple zones, the access to
+- bucket.service.account.secret.name: the name of the secret that will contain
+  the credentials for the service account with access to the bucket
+- bucket.service.account.secret.key: the key in the secret with the required
+  service account json. The bucket is recommended to be configured with a
+  retention policy after which files will be deleted.
+
+Both options provide the same functionality to the pipeline. The choice is based
+on the infrastructure used, for example in some Kubernetes platforms, the
+creation of a persistent volume could be slower than uploading/downloading files
+to a bucket, or if the cluster is running in multiple zones, the access to
 the persistent volume can fail.
 
 ### How are inputs handled? 
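To make the `config-artifact-bucket` attributes listed above concrete, the ConfigMap might look like the sketch below. This is an illustration assembled from the attribute names in the text, not a file from this repo; the bucket address, secret name, and secret key are placeholders.

```yaml
# Hypothetical ConfigMap for GCS artifact storage; values are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-artifact-bucket
data:
  # The address of the bucket.
  location: gs://mybucket
  # The secret holding credentials for a service account with bucket access.
  bucket.service.account.secret.name: gcs-service-account
  # The key in that secret containing the service account JSON.
  bucket.service.account.secret.key: service_account.json
```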
diff --git a/docs/pipelineruns.md b/docs/pipelineruns.md index 7d7d30ec0d9..5916ca9bcb1 100644 --- a/docs/pipelineruns.md +++ b/docs/pipelineruns.md @@ -4,8 +4,8 @@ This document defines `PipelineRuns` and their capabilities. On its own, a [`Pipeline`](pipelines.md) declares what [`Tasks`](tasks.md) to run, and dependencies between [`Task`](tasks.md) inputs and outputs via -[`from`](pipelines.md#from). To execute the `Tasks` in the `Pipeline`, you -must create a `PipelineRun`. +[`from`](pipelines.md#from). To execute the `Tasks` in the `Pipeline`, you must +create a `PipelineRun`. Creation of a `PipelineRun` will trigger the creation of [`TaskRuns`](taskruns.md) for each `Task` in your pipeline. @@ -31,31 +31,33 @@ following fields: `PipelineRun` resource object, for example a `name`. - [`spec`][kubernetes-overview] - Specifies the configuration information for your `PipelineRun` resource object. - - `pipelineRef` or `taskSpec`- Specifies the [`Pipeline`](pipelines.md) you want - to run. - - `trigger` - Provides data about what created this `PipelineRun`. The only type - at this time is `manual`. + - `pipelineRef` or `taskSpec`- Specifies the [`Pipeline`](pipelines.md) you + want to run. + - `trigger` - Provides data about what created this `PipelineRun`. The only + type at this time is `manual`. - Optional: - - [`resources`](#resources) - Specifies which [`PipelineResources`](resources.md) - to use for this `PipelineRun`. - - [`serviceAccount`](#service-account) - Specifies a `ServiceAccount` - resource object that enables your build to run with the defined - authentication information. + + - [`resources`](#resources) - Specifies which + [`PipelineResources`](resources.md) to use for this `PipelineRun`. + - [`serviceAccount`](#service-account) - Specifies a `ServiceAccount` resource + object that enables your build to run with the defined authentication + information. - `timeout` - Specifies timeout after which the `PipelineRun` will fail. 
- - [`nodeSelector`] - a selector which must be true for the pod to fit on a node. - The selector which must match a node's labels for the pod to be scheduled on that node. - More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ - - [`affinity`] - the pod's scheduling constraints. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature - + - [`nodeSelector`] - a selector which must be true for the pod to fit on a + node. The selector which must match a node's labels for the pod to be + scheduled on that node. More info: + https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ + - [`affinity`] - the pod's scheduling constraints. More info: + https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature + [kubernetes-overview]: https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields - ### Resources When running a [`Pipeline`](pipelines.md), you will need to specify the -[`PipelineResources`](resources.md) to use with it. One `Pipeline` may -need to be run with different `PipelineResources` in cases such as: +[`PipelineResources`](resources.md) to use with it. One `Pipeline` may need to +be run with different `PipelineResources` in cases such as: - When triggering the run of a `Pipeline` against a pull request, the triggering system must specify the commitish of a git `PipelineResource` to use diff --git a/docs/pipelines.md b/docs/pipelines.md index ccd2534bc31..78e948d7984 100644 --- a/docs/pipelines.md +++ b/docs/pipelines.md @@ -23,20 +23,22 @@ following fields: - [`metadata`][kubernetes-overview] - Specifies data to uniquely identify the `Pipeline` resource object, for example a `name`. - [`spec`][kubernetes-overview] - Specifies the configuration information for - your `Pipeline` resource object. 
In order for a `Pipeline` to do anything, the
-  spec must include:
-  - [`tasks`](#pipeline-tasks) - Specifies which `Tasks` to run and how to run them
+  your `Pipeline` resource object. In order for a `Pipeline` to do anything,
+  the spec must include:
+  - [`tasks`](#pipeline-tasks) - Specifies which `Tasks` to run and how to run
+    them
   - Optional:
-    - [`resources`](#declared-resources) - Specifies which [`PipelineResources`](resources.md)
-      of which types the `Pipeline` will be using in its [Tasks](#pipeline-tasks)
+    - [`resources`](#declared-resources) - Specifies which
+      [`PipelineResources`](resources.md) of which types the `Pipeline` will be
+      using in its [Tasks](#pipeline-tasks)
 
 [kubernetes-overview]:
   https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields
 
 ### Declared resources
 
-In order for a `Pipeline` to interact with the outside world, it will probably need
-[`PipelineResources`](#creating-pipelineresources) which will be given to
+In order for a `Pipeline` to interact with the outside world, it will probably
+need [`PipelineResources`](#creating-pipelineresources) which will be given to
 `Tasks` as inputs and outputs.
 
 Your `Pipeline` must declare the `PipelineResources` it needs in a `resources`
@@ -57,26 +59,25 @@ spec:
 
 ### Parameters
 
 `Pipeline`s can declare input parameters that must be supplied to the `Pipeline`
-during a `PipelineRun`. Pipeline parameters can be used to replace template
+during a `PipelineRun`. Pipeline parameters can be used to replace template
 values in [`PipelineTask` parameters' values](#pipeline-tasks).
 
-Parameters name are limited to alpha-numeric characters, `-` and `_`
-and can only start with alpha characters and `_`. For example,
-`fooIs-Bar_` is a valid parameter name, `barIsBa$` or `0banana` are
-not.
+Parameter names are limited to alpha-numeric characters, `-` and `_` and can
+only start with alpha characters and `_`. 
For example, `fooIs-Bar_` is a valid +parameter name, `barIsBa$` or `0banana` are not. #### Usage -The following example shows how `Pipeline`s can be parameterized, and these +The following example shows how `Pipeline`s can be parameterized, and these parameters can be passed to the `Pipeline` from a `PipelineRun`. -Input parameters in the form of `${params.foo}` are replaced inside of -the [`PipelineTask` parameters' values](#pipeline-tasks) -(see also [templating](tasks.md#templating)). +Input parameters in the form of `${params.foo}` are replaced inside of the +[`PipelineTask` parameters' values](#pipeline-tasks) (see also +[templating](tasks.md#templating)). The following `Pipeline` declares an input parameter called 'context', and uses -it in the `PipelineTask`'s parameter. The `description` and `default` fields -for a parameter are optional, and if the `default` field is specified and this +it in the `PipelineTask`'s parameter. The `description` and `default` fields for +a parameter are optional, and if the `default` field is specified and this `Pipeline` is used by a `PipelineRun` without specifying a value for 'context', the `default` value will be used. @@ -95,10 +96,10 @@ spec: taskRef: name: build-push params: - - name: pathToDockerFile - value: Dockerfile - - name: pathToContext - value: "${params.context}" + - name: pathToDockerFile + value: Dockerfile + - name: pathToContext + value: "${params.context}" ``` The following `PipelineRun` supplies a value for `context`: @@ -118,18 +119,19 @@ spec: ### Pipeline Tasks -A `Pipeline` will execute a sequence of [`Tasks`](tasks.md) in the order they are declared in. -At a minimum, this declaration must include a reference to the `Task`: +A `Pipeline` will execute a sequence of [`Tasks`](tasks.md) in the order they +are declared in. 
At a minimum, this declaration must include a reference to the +`Task`: ```yaml - tasks: - - name: build-the-image - taskRef: - name: build-push +tasks: + - name: build-the-image + taskRef: + name: build-push ``` -[Declared `PipelineResources`](#declared-resources) can be given to `Task`s in the `Pipeline` as -inputs and outputs, for example: +[Declared `PipelineResources`](#declared-resources) can be given to `Task`s in +the `Pipeline` as inputs and outputs, for example: ```yaml spec: @@ -151,14 +153,14 @@ spec: ```yaml spec: tasks: - - name: build-skaffold-web - taskRef: - name: build-push - params: - - name: pathToDockerFile - value: Dockerfile - - name: pathToContext - value: /workspace/examples/microservices/leeroy-web + - name: build-skaffold-web + taskRef: + name: build-push + params: + - name: pathToDockerFile + value: Dockerfile + - name: pathToContext + value: /workspace/examples/microservices/leeroy-web ``` #### from @@ -202,7 +204,8 @@ also be declared as an output of `build-app`. ## Examples -For complete examples, see [the examples folder](https://github.com/knative/build-pipeline/tree/master/examples). +For complete examples, see +[the examples folder](https://github.com/knative/build-pipeline/tree/master/examples). --- diff --git a/docs/resources.md b/docs/resources.md index b3cf33cacc8..9f10998cf0a 100644 --- a/docs/resources.md +++ b/docs/resources.md @@ -7,7 +7,8 @@ A `Task` can have multiple inputs and outputs. For example: -- A `Task`'s input could be a GitHub source which contains your application code. +- A `Task`'s input could be a GitHub source which contains your application + code. - A `Task`'s output can be your application container image which can be then deployed in a cluster. - A `Task`'s output can be a jar file to be uploaded to a storage bucket. @@ -28,14 +29,16 @@ following fields: - Required: - [`apiVersion`][kubernetes-overview] - Specifies the API version, for example `pipeline.knative.dev/v1alpha1`. 
- [`kind`][kubernetes-overview] - Specify the `PipelineResource` resource object. + - [`kind`][kubernetes-overview] - Specify the `PipelineResource` resource + object. - [`metadata`][kubernetes-overview] - Specifies data to uniquely identify the `PipelineResource` object, for example a `name`. - [`spec`][kubernetes-overview] - Specifies the configuration information for - your `PipelineResource` resource object. + your `PipelineResource` resource object. - [`type`](#resource-types) - Specifies the `type` of the `PipelineResource` - Optional: - - [`params`](#resource-types) - Parameters which are specific to each type of `PipelineResource` + - [`params`](#resource-types) - Parameters which are specific to each type of + `PipelineResource` [kubernetes-overview]: https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields @@ -318,10 +321,10 @@ Params that can be added are the following: source directory. Eg: `gsutil cp source.tar gs://some-bucket.tar`. Private buckets can also be configured as storage resources. To access GCS -private buckets, service accounts are required with correct permissions. -The `secrets` field on the storage resource is used for configuring this -information. -Below is an example on how to create a storage resource with service account. +private buckets, service accounts with the correct permissions are required. The +`secrets` field on the storage resource is used for configuring this +information. Below is an example of how to create a storage resource with a +service account. 1. Refer to [official documentation](https://cloud.google.com/compute/docs/access/service-accounts) diff --git a/docs/taskruns.md index ade7bae45e8..8e6d2293aad 100644 --- a/docs/taskruns.md +++ b/docs/taskruns.md @@ -3,9 +3,9 @@ Use the `TaskRun` resource object to create and run on-cluster processes to completion.
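For orientation, here is a minimal `TaskRun` sketch (the `Task` name `echo-hello-world` is illustrative, borrowed from the tutorial; `trigger.type: manual` marks a hand-created run):

```yaml
apiVersion: pipeline.knative.dev/v1alpha1
kind: TaskRun
metadata:
  name: echo-hello-world-task-run # illustrative name
spec:
  taskRef:
    name: echo-hello-world # must reference an existing Task in the namespace
  trigger:
    type: manual # created by hand rather than by a PipelineRun
```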
-To create a `TaskRun` in Knative, you must first create a [`Task`](tasks.md) which -specifies one or more container images that you have implemented to perform and -complete a task. +To create a `TaskRun` in Knative, you must first create a [`Task`](tasks.md) +which specifies one or more container images that you have implemented to +perform and complete a task. A `TaskRun` runs until all `steps` have completed or until a failure occurs. @@ -35,31 +35,35 @@ following fields: `TaskRun` resource object, for example a `name`. - [`spec`][kubernetes-overview] - Specifies the configuration information for your `TaskRun` resource object. - - [`taskRef` or `taskSpec`](#specifying-a-task) - Specifies the details of the - [`Task`](tasks.md) you want to run - - `trigger` - Provides data about what created this `TaskRun`. Can be `manual` - if you are creating this manually, or has a value of `PipelineRun` if it is - created as part of a [`PipelineRun`](pipelineruns.md) + - [`taskRef` or `taskSpec`](#specifying-a-task) - Specifies the details of + the [`Task`](tasks.md) you want to run + - `trigger` - Provides data about what created this `TaskRun`. Can be + `manual` if you are creating this manually, or has a value of + `PipelineRun` if it is created as part of a + [`PipelineRun`](pipelineruns.md) - Optional: - - [`serviceAccount`](#service-account) - Specifies a `ServiceAccount` - resource object that enables your build to run with the defined - authentication information. + + - [`serviceAccount`](#service-account) - Specifies a `ServiceAccount` resource + object that enables your build to run with the defined authentication + information. - [`inputs`] - Specifies [input parameters](#input-parameters) and [input resources](#providing-resources) - [`outputs`] - Specifies [output resources](#providing-resources) - `timeout` - Specifies timeout after which the `TaskRun` will fail. - - [`nodeSelector`] - a selector which must be true for the pod to fit on a node. 
- The selector which must match a node's labels for the pod to be scheduled on that node. - More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ - - [`affinity`] - the pod's scheduling constraints. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature - + - [`nodeSelector`] - a selector which must match a node's labels for the pod + to be scheduled on that node. More info: + https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ + - [`affinity`] - the pod's scheduling constraints. More info: + https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature + [kubernetes-overview]: https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields ### Specifying a task -Since a `TaskRun` is an invocation of a [`Task`](tasks.md), you must specify what -`Task` to invoke. +Since a `TaskRun` is an invocation of a [`Task`](tasks.md), you must specify +what `Task` to invoke. You can do this by providing a reference to an existing `Task`: @@ -89,8 +93,8 @@ spec: ### Input parameters -If a `Task` has [`parameters`](tasks.md#parameters), you can specify values for them -using the `input` section: +If a `Task` has [`parameters`](tasks.md#parameters), you can specify values for +them using the `input` section: ```yaml spec: @@ -105,10 +109,11 @@ If a parameter does not have a default value, it must be specified. ### Providing resources If a `Task` requires [input resources](tasks.md#input-resources) or -[output resources](tasks.md#output-resources), they must be provided -to run the `Task`. +[output resources](tasks.md#output-resources), they must be provided to run the +`Task`.
-They can be provided via references to existing [`PipelineResources`](resources.md): +They can be provided via references to existing +[`PipelineResources`](resources.md): ```yaml spec: @@ -136,9 +141,9 @@ spec: ### Service Account Specifies the `name` of a `ServiceAccount` resource object. Use the -`serviceAccount` field to run your `Task` with the privileges of the -specified service account. If no `serviceAccount` field is specified, your -`Task` runs using the +`serviceAccount` field to run your `Task` with the privileges of the specified +service account. If no `serviceAccount` field is specified, your `Task` runs +using the [`default` service account](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server) that is in the [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) @@ -295,10 +300,9 @@ spec: ### Example with embedded specs Another way of running a Task is embedding the TaskSpec in the taskRun yaml. -This can be useful for "one-shot" style runs, or debugging. -TaskRun resource can include either Task reference or TaskSpec but not both. -Below is an example where `build-push-task-run-2` includes `TaskSpec` and no -reference to Task. +This can be useful for "one-shot" style runs, or debugging. A TaskRun resource +can include either a Task reference or a TaskSpec, but not both. Below is an +example where `build-push-task-run-2` includes a `TaskSpec` and no reference to +a Task. ```yaml apiVersion: pipeline.knative.dev/v1alpha1 @@ -363,14 +367,15 @@ spec: ``` **Note**: TaskRun can embed both TaskSpec and resource spec at the same time. -The `TaskRun` will also serve as a record of the history of the invocations of the -`Task`. +The `TaskRun` will also serve as a record of the history of the invocations of +the `Task`.
### Example Task Reuse -For the sake of illustrating re-use, here are several example [`TaskRuns`](taskrun.md) -(including referenced [`PipelineResources`](resource.md)) instantiating the [`Task` -(`dockerfile-build-and-push`) in the `Task` example docs](tasks.md#example-task). +For the sake of illustrating re-use, here are several example +[`TaskRuns`](taskrun.md) (including referenced +[`PipelineResources`](resource.md)) instantiating the +[`Task` (`dockerfile-build-and-push`) in the `Task` example docs](tasks.md#example-task). Build `mchmarny/rester-tester`: @@ -378,11 +383,11 @@ Build `mchmarny/rester-tester`: # The PipelineResource metadata: name: mchmarny-repo -spec: +spec: type: git params: - - name: url - value: https://github.com/mchmarny/rester-tester.git + - name: url + value: https://github.com/mchmarny/rester-tester.git ``` ```yaml @@ -396,8 +401,8 @@ spec: resourceRef: name: mchmarny-repo params: - - name: IMAGE - value: gcr.io/my-project/rester-tester + - name: IMAGE + value: gcr.io/my-project/rester-tester ``` Build `googlecloudplatform/cloud-builder`'s `wget` builder: @@ -406,11 +411,11 @@ Build `googlecloudplatform/cloud-builder`'s `wget` builder: # The PipelineResource metadata: name: cloud-builder-repo -spec: +spec: type: git params: - - name: url - value: https://github.com/googlecloudplatform/cloud-builders.git + - name: url + value: https://github.com/googlecloudplatform/cloud-builders.git ``` ```yaml @@ -437,11 +442,11 @@ Build `googlecloudplatform/cloud-builder`'s `docker` builder with `17.06.1`: # The PipelineResource metadata: name: cloud-builder-repo -spec: +spec: type: git params: - - name: url - value: https://github.com/googlecloudplatform/cloud-builders.git + - name: url + value: https://github.com/googlecloudplatform/cloud-builders.git ``` ```yaml @@ -477,8 +482,8 @@ spec: serviceAccount: test-task-robot-git-ssh inputs: resources: - - name: workspace - type: git + - name: workspace + type: 
git steps: - name: config image: ubuntu @@ -517,11 +522,10 @@ data: known_hosts: Z2l0aHViLmNvbSBzc2g.....[example] ``` - Specifies the `name` of a `ServiceAccount` resource object. Use the -`serviceAccount` field to run your `Task` with the privileges of the -specified service account. If no `serviceAccount` field is specified, your -`Task` runs using the +`serviceAccount` field to run your `Task` with the privileges of the specified +service account. If no `serviceAccount` field is specified, your `Task` runs +using the [`default` service account](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server) that is in the [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) diff --git a/docs/tasks.md index c81ca8a6594..90e7b250958 100644 --- a/docs/tasks.md +++ b/docs/tasks.md @@ -1,8 +1,8 @@ # Tasks -A `Task` (or a [`ClusterTask`](#clustertask)) is a collection of sequential steps you would want to run as part of -your continuous integration flow. A task will run inside a container on your -cluster. +A `Task` (or a [`ClusterTask`](#clustertask)) is a collection of sequential +steps you would want to run as part of your continuous integration flow. A task +will run inside a container on your cluster. A `Task` declares: @@ -10,7 +10,8 @@ A `Task` declares: - [Outputs](#outputs) - [Steps](#steps) -A `Task` is available within a namespace, and `ClusterTask` is available across entire Kubernetes cluster. +A `Task` is available within a namespace, and a `ClusterTask` is available +across the entire Kubernetes cluster. --- @@ -46,7 +47,8 @@ spec: params: .... ``` -A `Task` functions exactly like a `ClusterTask`, and as such all references to `Task` below are also describing `ClusterTask`. +A `Task` functions exactly like a `ClusterTask`, and as such all references to +`Task` below are also describing `ClusterTask`.
## Syntax @@ -65,9 +67,10 @@ following fields: - [`steps`](#steps) - Specifies one or more container images that you want to run in your `Task`. - Optional: - - [`inputs`](#inputs) - Specifies parameters and [`PipelineResources`](resources.md) - needed by your `Task` - - [`outputs`](#outputs) - Specifies [`PipelineResources`](resources.md) needed by your `Task` + - [`inputs`](#inputs) - Specifies parameters and + [`PipelineResources`](resources.md) needed by your `Task` + - [`outputs`](#outputs) - Specifies [`PipelineResources`](resources.md) needed + by your `Task` - [`volumes`](#volumes) - Specifies one or more volumes that you want to make available to your build. @@ -128,8 +131,8 @@ Each `steps` in a `Task` must specify a container image that adheres to the [container contract](./container-contract.md). For each of the `steps` fields, or container images that you define: -- The container images are run and evaluated in order, starting - from the top of the configuration file. +- The container images are run and evaluated in order, starting from the top of + the configuration file. - Each container image runs until completion or until the first failure is detected. @@ -137,8 +140,8 @@ or container images that you define: A `Task` can declare the inputs it needs, which can be either or both of: -* [`parameters`](#parameters) -* [input resources](#input-resources) +- [`parameters`](#parameters) +- [input resources](#input-resources) #### Parameters @@ -151,10 +154,9 @@ TaskRun. Some example use-cases of this include: - A Task that supports several different strategies, and leaves the choice up to the other. -Parameters name are limited to alpha-numeric characters, `-` and `_` -and can only start with alpha characters and `_`. For example, -`fooIs-Bar_` is a valid parameter name, `barIsBa$` or `0banana` are -not. +Parameter names are limited to alpha-numeric characters, `-` and `_` and can +only start with alpha characters and `_`.
For example, `fooIs-Bar_` is a valid +parameter name, `barIsBa$` or `0banana` are not. ##### Usage @@ -201,21 +203,22 @@ spec: #### Input resources -Use input [`PipelineResources`](resources.md) field to provide your -`Task` with data or context that is needed by your `Task`. +Use the input [`PipelineResources`](resources.md) field to provide your `Task` +with the data or context it needs. Input resources, like source code (git) or artifacts, are dumped at path `/workspace/task_resource_name` within a mounted -[volume](https://kubernetes.io/docs/concepts/storage/volumes/) -and is available to all [`steps`](#steps) of your `Task`. The path that the -resources are mounted at can be overridden with the `targetPath` value. +[volume](https://kubernetes.io/docs/concepts/storage/volumes/) and are available +to all [`steps`](#steps) of your `Task`. The path that the resources are mounted +at can be overridden with the `targetPath` value. ### Outputs -`Task` definitions can include inputs and outputs [`PipelineResource`](resources.md) -declarations. If specific set of resources are only declared in output then a copy -of resource to be uploaded or shared for next Task is expected to be present under -the path `/workspace/output/resource_name/`. +`Task` definitions can include inputs and outputs +[`PipelineResource`](resources.md) declarations. If a set of resources is only +declared in outputs, then a copy of each resource to be uploaded or shared with +the next Task is expected to be present under the path +`/workspace/output/resource_name/`. ```yaml resources: @@ -232,8 +235,9 @@ steps: value: "world" ``` -**note**: if the task is relying on output resource functionality then the containers -in the task `steps` field cannot mount anything in the path `/workspace/output`. +**Note**: if the task relies on output resource functionality then the +containers in the task `steps` field cannot mount anything in the path +`/workspace/output`.
In the following example Task `tar-artifact` resource is used both as input and output so input resource is downloaded into directory `customworkspace`(as @@ -303,9 +307,9 @@ spec: Specifies one or more [volumes](https://kubernetes.io/docs/concepts/storage/volumes/) that you want to -make available to your `Task`, including all the [`steps`](#steps). Add volumes to -complement the volumes that are implicitly created for [input resources](#input-resources) -and [output resources](#outputs). +make available to your `Task`, including all the [`steps`](#steps). Add volumes +to complement the volumes that are implicitly created for +[input resources](#input-resources) and [output resources](#outputs). For example, use volumes to accomplish one of the following common tasks: @@ -319,10 +323,12 @@ For example, use volumes to accomplish one of the following common tasks: ### Templating -`Tasks` support templating using values from all [`inputs`](#inputs) and [`outputs`](#outputs), +`Tasks` support templating using values from all [`inputs`](#inputs) and +[`outputs`](#outputs). -[`PipelineResources`](resources.md) can be referenced in a `Task` spec like this, where `` is the -Resource Name and `` is a one of the resource's `params`: +[`PipelineResources`](resources.md) can be referenced in a `Task` spec like +this, where `<name>` is the resource name and `<key>` is one of the resource's +`params`: ```shell ${inputs.resources.<name>.<key>} @@ -348,13 +354,13 @@ Use these code snippets to help you understand how to define your `Tasks`.
- [Mounting extra volumes](#using-an-extra-volume) _Tip: See the collection of simple -[examples](https://github.com/knative/build-pipeline/tree/master/examples) for additional -code samples._ +[examples](https://github.com/knative/build-pipeline/tree/master/examples) for +additional code samples._ ### Example Task -For example, a `Task` to encapsulate a `Dockerfile` build might look -something like this: +For example, a `Task` to encapsulate a `Dockerfile` build might look something +like this: **Note:** Building a container image using `docker build` on-cluster is _very unsafe_. Use [kaniko](https://github.com/GoogleContainerTools/kaniko) instead. @@ -364,20 +370,20 @@ This is used only for the purposes of demonstration. spec: inputs: resources: - - name: workspace - type: git + - name: workspace + type: git params: - # These may be overridden, but provide sensible defaults. - - name: directory - description: The directory containing the build context. - default: /workspace - - name: dockerfileName - description: The name of the Dockerfile - default: Dockerfile + # These may be overridden, but provide sensible defaults. + - name: directory + description: The directory containing the build context. + default: /workspace + - name: dockerfileName + description: The name of the Dockerfile + default: Dockerfile outputs: resources: - - name: builtImage - type: image + - name: builtImage + type: image steps: - name: dockerfile-build image: gcr.io/cloud-builders/docker @@ -435,6 +441,7 @@ spec: - name: my-volume emptyDir: {} ``` + --- Except as otherwise noted, the content of this page is licensed under the diff --git a/docs/tutorial.md b/docs/tutorial.md index 0ce83425c13..a080e2adc48 100644 --- a/docs/tutorial.md +++ b/docs/tutorial.md @@ -3,9 +3,8 @@ Welcome to the Pipeline tutorial! 
This tutorial will walk you through creating and running some simple -[`Task`](tasks.md), [`Pipeline`](pipelines.md) and running -them by creating [`TaskRuns`](taskruns.md) and -[`PipelineRuns`](pipelineruns.md). +[`Tasks`](tasks.md) and [`Pipelines`](pipelines.md) by creating +[`TaskRuns`](taskruns.md) and [`PipelineRuns`](pipelineruns.md). - [Creating a hello world `Task`](#task) - [Creating a hello world `Pipeline`](#pipeline) @@ -18,8 +17,8 @@ The main objective of the Pipeline CRDs is to run your Task individually or as a part of a Pipeline. Every task runs as a Pod on your Kubernetes cluster with each step as its own container. -A [`Task`](tasks.md) defines the work that needs to be executed, for -example the following is a simple task that will echo hello world: +A [`Task`](tasks.md) defines the work that needs to be executed, for example, +the following is a simple task that will echo hello world: ```yaml apiVersion: pipeline.knative.dev/v1alpha1 @@ -38,8 +37,8 @@ spec: The `steps` are a series of commands to be sequentially executed by the task. -A [`TaskRun`](taskruns.md) runs the `Task` you defined. Here is a -simple example of a `TaskRun` you can use to execute your task: +A [`TaskRun`](taskruns.md) runs the `Task` you defined. Here is a simple example +of a `TaskRun` you can use to execute your task: ```yaml apiVersion: pipeline.knative.dev/v1alpha1 @@ -119,13 +118,13 @@ In more common scenarios, a Task needs multiple steps with input and output resources to process. For example a Task could fetch source code from a GitHub repository and build a Docker image from it. -[`PipelinesResources`](resources.md) are used to define the -artifacts that can be passed in and out of a task. +[`PipelineResources`](resources.md) are used to define the artifacts that can +be passed in and out of a task.
There are a few system defined resource types +ready to use, and the following are two examples of the resources commonly +needed. -The [`git` resource](resources.md#git-resource) represents a git repository with a -specific revision: +The [`git` resource](resources.md#git-resource) represents a git repository with +a specific revision: ```yaml apiVersion: pipeline.knative.dev/v1alpha1 @@ -141,8 +140,8 @@ spec: value: https://github.com/GoogleContainerTools/skaffold ``` -The [`image` resource](resources.md#image-resource) represents the image to be built -by the task: +The [`image` resource](resources.md#image-resource) represents the image to be +built by the task: ```yaml apiVersion: pipeline.knative.dev/v1alpha1 @@ -330,10 +329,10 @@ resource definition. # Pipeline -A [`Pipeline`](pipelines.md) defines a list of tasks to execute in -order, while also indicating if any outputs should be used as inputs of a -following task by using [the `from` field](pipelines.md#from). The same templating -you used in tasks is also available in pipeline. +A [`Pipeline`](pipelines.md) defines a list of tasks to execute in order, while +also indicating if any outputs should be used as inputs of a following task by +using [the `from` field](pipelines.md#from). The same templating you used in +tasks is also available in pipeline. For example: @@ -429,8 +428,7 @@ spec: - "${inputs.params.path}" ``` -To run the `Pipeline`, create a [`PipelineRun`](pipelineruns.md) as -follows: +To run the `Pipeline`, create a [`PipelineRun`](pipelineruns.md) as follows: ```yaml apiVersion: pipeline.knative.dev/v1alpha1 diff --git a/docs/using.md b/docs/using.md index b0f79b73493..9994d36ddf1 100644 --- a/docs/using.md +++ b/docs/using.md @@ -24,8 +24,8 @@ See [the example Pipeline](../examples/pipeline.yaml). 
### PipelineResources in a Pipeline -In order for a `Pipeline` to interact with the outside world, it will probably need -[`PipelineResources`](#creating-pipelineresources) which will be given to +In order for a `Pipeline` to interact with the outside world, it will probably +need [`PipelineResources`](#creating-pipelineresources) which will be given to `Tasks` as inputs and outputs. Your `Pipeline` must declare the `PipelineResources` it needs in a `resources` @@ -124,13 +124,16 @@ specific contract. When containers are run in a `Task`, the `entrypoint` of the container will be overwritten with a custom binary. The plan is to use this custom binary for -controlling the execution of step containers ([#224](https://github.com/knative/build-pipeline/issues/224)) and log streaming +controlling the execution of step containers +([#224](https://github.com/knative/build-pipeline/issues/224)) and log streaming [#107](https://github.com/knative/build-pipeline/issues/107), though currently -it will write logs only to an [`emptyDir`](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) -(which cannot be read from after the pod has finished executing, so logs must be obtained +it will write logs only to an +[`emptyDir`](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) +(which cannot be read from after the pod has finished executing, so logs must be +obtained [via k8s logs](https://kubernetes.io/docs/concepts/cluster-administration/logging/), -using a tool such as [test/logs/README.md](../test/logs/README.md), -or setting up an external system to consume logs). +using a tool such as [test/logs/README.md](../test/logs/README.md), or setting +up an external system to consume logs). When `command` is not explicitly set, the controller will attempt to lookup the entrypoint from the remote registry. 
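As a sketch of what this means for `Task` authors (the step name and image here are illustrative), a step that sets `command` explicitly does not require the remote entrypoint lookup:

```yaml
steps:
  - name: list-workspace # illustrative step
    image: ubuntu
    command: ["ls"] # explicit command, so no registry entrypoint lookup is needed
    args: ["-la", "/workspace"]
```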
@@ -189,20 +192,22 @@ configure that by edit the `image`'s value in a configmap named ### Resource sharing between tasks Pipeline `Tasks` are allowed to pass resources from previous `Tasks` via the -[`from`](#from) field. This feature is implemented using the two -following alternatives: +[`from`](#from) field. This feature is implemented using the following two +alternatives: -- Persistent Volume Claims under the hood but however has an implication - that tasks cannot have any volume mounted under path `/pvc`. +- Persistent Volume Claims under the hood, which has the implication that + tasks cannot have any volume mounted under the path `/pvc`. - [GCS storage bucket](https://cloud.google.com/storage/docs/json_api/v1/buckets) - A storage bucket can be configured using a ConfigMap named [`config-artifact-bucket`](./../config/config-artifact-bucket.yaml). - with the following attributes: + A storage bucket can be configured using a ConfigMap named + [`config-artifact-bucket`](./../config/config-artifact-bucket.yaml).
with the + following attributes: - `location`: the address of the bucket (for example gs://mybucket) -- `bucket.service.account.secret.name`: the name of the secret that will contain the credentials for the service account - with access to the bucket -- `bucket.service.account.secret.key`: the key in the secret with the required service account json -The bucket is configured with a retention policy of 24 hours after which files will be deleted +- `bucket.service.account.secret.name`: the name of the secret that will contain + the credentials for the service account with access to the bucket +- `bucket.service.account.secret.key`: the key in the secret with the required + service account JSON + +The bucket is configured with a retention policy of 24 hours, after which files +will be deleted ### Outputs @@ -226,8 +231,9 @@ steps: value: "world" ``` -**Note**: If the Task is relying on output resource functionality then the containers -in the Task `steps` field cannot mount anything in the path `/workspace/output`. +**Note**: If the Task is relying on output resource functionality then the +containers in the Task `steps` field cannot mount anything in the path +`/workspace/output`. If resource is declared in both input and output then input resource, then destination path of input resource is used instead of @@ -492,10 +498,9 @@ spec: ### Taskrun with embedded definitions Another way of running a Task is embedding the TaskSpec in the taskRun yaml. -This can be useful for "one-shot" style runs, or debugging. -TaskRun resource can include either Task reference or TaskSpec but not both. -Below is an example where `build-push-task-run-2` includes `TaskSpec` and no -reference to Task. +This can be useful for "one-shot" style runs, or debugging. A TaskRun resource +can include either a Task reference or a TaskSpec, but not both. Below is an +example where `build-push-task-run-2` includes a `TaskSpec` and no reference to +a Task.
```yaml apiVersion: pipeline.knative.dev/v1alpha1 @@ -968,10 +973,10 @@ Params that can be added are the following: source directory. Eg: `gsutil cp source.tar gs://some-bucket.tar`. Private buckets can also be configured as storage resources. To access GCS -private buckets, service accounts are required with correct permissions. -The `secrets` field on the storage resource is used for configuring this -information. -Below is an example on how to create a storage resource with service account. +private buckets, service accounts with the correct permissions are required. The +`secrets` field on the storage resource is used for configuring this +information. Below is an example of how to create a storage resource with a +service account. 1. Refer to [official documentation](https://cloud.google.com/compute/docs/access/service-accounts) @@ -1011,9 +1016,9 @@ Below is an example on how to create a storage resource with service account. ## Timing Out PipelinesRun and TasksRuns -If you want to ensure that your `PipelineRun` or `TaskRun` will be stopped if it runs -past a certain duration, you can use the `Timeout` field on either `PipelineRun` -or `TaskRun`. +If you want to ensure that your `PipelineRun` or `TaskRun` will be stopped if it +runs past a certain duration, you can use the `Timeout` field on either +`PipelineRun` or `TaskRun`.
In both cases, add the following to the `spec`: ```yaml spec: diff --git a/examples/README.md index 7144e9eba49..6bd5702900e 100644 --- a/examples/README.md +++ b/examples/README.md @@ -57,9 +57,8 @@ The [Tasks](../docs/tasks.md) used by the simple examples are: #### Simple Runs -The [run](./run/) directory contains an example -[TaskRun](../docs/taskruns.md) and an example -[PipelineRun](../docs/pipelineruns.md): +The [run](./run/) directory contains an example [TaskRun](../docs/taskruns.md) +and an example [PipelineRun](../docs/pipelineruns.md): - [task-run.yaml](./run/task-run.yaml) shows an example of how to manually run the `build-push` task @@ -92,9 +91,8 @@ The two [Tasks](../docs/tasks.md) used by the output Pipeline are in `workspace` `git` `PipelineResource` These work together when combined in a `Pipeline` because the git resource used -as an [`Output`](../docs/tasks.md#outputs) of the `create-file` `Task` can be -an [`Input`](../docs/tasks.md#inputs) of the `check-stuff-file-exists` -`Task`. +as an [`Output`](../docs/tasks.md#outputs) of the `create-file` `Task` can be an +[`Input`](../docs/tasks.md#inputs) of the `check-stuff-file-exists` `Task`. #### Output Runs @@ -105,18 +103,15 @@ The [run](./run/) directory contains an example ### Accessing private docker image The [run](./run/) directory contains an example -[TaskRun](../docs/Concepts.md#taskrun) with an embedded taskSpec, that -pull a private image from `gcr.io`, see -[`run/private-taskrun.yaml`](./run/private-taskrun.yaml). +[TaskRun](../docs/Concepts.md#taskrun) with an embedded taskSpec that pulls a +private image from `gcr.io`; see +[`run/private-taskrun.yaml`](./run/private-taskrun.yaml). -This *run* requires the secrets from -[`0-secrets.yaml`](`0-secrets.yaml`) and service accounts from -[`1-bots.yaml`](`1-bots.yaml`) to be able to pull the private -image.
+This _run_ requires the secrets from [`0-secrets.yaml`](`0-secrets.yaml`) and +service accounts from [`1-bots.yaml`](`1-bots.yaml`) to be able to pull the +private image. It uses `kubernetes.io/dockercfg` secret type but, -`kubernetes.io/dockerconfigjson` and [Knative flavored -credentials](https://github.com/knative/docs/blob/master/build/auth.md#guiding-credential-selection) +`kubernetes.io/dockerconfigjson` and +[Knative flavored credentials](https://github.com/knative/docs/blob/master/build/auth.md#guiding-credential-selection) are supported too. - - diff --git a/hack/release.md index a5c86c9c681..4c914836e57 100644 --- a/hack/release.md +++ b/hack/release.md @@ -53,7 +53,7 @@ _Note: only Knative admins can create versioned releases._ Creating and releasing a versioned release has two steps: 1. [Update the published docs](#update-the-published-docs) -2. [Cut the release](#cut-the-release) +2. [Cut the release](#cut-the-release) ### Update the published docs @@ -61,13 +61,14 @@ The official docs for the latest release of `build-pipelines` live in [the knative docs repo](https://github.com/knative/docs) at [`knative/docs/pipeline`](https://github.com/knative/docs/tree/master/pipeline). -These docs correspond to the most recent release of `build-pipeline`. There is -a living version of these docs in this repo, which correspond to the functionality -at `HEAD`. Part of creating a release involves copying the living version of these -files to `knative/docs`. +These docs correspond to the most recent release of `build-pipeline`. There is a +living version of these docs in this repo, which corresponds to the +functionality at `HEAD`. Part of creating a release involves copying the living +version of these files to `knative/docs`. Specifically copy all of the docs in the first level `docs/` folder (i.e.
not a -recursive copy) to [`knative/docs/pipeline`](https://github.com/knative/docs/tree/master/pipeline) +recursive copy) to +[`knative/docs/pipeline`](https://github.com/knative/docs/tree/master/pipeline) and open a PR for review there. ### Cut the release diff --git a/test/README.md index 34e99db8496..f574f381950 100644 --- a/test/README.md +++ b/test/README.md @@ -107,17 +107,17 @@ pipelineRunsInformer.Informer().GetIndexer().Add(obj) Besides the environment variable `KO_DOCKER_REPO`, you may also need the permissions inside the TaskRun to run the Kaniko e2e test and GCS taskrun test. -- In Kaniko e2e test, setting `GCP_SERVICE_ACCOUNT_KEY_PATH` as the path of the GCP - service account JSON key which has permissions to push to the registry +- In Kaniko e2e test, setting `GCP_SERVICE_ACCOUNT_KEY_PATH` as the path of the + GCP service account JSON key which has permissions to push to the registry specified in `KO_DOCKER_REPO` will enable Kaniko to use those credentials when pushing an image. - In GCS taskrun test, GCP service account JSON key file at path `GCP_SERVICE_ACCOUNT_KEY_PATH` is used to generate Kubernetes secret to access GCS bucket. This e2e test requires valid service account configuration json but it does not require any role binding. -- In Storage artifact bucket, setting the `GCP_SERVICE_ACCOUNT_KEY_PATH` as the - path of the GCP service account JSON key which has permissions to create/delete - a bucket. +- In the storage artifact bucket setup, `GCP_SERVICE_ACCOUNT_KEY_PATH` is used + as the path of the GCP service account JSON key which has permissions to + create/delete a bucket. To reduce e2e test setup developers can use the same environment variable for both Kaniko e2e test and GCS taskrun test. To create a service account usable in