Format markdown
Produced via: `prettier --write --prose-wrap=always $(find -name '*.md' | grep -v vendor | grep -v .github)`
mattmoor-sockpuppet authored and knative-prow-robot committed Jan 20, 2019
1 parent b564bd0 commit 5626eeb
Showing 5 changed files with 87 additions and 72 deletions.
65 changes: 33 additions & 32 deletions docs/Concepts.md
@@ -44,8 +44,9 @@ Below diagram lists the main custom resources created by Pipeline CRDs:

### Task

A `Task` is a collection of sequential steps you would want to run as part of
your continuous integration flow. A Task will run inside a container on your
cluster.

A Task declares:

@@ -55,13 +56,13 @@

#### Inputs

Inputs declare the inputs the Task needs. Every task input resource should
provide a name and a type (such as git or image). It can also optionally
provide `targetPath` to initialize the resource in a specific directory. If
`targetPath` is set, the resource will be initialized under
`/workspace/targetPath`; if it is not specified, the resource will be
initialized under `/workspace`. The following example demonstrates how a git
input repository could be initialized in `GOPATH` to run tests.

```yaml
apiVersion: pipeline.knative.dev/v1alpha1
```

@@ -123,23 +124,23 @@ Examples of `Task` definitions with inputs and outputs are [here](../examples)
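The example block above is collapsed in this diff view. As a hedged sketch of
the behavior just described (task, resource, and repository names below are
invented for illustration), a git input initialized into `GOPATH` via
`targetPath` could look like:

```yaml
# Sketch only: names and paths are hypothetical; the field layout follows the
# v1alpha1 Task inputs described above.
apiVersion: pipeline.knative.dev/v1alpha1
kind: Task
metadata:
  name: example-go-test-task
spec:
  inputs:
    resources:
      - name: workspace
        type: git
        # Initialized under /workspace/go/src/github.com/my-org/my-repo
        targetPath: go/src/github.com/my-org/my-repo
  steps:
    - name: unit-tests
      image: golang
      command: ["go"]
      args: ["test", "./..."]
      workingDir: /workspace/go/src/github.com/my-org/my-repo
      env:
        - name: GOPATH
          value: /workspace/go
```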

##### Step Entrypoint

To get the logs out of a [`Task`](#task), Knative provides its own executable
that wraps the `command` and `args` values specified in the `steps`. This means
that every `Task` must use `command`, and cannot rely on the image's
`entrypoint`.

##### Configure Entrypoint image

To run, a step needs to pull an `Entrypoint` image. Knative provides a way for
you to configure the `Entrypoint` image in case it is hard to pull in your
environment. To do that, you can edit the `image` value in a configmap named
[`config-entrypoint`](./../config/config-entrypoint.yaml).
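The referenced configmap is not shown here; a minimal sketch of such an
override (the `image` data key follows the text above, but the value is a
hypothetical mirror — check `config/config-entrypoint.yaml` for the
authoritative layout) might be:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-entrypoint
data:
  # A registry mirror reachable from your environment; hypothetical value.
  image: my-registry.example.com/mirrors/entrypoint:latest
```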

#### ClusterTask

A `ClusterTask` is similar to a `Task`, but with cluster-wide scope.
ClusterTasks are available in all namespaces, and are typically used to
conveniently provide commonly used tasks to users.
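A `ClusterTask` has the same shape as a `Task` apart from its `kind` and the
absence of a namespace; a minimal sketch (names are hypothetical):

```yaml
apiVersion: pipeline.knative.dev/v1alpha1
kind: ClusterTask
metadata:
  name: echo-hello # no namespace: cluster-scoped
spec:
  steps:
    - name: echo
      image: ubuntu
      command: ["echo"]
      args: ["hello build"]
```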

#### Pipeline

@@ -200,32 +201,32 @@ corresponding `Run` object:
##### TaskRun

Creating a `TaskRun` invokes a [Task](#task), running all of the steps until
completion or failure. Creating a `TaskRun` requires satisfying all of the
input requirements of the `Task`.

A `TaskRun` definition includes `inputs` and `outputs` for the `Task` referred
to in its spec.

An input resource includes a name, a reference to a pipeline resource, and
optionally `paths`. The `paths` are used by the `TaskRun` as the resource's new
source paths, i.e., the resource is copied from the specified list of paths.
The `TaskRun` expects the folder and contents to already be present in the
specified paths. The `paths` feature can be used to provide extra files or an
altered version of an existing resource before the execution of steps.

An output resource includes a name, a reference to a pipeline resource, and
optionally `paths`. The `paths` are used by the `TaskRun` as the resource's new
destination paths, i.e., the resource is copied entirely to the specified
paths. The `TaskRun` is responsible for creating the required directories and
copying the contents over. The `paths` feature can be used to inspect the
results of a taskrun after the execution of its steps.

The `paths` feature for input and output resources is used heavily to pass the
same version of a resource across tasks in the context of a pipelinerun.

In the following example, a task and a taskrun are defined with an input
resource, an output resource, and a step that builds a war artifact. After
execution of the taskrun (`volume-taskrun`), the `custom` volume has the entire
resource `java-git-resource` (including the war artifact) copied to the
destination path `/custom/workspace/`.

@@ -281,8 +282,8 @@
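The full example is collapsed in this diff view. As a sketch of the shape
described above (the task name and exact field layout are assumptions; the
taskrun, resource, and destination names come from the text), an output
resource using `paths` might look like:

```yaml
apiVersion: pipeline.knative.dev/v1alpha1
kind: TaskRun
metadata:
  name: volume-taskrun
spec:
  taskRef:
    name: volume-task # hypothetical task name
  inputs:
    resources:
      - name: workspace
        resourceRef:
          name: java-git-resource
  outputs:
    resources:
      - name: workspace
        resourceRef:
          name: java-git-resource
        # Copy the resource (war artifact included) to the custom volume.
        paths:
          - /custom/workspace/
```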

##### PipelineRun

Creating a `PipelineRun` invokes the pipeline, creating [TaskRuns](#taskrun)
for each task in the pipeline.

A `PipelineRun` ties together:

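The list of what a `PipelineRun` ties together is collapsed in this view; a
minimal sketch (the pipeline name is hypothetical, and the trigger layout is an
assumption about this v1alpha1 API):

```yaml
apiVersion: pipeline.knative.dev/v1alpha1
kind: PipelineRun
metadata:
  name: demo-pipeline-run
spec:
  pipelineRef:
    name: demo-pipeline
  trigger:
    type: manual
```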
42 changes: 27 additions & 15 deletions docs/developers/README.md
@@ -5,34 +5,39 @@ the complexity.

### How are resources shared between tasks?

`PipelineRun` uses a PVC to share resources between tasks. The PVC volume is
mounted on path `/pvc` by the PipelineRun.

- If a resource in a task is declared as an output, then the `TaskRun`
  controller adds a step to copy each output resource to the directory path
  `/pvc/task_name/resource_name`.

- If an input resource includes a `providedBy` condition, then the `TaskRun`
  controller adds a step to copy from the PVC to the directory path
  `/pvc/previous_task/resource_name`.
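As a sketch of the `providedBy` wiring between two pipeline tasks (task and
resource names are hypothetical, and the exact v1alpha1 field layout shifted
between releases, so treat this as illustrative only):

```yaml
apiVersion: pipeline.knative.dev/v1alpha1
kind: Pipeline
metadata:
  name: demo-pipeline
spec:
  tasks:
    - name: build-app
      taskRef:
        name: build-push
      resources:
        - name: workspace
          resource: source-repo
    - name: deploy-app
      taskRef:
        name: deploy-using-kubectl
      resources:
        - name: workspace
          resource: source-repo
          # Tells the controller to copy this resource from
          # /pvc/build-app/workspace rather than fetching it fresh.
          providedBy:
            - build-app
```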

### How are inputs handled?

Input resources, like source code (git) or artifacts, are dumped at the path
`/workspace/task_resource_name`. A resource definition in a task can have a
custom target directory. If `targetPath` is mentioned in the task input, then
the controllers are responsible for adding container definitions to create the
directories and to fetch the versioned artifacts into that directory.

### How are outputs handled?

Output resources, like source code (git) or artifacts (storage resource), are
expected in directory path `/workspace/output/resource_name`.

- If a resource has an output "action", like uploading to blob storage, then a
  container step is added for this action.
- If a PVC volume is present (the TaskRun holds an owner reference to a
  PipelineRun), then a copy step is added as well.

- If the resource is declared only in the output (and not the input) of a task,
  then the copy step copies the resource to the PVC path
  `/pvc/task_name/resource_name` from `/workspace/output/resource_name`, as in
  the following example.

```yaml
kind: Task
@@ -46,7 +51,11 @@
type: storage
```
- If the resource is declared in both the input and the output of a task, then
  the copy step copies the resource to the PVC path
  `/pvc/task_name/resource_name` from `/workspace/random-space/` when the input
  resource has a custom target directory (`random-space`) declared, as in the
  following example.

```yaml
kind: Task
@@ -65,7 +74,10 @@
type: storage
```

- If the resource is declared in both the input and the output of a task
  without a custom target directory, then the copy step copies the resource to
  the PVC path `/pvc/task_name/resource_name` from `/workspace/resource_name`,
  as in the following example.

```yaml
kind: Task
```
8 changes: 4 additions & 4 deletions docs/tutorial.md
@@ -214,8 +214,8 @@
name: skaffold-image-leeroy-web
```

To apply the YAML files, use the following command; you need to apply two
resources: the task and the taskrun.

```bash
kubectl apply -f <name-of-file.yaml>
```

@@ -357,8 +357,8 @@
value: "spec.template.spec.containers[0].image"
```

The above Pipeline references a task named `deploy-using-kubectl`, which can be
found here:

```yaml
apiVersion: pipeline.knative.dev/v1alpha1
```
4 changes: 2 additions & 2 deletions docs/using.md
@@ -192,8 +192,8 @@
value: "world"
```

**Note**: If the Task relies on output resource functionality, then it cannot
mount anything at the file path `/workspace/output`.

If a resource is declared in both input and output, then the destination path
of the input resource is used instead of …
40 changes: 21 additions & 19 deletions test/README.md
@@ -23,8 +23,8 @@ which need `-tags=e2e` to be enabled.

### Unit testing Controllers

Kubernetes [client-go](https://godoc.org/k8s.io/client-go) provides a number of
fake clients and objects for unit testing. The ones we are using are:

1. [Fake Kubernetes client](https://godoc.org/k8s.io/client-go/kubernetes/fake):
Provides a fake REST interface to interact with Kubernetes API
Expand Down Expand Up @@ -57,10 +57,10 @@ obj := &v1alpha1.PipelineRun {
ObjectMeta: metav1.ObjectMeta {
Name: "name",
Namespace: "namespace",
},
},
Spec: v1alpha1.PipelineRunSpec {
PipelineRef: v1alpha1.PipelineRef {
Name: "test-pipeline",
Name: "test-pipeline",
APIVersion: "a1",
},
}
@@ -74,10 +74,11 @@
if action.GetVerb() != "list" {
}
```
To test the Controller of _CRD (CustomResourceDefinitions)_, you need to add
the CRD to the [informers](./../pkg/client/informers) so that the
[listers](./../pkg/client/listers) can get access.

For example, the following code will test `PipelineRun`:
```go
pipelineClient := fakepipelineclientset.NewSimpleClientset()
@@ -86,12 +87,12 @@
pipelineRunsInformer := sharedInfomer.Pipeline().V1alpha1().PipelineRuns()
obj := &v1alpha1.PipelineRun{
	ObjectMeta: metav1.ObjectMeta{
		Name:      "name",
		Namespace: "namespace",
	},
	Spec: v1alpha1.PipelineRunSpec{
		PipelineRef: v1alpha1.PipelineRef{
			Name:       "test-pipeline",
			APIVersion: "a1",
		},
	},
}
Expand All @@ -103,10 +104,11 @@ pipelineRunsInformer.Informer().GetIndexer().Add(obj)
### Setup
Besides the environment variable `KO_DOCKER_REPO`, you may also need
permissions inside the Build to run the Kaniko e2e test. If so, setting
`KANIKO_SECRET_CONFIG_FILE` to the path of a GCP service account JSON key that
has permission to push to the registry specified in `KO_DOCKER_REPO` will
enable Kaniko to use those credentials when pushing.
To create a service account usable in the e2e tests:
@@ -296,13 +298,13 @@ wait for the system to realize those changes. You can use polling methods to
check the resources reach the desired state.
The `WaitFor*` functions use the Kubernetes
[`wait` package](https://godoc.org/k8s.io/apimachinery/pkg/util/wait). For
polling they use
[`PollImmediate`](https://godoc.org/k8s.io/apimachinery/pkg/util/wait#PollImmediate)
behind the scenes. The callback function is a
[`ConditionFunc`](https://godoc.org/k8s.io/apimachinery/pkg/util/wait#ConditionFunc),
which returns a `bool` to indicate whether polling should stop and an `error`
to indicate if there was an error.
For example, you can poll a `TaskRun` until it has a `Status.Condition`:
