Format markdown
Produced via: `prettier --write --prose-wrap=always $(find -name '*.md' | grep -v vendor | grep -v .github)`
mattmoor-sockpuppet authored and knative-prow-robot committed Mar 7, 2019
1 parent 6084834 commit 9e46a51
Showing 9 changed files with 64 additions and 56 deletions.
13 changes: 7 additions & 6 deletions docs/container-contract.md
@@ -7,17 +7,18 @@ specific contract.

When containers are run in a `Task`, the `entrypoint` of the container will be
overwritten with a custom binary that ensures the containers within the `Task`
pod are executed in the specified order. As such, it is always recommended to
explicitly specify a command.
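
For instance, a step can set `command` explicitly so the controller never needs
to resolve the image's entrypoint. A minimal sketch of such a step (the step
name and arguments are hypothetical):

```yaml
steps:
  - name: list-builds # hypothetical step name
    image: gcr.io/cloud-builders/gcloud
    command: ["gcloud"] # explicit command; no registry lookup is required
    args: ["builds", "list"] # hypothetical arguments
```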

When `command` is not explicitly set, the controller will attempt to look up
the entrypoint from the remote registry. If the image is in a private registry,
the service account should include an
[ImagePullSecret](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account).
The build-pipeline controller will use the `ImagePullSecret` of the service
account; if the service account is empty, `default` is assumed. Next, it falls
back to the Docker config at `$HOME/.docker/config.json`. If none of these
credentials are available, the controller will try to look up the image
anonymously.
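
As a sketch of the service-account route (the account and secret names below
are hypothetical), the pull secret is attached to the `ServiceAccount` that the
run uses:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-bot # hypothetical service account referenced by the run
imagePullSecrets:
  - name: registry-credentials # hypothetical docker-registry secret for the private registry
```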

For example, in the following Task with the images,
`gcr.io/cloud-builders/gcloud` and `gcr.io/cloud-builders/docker`, the
8 changes: 4 additions & 4 deletions docs/developers/README.md
@@ -114,9 +114,9 @@ expected in directory path `/workspace/output/resource_name`.

## Entrypoint rewriting and step ordering

`Entrypoint` is injected into the `Task` Container(s) and wraps each `Task`
step to manage the execution order of the containers. The `entrypoint` binary
has the following arguments:

- `wait_file` - If specified, file to wait for
- `post_file` - If specified, file to write upon completion
@@ -127,4 +127,4 @@ is changed to the entrypoint binary with the mentioned arguments and a volume
with the binary and file(s) is mounted.
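
As a rough sketch of a rewritten step (the mount path, the flag syntax, and the
`-entrypoint` flag are assumptions; only `wait_file` and `post_file` are
documented above):

```yaml
containers:
  - name: step-build
    image: gcr.io/cloud-builders/docker
    command: ["/tools/entrypoint"] # injected binary from the mounted volume
    args:
      - "-wait_file=/tools/0" # block until the previous step writes this file
      - "-post_file=/tools/1" # write this file when the step completes
      - "-entrypoint=/usr/bin/docker" # assumed flag carrying the original command
      - "--"
      - "build"
      - "."
    volumeMounts:
      - name: tools
        mountPath: /tools # assumed location of the binary and wait/post files
```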

If the image is in a private registry, the service account should include an
[ImagePullSecret](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account).
2 changes: 1 addition & 1 deletion docs/pipelineruns.md
@@ -51,7 +51,7 @@ following fields:
<https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature>

[kubernetes-overview]:
https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields

### Resources

61 changes: 33 additions & 28 deletions docs/pipelines.md
@@ -35,13 +35,15 @@ following fields:
using in its [Tasks](#pipeline-tasks)
- `tasks`
    - `resources.inputs` / `resources.outputs`
- [`from`](#from) - Used when the content of the
[`PipelineResource`](resources.md) should come from the
[output](tasks.md#output) of a previous [Pipeline Task](#pipeline-tasks)
- [`runAfter`](#runAfter) - Used when the [Pipeline Task](#pipeline-task)
should be executed after another Pipeline Task, but there is no
[output linking](#from) required

[kubernetes-overview]:
https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields

### Declared resources

@@ -127,9 +129,9 @@ spec:

### Pipeline Tasks

A `Pipeline` will execute a graph of [`Tasks`](tasks.md) (see
[ordering](#ordering) for how to express this graph). At a minimum, this
declaration must include a reference to the [`Task`](tasks.md):

```yaml
tasks:
@@ -174,7 +176,8 @@ spec:
#### from

Sometimes you will have [Pipeline Tasks](#pipeline-tasks) that need to take as
input the output of a previous `Task`, for example, an image built by a previous
`Task`.

Express this dependency by adding `from` on [`PipelineResources`](resources.md)
that your `Tasks` need.
@@ -188,7 +191,7 @@ that your `Tasks` need.
[Pipeline Task](#pipeline-task) which provides the `PipelineResource` must run
_before_ the Pipeline Task which needs that `PipelineResource` as an input
- The name of the `PipelineResource` must correspond to a `PipelineResource`
from the `Task` that the referenced `PipelineTask` gives as an output
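
A minimal sketch of such a linkage (the task, `taskRef`, and resource names are
hypothetical):

```yaml
tasks:
  - name: build-app
    taskRef:
      name: build-push # hypothetical Task that builds and pushes an image
    resources:
      outputs:
        - name: image
          resource: my-image # declared in the Pipeline's resources
  - name: deploy-app
    taskRef:
      name: deploy-with-kubectl # hypothetical Task that deploys the image
    resources:
      inputs:
        - name: image
          resource: my-image
          from:
            - build-app # build-app must run first and provide my-image
```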

For example see this `Pipeline` spec:

@@ -219,11 +222,11 @@ regardless of the order they appear in the spec.

#### runAfter

Sometimes you will have [Pipeline Tasks](#pipeline-tasks) that need to run in a
certain order, but they do not have an explicit [output](tasks.md#outputs) to
[input](tasks.md#inputs) dependency (which is expressed via [`from`](#from)). In
this case you can use `runAfter` to indicate that a Pipeline Task should be run
after one or more previous Pipeline Tasks.

For example see this `Pipeline` spec:

@@ -244,21 +247,23 @@ For example see this `Pipeline` spec:
- name: my-repo
```

In this `Pipeline`, we want to test the code before we build from it, but there
is no output from `test-app`, so `build-app` uses `runAfter` to indicate that
`test-app` should run before it, regardless of the order they appear in the
spec.
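
A minimal sketch of that ordering (the `taskRef` names are hypothetical, and
`my-repo` is assumed to be declared in the Pipeline's `resources`):

```yaml
tasks:
  - name: test-app
    taskRef:
      name: run-tests # hypothetical Task
    resources:
      inputs:
        - name: workspace
          resource: my-repo
  - name: build-app
    taskRef:
      name: build-push # hypothetical Task
    runAfter:
      - test-app # ordering only; no output/input link
    resources:
      inputs:
        - name: workspace
          resource: my-repo
```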

## Ordering

The [Pipeline Tasks](#pipeline-tasks) in a `Pipeline` can be connected and run
in a graph, specifically a _Directed Acyclic Graph_ or DAG. Each of the Pipeline
Tasks is a node, which can be connected (i.e. a _Graph_) such that one will run
before another (i.e. _Directed_), and the execution will eventually complete
(i.e. _Acyclic_, it will not get caught in infinite loops).

This is done using:

- [`from`](#from) clauses on the [`PipelineResources`](#resources) needed by a
`Task`
- [`runAfter`](#runAfter) clauses on the [Pipeline Tasks](#pipeline-tasks)

For example see this `Pipeline` spec:
@@ -325,14 +330,14 @@ build-app build-frontend
deploy-all
```

1. The `lint-repo` and `test-app` Pipeline Tasks will begin executing
simultaneously. (They have no `from` or `runAfter` clauses.)
1. Once `test-app` completes, both `build-app` and `build-frontend` will begin
executing simultaneously (both `runAfter` `test-app`).
1. When both `build-app` and `build-frontend` have completed, `deploy-all` will
execute (it requires `PipelineResources` from both Pipeline Tasks).
1. The entire `Pipeline` will be finished executing after `lint-repo` and
`deploy-all` have completed.
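
As a sketch of how `deploy-all` could express its dependency on both builds
(the `taskRef` and resource names are hypothetical):

```yaml
- name: deploy-all
  taskRef:
    name: deploy-with-kubectl # hypothetical Task
  resources:
    inputs:
      - name: app-image
        resource: my-app-image
        from:
          - build-app # waits for build-app and consumes its output image
      - name: frontend-image
        resource: my-frontend-image
        from:
          - build-frontend # waits for build-frontend and consumes its output image
```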

## Examples

2 changes: 1 addition & 1 deletion docs/resources.md
@@ -39,7 +39,7 @@ following fields:
`PipelineResource`

[kubernetes-overview]:
https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields

## Resource Types

2 changes: 1 addition & 1 deletion docs/taskruns.md
@@ -58,7 +58,7 @@ following fields:
<https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature>

[kubernetes-overview]:
https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields

### Specifying a task

2 changes: 1 addition & 1 deletion docs/tasks.md
@@ -75,7 +75,7 @@ following fields:
available to your build.

[kubernetes-overview]:
https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields

The following example is a non-working sample where most of the possible
configuration fields are used:
26 changes: 14 additions & 12 deletions docs/tutorial.md
@@ -324,12 +324,11 @@ resource definition.

## Pipeline

A [`Pipeline`](pipelines.md) defines a list of tasks to execute in order, while
also indicating if any outputs should be used as inputs of a following task by
using [the `from` field](pipelines.md#from) and also indicating
[the order of execution (using the `runAfter` and `from` fields)](pipelines.md#ordering).
The same templating you used in tasks is also available in a pipeline.

For example:

@@ -605,13 +604,16 @@ annotation applies to subjects such as Docker registries, log output locations
and other nuances that may be specific to particular cloud providers or
services.

The `TaskRuns` have been created in the following
[order](pipelines.md#ordering):

1. `tutorial-pipeline-run-1-build-skaffold-web` - This runs the
[Pipeline Task](pipelines.md#pipeline-tasks) `build-skaffold-web` first,
because it has no [`from` or `runAfter` clauses](pipelines.md#ordering)
1. `tutorial-pipeline-run-1-deploy-web` - This runs `deploy-web` second, because
its [input](tasks.md#inputs) `web-image` comes [`from`](pipelines.md#from)
`build-skaffold-web` (therefore `build-skaffold-web` must run before
`deploy-web`).

---

4 changes: 2 additions & 2 deletions test/README.md
@@ -170,8 +170,8 @@ You can also use
test as well as from k8s libraries.
- Using `-count=1` is
[the idiomatic way to disable test caching](https://golang.org/doc/go1.10#test).
- The end-to-end tests take a long time to run, so a value like `-timeout=20m`
  can be useful depending on what you're running.

You can [use test flags](#flags) to control the environment your tests run
against, i.e. override
