Merge branch 'master' into export-methods-cleanup
vigneshashokan authored Oct 3, 2020
2 parents f4975b1 + cfe2fe0 commit f2de083
Showing 121 changed files with 5,146 additions and 3,050 deletions.
18 changes: 9 additions & 9 deletions .github/pull_request_template.md
@@ -39,19 +39,19 @@ Examples of user facing changes:
For pull requests with a release note:
```release-note
Your release note here
```
For pull requests that require additional action from users switching to the new release, include the string "action required" (case insensitive) in the release note:
```release-note
action required: your release note here
```
For pull requests that don't need to be mentioned at release time, use the `/release-note-none` Prow command to add the `release-note-none` label to the PR. You can also write the string "NONE" as a release note in your PR description:
```release-note
NONE
```
-->
4 changes: 1 addition & 3 deletions .ko.yaml
@@ -4,11 +4,9 @@ baseImageOverrides:
# They are produced from ./images/Dockerfile
github.com/tektoncd/pipeline/cmd/creds-init: gcr.io/tekton-nightly/github.com/tektoncd/pipeline/build-base:latest
github.com/tektoncd/pipeline/cmd/git-init: gcr.io/tekton-nightly/github.com/tektoncd/pipeline/build-base:latest

# GCS fetcher needs root due to workspace permissions
github.com/tektoncd/pipeline/vendor/github.com/GoogleCloudPlatform/cloud-builders/gcs-fetcher/cmd/gcs-fetcher: gcr.io/distroless/static:latest
# PullRequest resource needs root because in output mode it needs to access pr.json
# which might have been copied or written with any level of permissions.
github.com/tektoncd/pipeline/cmd/pullrequest-init: gcr.io/distroless/static:latest

# Our entrypoint image does not need root, it simply needs to be able to 'cp' the binary into a shared location.
github.com/tektoncd/pipeline/cmd/entrypoint: gcr.io/distroless/base:debug-nonroot
160 changes: 81 additions & 79 deletions DEVELOPMENT.md
@@ -6,9 +6,8 @@
1. Create [a GitHub account](https://github.com/join)
1. Setup
[GitHub access via SSH](https://help.github.com/articles/connecting-to-github-with-ssh/)
1. Set up your [development environment](#environment-setup)
1. [Create and checkout a repo fork](#checkout-your-fork)
1. [Set up a Kubernetes cluster](#kubernetes-cluster)
1. [Configure kubectl to use your cluster](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)
1. [Set up a docker repository you can push to](https://github.com/knative/serving/blob/4a8c859741a4454bdd62c2b60069b7d05f5468e7/docs/setting-up-a-docker-registry.md)
@@ -45,6 +44,86 @@ At this point, you may find it useful to return to these `Tekton Pipeline` docs:
- [Tekton Pipeline "Hello World" tutorial](https://github.com/tektoncd/pipeline/blob/master/docs/tutorial.md) -
Define `Tasks`, `Pipelines`, and `PipelineResources`, see what happens when
they are run

## Environment Setup

You must install these tools:

1. [`git`](https://help.github.com/articles/set-up-git/): For source control

1. [`go`](https://golang.org/doc/install): The language Tekton Pipelines is
built in. You need go version [v1.15](https://golang.org/dl/) or higher.

Your [`$GOPATH`] setting is critical for `ko apply` to function properly: a
successful run will typically involve building and pushing images instead of only
configuring Kubernetes resources.

To [run your controllers with `ko`](#install-pipeline) you'll need to set these
environment variables (we recommend adding them to your `.bashrc`):

1. `GOPATH`: If you don't have one, simply pick a directory and add `export
GOPATH=...`
1. `$GOPATH/bin` on `PATH`: This is so that tooling installed via `go get` will
work properly.
1. `KO_DOCKER_REPO`: The docker repository to which developer images should be
pushed (e.g. `gcr.io/[gcloud-project]`). You can also
[run a local registry](https://docs.docker.com/registry/deploying/) and set
`KO_DOCKER_REPO` to reference the registry (e.g. at
`localhost:5000/mypipelineimages`).

`.bashrc` example:

```shell
export GOPATH="$HOME/go"
export PATH="${PATH}:${GOPATH}/bin"
export KO_DOCKER_REPO='gcr.io/my-gcloud-project-name'
```

Make sure to configure
[authentication](https://cloud.google.com/container-registry/docs/advanced-authentication#standalone_docker_credential_helper)
for your `KO_DOCKER_REPO` if required. To be able to push images to
`gcr.io/<project>`, you need to run this once:

```shell
gcloud auth configure-docker
```

After setting `GOPATH` and putting `$GOPATH/bin` on your `PATH`, you must then install these tools:

3. [`ko`](https://github.com/google/ko): For development. `ko` version v0.5.1 or
higher is required for `pipeline` to work correctly.

4. [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/): For
interacting with your kube cluster

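One possible way to install `ko` and `kubectl` is sketched below; the package
manager route is an assumption about your platform, so prefer the install
instructions linked above if in doubt:

```shell
# Homebrew route (macOS/Linux); versions may vary.
brew install ko kubectl

# Alternatively, build ko from source with Go modules (assumes a recent Go toolchain).
go install github.com/google/ko@latest
```
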
The user you are using to interact with your k8s cluster must be a cluster admin
to create role bindings:

```shell
# Using gcloud to get your current user
USER=$(gcloud config get-value core/account)
# Make that user a cluster admin
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole=cluster-admin \
--user="${USER}"
```

### Install in custom namespace

1. To install into a different namespace, you can use this script:

```shell
#!/usr/bin/env bash
set -e

# Set your target namespace here
TARGET_NAMESPACE=new-target-namespace

ko resolve -f config | sed -e '/kind: Namespace/!b;n;n;s/:.*/: '"${TARGET_NAMESPACE}"'/' | \
sed "s/namespace: tekton-pipelines$/namespace: ${TARGET_NAMESPACE}/" | \
kubectl apply -f-
kubectl set env deployments --all SYSTEM_NAMESPACE=${TARGET_NAMESPACE} -n ${TARGET_NAMESPACE}
```
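Once applied, a quick sanity check is to confirm the controller and webhook
Pods come up in the target namespace (the namespace name below matches the
example value above and is only illustrative):

```shell
kubectl get pods --namespace new-target-namespace
```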

### Checkout your fork

@@ -70,22 +149,6 @@ git remote set-url --push upstream no_push
_Adding the `upstream` remote sets you up nicely for regularly
[syncing your fork](https://help.github.com/articles/syncing-a-fork/)._

## Kubernetes cluster

The recommended configuration is:
@@ -170,67 +233,6 @@ To enable the Kubernetes that comes with Docker Desktop:
--user=$(gcloud config get-value core/account)
```
## Iterating
While iterating on the project, you may need to:
21 changes: 20 additions & 1 deletion cmd/controller/main.go
@@ -25,6 +25,8 @@ import (
"github.com/tektoncd/pipeline/pkg/reconciler/taskrun"
"github.com/tektoncd/pipeline/pkg/version"
corev1 "k8s.io/api/core/v1"
"k8s.io/client-go/rest"
"knative.dev/pkg/controller"
"knative.dev/pkg/injection"
"knative.dev/pkg/injection/sharedmain"
"knative.dev/pkg/signals"
@@ -48,6 +50,12 @@ var (
imageDigestExporterImage = flag.String("imagedigest-exporter-image", "", "The container image containing our image digest exporter binary.")
namespace = flag.String("namespace", corev1.NamespaceAll, "Namespace to restrict informer to. Optional, defaults to all namespaces.")
versionGiven = flag.String("version", "devel", "Version of Tekton running")
qps = flag.Int("kube-api-qps", int(rest.DefaultQPS), "Maximum QPS to the master from this client")
burst = flag.Int("kube-api-burst", rest.DefaultBurst, "Maximum burst for throttle")
threadsPerController = flag.Int("threads-per-controller", controller.DefaultThreadsPerController, "Threads (goroutines) to create per controller")
disableHighAvailability = flag.Bool("disable-ha", false, "Whether to disable high-availability functionality for this component. This flag will be deprecated "+
"and removed when we have promoted this feature to stable, so do not pass it without filing an "+
"issue upstream!")
)

func main() {
@@ -68,7 +76,18 @@ func main() {
if err := images.Validate(); err != nil {
log.Fatal(err)
}
controller.DefaultThreadsPerController = *threadsPerController

cfg := sharedmain.ParseAndGetConfigOrDie()
// multiply by 2, no of controllers being created
cfg.QPS = 2 * float32(*qps)
cfg.Burst = 2 * *burst

ctx := injection.WithNamespaceScope(signals.NewContext(), *namespace)
if !*disableHighAvailability {
ctx = sharedmain.WithHADisabled(ctx)
}
sharedmain.MainWithConfig(ctx, ControllerLogKey, cfg,
taskrun.NewController(*namespace, images),
pipelinerun.NewController(*namespace, images),
)
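For illustration only, here is a hedged sketch of how the new flags could be
passed to the controller binary. The flag names come from the definitions
above; the binary path and the values shown are assumptions, not part of this
commit:

```shell
# Hypothetical invocation; tune QPS/burst/threads for your cluster.
/ko-app/controller \
  -kube-api-qps 50 \
  -kube-api-burst 50 \
  -threads-per-controller 4 \
  -disable-ha=false
```
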
107 changes: 94 additions & 13 deletions cmd/entrypoint/README.md
@@ -1,10 +1,14 @@
# entrypoint

This binary is used to override the entrypoint of a container by
wrapping it and executing the original entrypoint command in a subprocess.

Tekton uses this to make sure `TaskRun`s' steps are executed in order, only
after sidecars are ready and previous steps have completed successfully.

## Flags

The following flags are available:

- `-entrypoint`: "original" command to be executed (as
entrypoint). This will be executed as a sub-process on `entrypoint`
@@ -16,20 +20,97 @@ The following flags are available :
will either execute the sub-process (in case of `{{wait_file}}`) or
skip the execution, write to `{{post_file}}.err` and return an error
(`exitCode` >= 0)
- `-wait_file_content`: expects the `wait_file` to contain actual
contents. It will continue watching for `wait_file` until it has
content.

Any extra positional arguments are passed to the original entrypoint command.

## Example

The following example of usage for `entrypoint` waits for the
`/tekton/tools/3` file to exist, executes the command `bash` with args
`echo` and `hello`, then writes the file `/tekton/tools/4`, or
`/tekton/tools/4.err` in case the command fails.

```shell
entrypoint \
-wait_file /tekton/tools/3 \
-post_file /tekton/tools/4 \
-entrypoint bash -- \
echo hello
```

## Waiting for Sidecars

In cases where the TaskRun's Pod has sidecar containers -- including, possibly,
injected sidecars that Tekton itself didn't specify -- the first step should
also wait until all those sidecars have reported as ready. Starting before
sidecars are ready could lead to flaky errors if steps rely on the sidecar
being ready to succeed.

To account for this, the Tekton controller starts TaskRun Pods with the first
step's entrypoint binary configured to wait for a special file provided by the
[Kubernetes Downward
API](https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#the-downward-api).
This allows Tekton to write a Pod annotation when all sidecars report as ready,
and for the value of that annotation to appear to the Pod as a file in a
Volume. To the Pod, that file always exists, but without content until the
annotation is set, so we instruct the entrypoint to wait for the `-wait_file`
to contain contents before proceeding.

### Example

The following example of usage for `entrypoint` waits for
`/tekton/downward/ready` file to exist and contain actual contents
(`-wait_file_content`), and executes the command `bash` with args
`echo` and `hello`, then writes the file `/tekton/tools/1`, or
`/tekton/tools/1.err` in case the command fails.

```shell
entrypoint \
-wait_file /tekton/downward/ready \
-wait_file_content \
-post_file /tekton/tools/1 \
-entrypoint bash -- \
echo hello
```

## `cp` Mode

In order to make the `entrypoint` binary available to the user's steps, it gets
copied to a Volume that's shared with all the steps' containers. This is done
in an `initContainer` pre-step that runs before steps start.

To reduce external dependencies, the `entrypoint` binary actually copies
_itself_ to the shared Volume. When executed with the positional args of `cp
<src> <dst>`, the `entrypoint` binary copies the `<src>` file to `<dst>` and
exits.
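As a rough sketch (the paths mirror the `initContainer` example below), that
copy step boils down to a single invocation:

```shell
# Copy the entrypoint binary itself into the shared tools volume, then exit.
entrypoint cp /ko-app/entrypoint /tekton/tools/entrypoint
```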

It's executed as an `initContainer` in the TaskRun's Pod like:

```yaml
initContainers:
- image: gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/entrypoint
  args:
  - cp
  - /ko-app/entrypoint  # <-- path to the entrypoint binary inside the image
  - /tekton/tools/entrypoint
  volumeMounts:
  - name: tekton-internal-tools
    mountPath: /tekton/tools
containers:
- image: user-image
  command:
  - /tekton/tools/entrypoint
  # ... args to entrypoint ...
  volumeMounts:
  - name: tekton-internal-tools
    mountPath: /tekton/tools
volumes:
- name: tekton-internal-tools
  volumeSource:
    emptyDir: {}
```