Fix markdownlint on all *.md files 🖇
- Add a local configuration that ignores more rules than the `test-infra` one
- Fix violations of the rules that are not ignored
- Fix some links too

Signed-off-by: Vincent Demeester <vdemeest@redhat.com>
vdemeester authored and knative-prow-robot committed Feb 28, 2019
1 parent 19dbd0d commit f0c1f80
Showing 14 changed files with 60 additions and 49 deletions.
11 changes: 3 additions & 8 deletions .github/pull_request_template.md
@@ -10,14 +10,9 @@ your descriptive commit message(s)! -->
These are the criteria that every PR should meet, please check them off as you
review them:

- [ ] Includes
[tests](https://github.com/knative/build-pipeline/blob/master/CONTRIBUTING.md#principles)
(if functionality changed/added)
- [ ] Includes
[docs](https://github.com/knative/build-pipeline/blob/master/CONTRIBUTING.md#principles)
(if user facing)
- [ ] Commit messages follow [commit message best
practices](https://github.com/knative/build-pipeline/blob/master/CONTRIBUTING.md#commit-messages)
- [ ] Includes [tests](https://github.com/knative/build-pipeline/blob/master/CONTRIBUTING.md#principles) (if functionality changed/added)
- [ ] Includes [docs](https://github.com/knative/build-pipeline/blob/master/CONTRIBUTING.md#principles) (if user facing)
- [ ] Commit messages follow [commit message best practices](https://github.com/knative/build-pipeline/blob/master/CONTRIBUTING.md#commit-messages)

_See [the contribution guide](https://github.com/knative/build-pipeline/blob/master/CONTRIBUTING.md)
for more details._
4 changes: 2 additions & 2 deletions README.md
@@ -24,7 +24,7 @@ Tekton Pipelines are **Typed**:
[kaniko](https://github.com/GoogleContainerTools/kaniko) v.s.
[buildkit](https://github.com/moby/buildkit))

## Want to start using Pipelines?
## Want to start using Pipelines

- [Installing Knative Pipelines](docs/install.md)
- Jump in with [the tutorial!](docs/tutorial.md)
@@ -34,7 +34,7 @@ Tekton Pipelines are **Typed**:
_See [our API compatibility policy](api_compatibility_policy.md) for info on the
stability level of the API._

## Want to contribute?
## Want to contribute

We are so excited to have you!

24 changes: 12 additions & 12 deletions api_compatibility_policy.md
@@ -7,15 +7,15 @@ therefore likely to change.

For these purposes the CRDs are divided into three groups:

- [`Build` and `BuildTemplate`] - from https://github.com/knative/build
- [`Build` and `BuildTemplate`] - from <https://github.com/knative/build>
- [`TaskRun`, `Task`, and `ClusterTask`] - "more stable"
- [`PipelineRun`, `Pipeline` and `PipelineResource`] - "less stable"

The use of `alpha`, `beta` and `GA` in this document is meant to correspond
roughly to
[the kubernetes API deprecation policies](https://kubernetes.io/docs/reference/using-api/deprecation-policy/#deprecating-a-flag-or-cli).

## What does compatibility mean here?
## What does compatibility mean here

This policy is about changes to the APIs of the
[CRDs](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/),
@@ -29,10 +29,10 @@ this process may become less painful).
The current process would look something like:

1. Backup the instances
2. Delete the instances
3. Deploy the new type definitions
4. Update the backups with the new spec
5. Deploy the updated backups
1. Delete the instances
1. Deploy the new type definitions
1. Update the backups with the new spec
1. Deploy the updated backups
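
For illustration, the five steps above might look roughly like this on the command line (the resource kind `taskruns` and the file names are placeholders, not something this policy prescribes):

```bash
# Illustrative sketch of the upgrade flow above; "taskruns" and the backup file
# name are placeholders — adapt them to whichever CRDs are being upgraded.
kubectl get taskruns -o yaml > taskruns-backup.yaml   # 1. back up the instances
kubectl delete taskruns --all                         # 2. delete the instances
kubectl apply --filename release.yaml                 # 3. deploy the new type definitions
# 4. edit taskruns-backup.yaml so each instance matches the new spec
kubectl apply --filename taskruns-backup.yaml         # 5. deploy the updated backups
```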

_This policy does not yet cover other functionality which could be considered
part of the API, but isn’t part of the CRD definition (e.g. a contract re. files
@@ -52,10 +52,10 @@ particularly to support embedding of Build resources within
## `TaskRun`, `Task`, and `ClusterTask`

The CRD types
[`Task`](https://github.com/knative/build-pipeline/blob/master/docs/Concepts.md#task),
[`ClusterTask`](https://github.com/knative/build-pipeline/blob/master/docs/Concepts.md#clustertask),
[`Task`](https://github.com/knative/build-pipeline/blob/master/docs/tasks.md),
[`ClusterTask`](https://github.com/knative/build-pipeline/blob/master/docs/tasks.md#clustertask),
and
[`TaskRun`](https://github.com/knative/build-pipeline/blob/master/docs/Concepts.md#taskrun)
[`TaskRun`](https://github.com/knative/build-pipeline/blob/master/docs/taskruns.md)
should be considered `alpha`, however these types are more stable than
`Pipeline`, `PipelineRun`, and `PipelineResource`.

@@ -85,10 +85,10 @@ between releases.
## `PipelineRun`, `Pipeline` and `PipelineResource`

The CRD types
[`Pipeline`](https://github.com/knative/build-pipeline/blob/master/docs/Concepts.md#pipeline),
[`PipelineRun`](https://github.com/knative/build-pipeline/blob/master/docs/Concepts.md#pipelinerun)
[`Pipeline`](https://github.com/knative/build-pipeline/blob/master/docs/pipelines.md),
[`PipelineRun`](https://github.com/knative/build-pipeline/blob/master/docs/pipelines.md)
and
[`PipelineResource`](https://github.com/knative/build-pipeline/blob/master/docs/Concepts.md#pipelineresources)
[`PipelineResource`](https://github.com/knative/build-pipeline/blob/master/docs/resources.md#pipelineresources)
should be considered `alpha`, i.e. the API should be considered unstable.
Backwards incompatible changes can be introduced between releases, however they
must include a backwards incompatibility warning in the release notes.
2 changes: 1 addition & 1 deletion docs/auth.md
@@ -331,7 +331,7 @@ Given URLs, usernames, and passwords of the form: `https://url{n}.com`,
```
=== ~/.gitconfig ===
[credential]
helper = store
helper = store
[credential "https://url1.com"]
username = "user1"
[credential "https://url2.com"]
2 changes: 1 addition & 1 deletion docs/container-contract.md
@@ -1,6 +1,6 @@
# Container Contract

Each container image used as a step in a [`Task`](task.md) must comply with a
Each container image used as a step in a [`Task`](tasks.md) must comply with a
specific contract.

## Entrypoint
6 changes: 3 additions & 3 deletions docs/developers/README.md
@@ -3,7 +3,7 @@
This document is aimed at helping maintainers/developers of project understand
the complexity.

## How are resources shared between tasks?
## How are resources shared between tasks

`PipelineRun` uses PVC to share resources between tasks. PVC volume is mounted
on path `/pvc` by PipelineRun.
@@ -33,15 +33,15 @@ creation of a persistent volume could be slower than uploading/downloading files
to a bucket, or if the cluster is running in multiple zones, the access to
the persistent volume can fail.

## How are inputs handled?
## How are inputs handled

Input resources, like source code (git) or artifacts, are dumped at path
`/workspace/task_resource_name`. Resource definition in task can have custom
target directory. If `targetPath` is mentioned in task input then the
controllers are responsible for adding container definitions to create
directories and also to fetch the versioned artifacts into that directory.

## How are outputs handled?
## How are outputs handled

Output resources, like source code (git) or artifacts (storage resource), are
expected in directory path `/workspace/output/resource_name`.
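
To make the paths above concrete, here is a small sketch of what a step container would see at runtime (the resource names `my-git-source` and `my-image` are invented for this example):

```bash
# Sketch only: the paths described above, with invented resource names.
ls /workspace/my-git-source    # input resource, or its custom targetPath if one was set
ls /workspace/output/my-image  # output resources are expected under /workspace/output/
ls /pvc                        # PVC mounted by the PipelineRun to share data between tasks
```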
6 changes: 4 additions & 2 deletions docs/install.md
@@ -11,9 +11,11 @@ To add the Tekton Pipelines component to an existing cluster:
command to install
[Tekton Pipelines](https://github.com/knative/build-pipeline) and its
dependencies:

```bash
kubectl apply --filename https://storage.googleapis.com/knative-releases/build-pipeline/latest/release.yaml
```

1. Run the
[`kubectl get`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get)
command to monitor the Tekton Pipelines components until all of the
@@ -35,11 +37,11 @@ You are now ready to create and run Tekton Pipelines:

## Configuring Tekton Pipelines

### How are resources shared between tasks?
### How are resources shared between tasks

Pipelines need a way to share resources between tasks. The alternatives are a
[Persistent volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)
or a (GCS storage bucket)[https://cloud.google.com/storage/]
or a [GCS storage bucket](https://cloud.google.com/storage/)

The PVC option does not require any configuration, but the GCS storage bucket
can be configured using a ConfigMap with the name `config-artifact-bucket` with
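
As a purely hypothetical sketch of the bucket option (only the ConfigMap name comes from the text above; the rest of that sentence is collapsed, so the `location` key and the bucket URL below are assumptions):

```bash
# Hypothetical example — the ConfigMap name is from the docs above, but the
# "location" key and bucket URL are assumptions; check install.md for the real
# schema, and create the ConfigMap in the namespace the controller runs in.
kubectl create configmap config-artifact-bucket \
  --from-literal=location=gs://my-tekton-artifacts
```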
1 change: 1 addition & 0 deletions docs/pipelineruns.md
@@ -47,6 +47,7 @@ following fields:
scheduled on that node. More info:
<https://kubernetes.io/docs/concepts/configuration/assign-pod-node/>
- [`affinity`] - The pod's scheduling constraints. More info:

<https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature>

[kubernetes-overview]:
24 changes: 13 additions & 11 deletions docs/resources.md
@@ -1,7 +1,7 @@
# PipelineResources

`PipelinesResources` in a pipeline are the set of objects that are going to be
used as inputs to a [`Task`](task.md) and can be output by a `Task`.
used as inputs to a [`Task`](tasks.md) and can be output by a `Task`.

A `Task` can have multiple inputs and outputs.

@@ -39,7 +39,7 @@ following fields:
`PipelineResource`

[kubernetes-overview]:
https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields
<https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields>

## Resource Types

@@ -80,7 +80,7 @@ Params that can be added are the following:
1. `url`: represents the location of the git repository, you can use this to
change the repo, e.g. [to use a fork](#using-a-fork)
2. `revision`: Git
1. `revision`: Git
[revision](https://git-scm.com/docs/gitrevisions#_specifying_revisions)
(branch, tag, commit SHA or ref) to clone. You can use this to control what
commit [or branch](#using-a-branch) is used. _If no revision is specified,
@@ -132,15 +132,15 @@ spec:
### Image Resource

An Image resource represents an image that lives in a remote repository. It is
usually used as [a `Task` `output`](concepts.md#task) for `Tasks` that build
usually used as [a `Task` `output`](tasks.md#outputs) for `Tasks` that build
images. This allows the same `Tasks` to be used to generically push to any
registry.

Params that can be added are the following:

1. `url`: The complete path to the image, including the registry and the image
tag
2. `digest`: The
1. `digest`: The
[image digest](https://success.docker.com/article/images-tagging-vs-digests)
which uniquely identifies a particular build of an image with a particular
tag. _While this can be provided as a parameter, there is not yet a way to
@@ -314,10 +314,11 @@ spec:
Params that can be added are the following:

1. `location`: represents the location of the blob storage.
2. `type`: represents the type of blob storage. For GCS storage resource this
1. `type`: represents the type of blob storage. For GCS storage resource this
value should be set to `gcs`.
3. `dir`: represents whether the blob storage is a directory or not. By default
1. `dir`: represents whether the blob storage is a directory or not. By default
storage artifact is considered not a directory.

- If artifact is a directory then `-r`(recursive) flag is used to copy all
files under source directory to GCS bucket. Eg:
`gsutil cp -r source_dir gs://some-bucket`
@@ -335,13 +336,13 @@ service account.
[official documentation](https://cloud.google.com/compute/docs/access/service-accounts)
on how to create service accounts and configuring IAM permissions to access
bucket.
2. Create a Kubernetes secret from downloaded service account json key
1. Create a Kubernetes secret from downloaded service account json key

```bash
kubectl create secret generic bucket-sa --from-file=./service_account.json
```

3. To access GCS private bucket environment variable
1. To access GCS private bucket environment variable
[`GOOGLE_APPLICATION_CREDENTIALS`](https://cloud.google.com/docs/authentication/production)
should be set so apply above created secret to the GCS storage resource under
`fieldName` key.
@@ -407,10 +408,11 @@ spec:
Params that can be added are the following:

1. `location`: represents the location of the blob storage.
2. `type`: represents the type of blob storage. For BuildGCS, this value should
1. `type`: represents the type of blob storage. For BuildGCS, this value should
be set to `build-gcs`
3. `artifactType`: represent the type of GCS resource. Right now, we support
1. `artifactType`: represent the type of GCS resource. Right now, we support
following types:

- `Archive`:
- Archive indicates that resource fetched is an archive file. Currently,
Build GCS resource only supports `.zip` archive.
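
For illustration, a GCS-backed storage `PipelineResource` that wires together the params and the `bucket-sa` secret discussed above might look roughly like this (the `apiVersion`, resource name, and bucket URL are assumptions, not taken from this diff):

```bash
# Rough sketch only — apiVersion, metadata.name and the bucket URL are
# assumptions; the params and secret fields follow the descriptions above.
cat <<EOF | kubectl apply --filename -
apiVersion: pipeline.knative.dev/v1alpha1  # assumption: use whatever the installed CRDs expose
kind: PipelineResource
metadata:
  name: my-gcs-bucket
spec:
  type: storage
  params:
    - name: type
      value: gcs
    - name: location
      value: gs://some-private-bucket
    - name: dir
      value: "y"
  secrets:
    - fieldName: GOOGLE_APPLICATION_CREDENTIALS
      secretName: bucket-sa
      secretKey: service_account.json
EOF
```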
10 changes: 5 additions & 5 deletions docs/taskruns.md
@@ -53,12 +53,12 @@ following fields:
- [`nodeSelector`] - a selector which must be true for the pod to fit on a
node. The selector which must match a node's labels for the pod to be
scheduled on that node. More info:
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
<https://kubernetes.io/docs/concepts/configuration/assign-pod-node/>
- [`affinity`] - the pod's scheduling constraints. More info:
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
<https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature>

[kubernetes-overview]:
https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields
<https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields>

### Specifying a task

@@ -372,8 +372,8 @@ the `Task`.
### Example Task Reuse

For the sake of illustrating re-use, here are several example
[`TaskRuns`](taskrun.md) (including referenced
[`PipelineResources`](resource.md)) instantiating the
[`TaskRuns`](taskruns.md) (including referenced
[`PipelineResources`](resources.md)) instantiating the
[`Task` (`dockerfile-build-and-push`) in the `Task` example docs](tasks.md#example-task).

Build `mchmarny/rester-tester`:
2 changes: 1 addition & 1 deletion docs/tasks.md
@@ -75,7 +75,7 @@ following fields:
available to your build.

[kubernetes-overview]:
https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields
<https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields>

The following example is a non-working sample where most of the possible
configuration fields are used:
4 changes: 2 additions & 2 deletions hack/release.md
@@ -71,12 +71,12 @@ for your GitHub username, password, and possibly 2-factor authentication
challenge before the release is published.

Since we are currently using
[the knative release scripts](vendor/github.com/knative/test-infra/scripts/release.sh#L404)
[the knative release scripts](../vendor/github.com/knative/test-infra/scripts/release.sh#L404)
the title of the release will be _Knative Build Pipeline release vX.Y.Z_ and we
will manually need to change this to _Tekton Pipeline release vX.Y.Z_. It will
also be tagged _vX.Y.Z_ (both on GitHub and as a git annotated tag).

#### Release notes
### Release notes

Release notes will need to be manually collected for the release by looking at
the `Release Notes` section of every PR which has been merged between the last
2 changes: 1 addition & 1 deletion test/logs/README.md
@@ -1,4 +1,4 @@
# How to follow log outputs?
# How to follow log outputs

- [How to follow PipelineRun logs?](#pipelinerun)
- [How to follow TaskRun logs?](#taskrun)
11 changes: 11 additions & 0 deletions test/markdown-lint-config.rc
@@ -0,0 +1,11 @@
# For help, see
# https://github.com/markdownlint/markdownlint/blob/master/docs/configuration.md

# The following rules are ignored
# MD004: Unordered list style
# MD005: Inconsistent indentation for list items at the same level
# MD007: Unordered list indentation
# MD013: Ignore long lines
# MD036: Emphasis used instead of a header
# MD039: Spaces inside link text
rules "~MD004", "~MD005", "~MD007", "~MD013", "~MD036", "~MD039"
