From 827e6e6a77027f5747d869cd8d6f93e323a7c074 Mon Sep 17 00:00:00 2001
From: Parth Patel <88045217+pxp928@users.noreply.github.com>
Date: Thu, 17 Feb 2022 10:49:32 -0500
Subject: [PATCH] pull in upstream changes (#5)

* cleanup - ApplyContext parameters

  Instead of passing around the entire resolvedTaskResources, which is not
  necessary at this point, just pass the task name. No functional changes
  expected.

* use podtemplate imagepullsecrets to resolve entrypoint

* Update write_test.go

  Fixed a typo.

* Fix links to "Why Aren't PipelineResources in Beta?"

  Links to the "Why Aren't PipelineResources in Beta?" section in the docs
  should have `aren-t` in the fragment instead of `arent`. This can be
  confirmed by clicking the link icon beside the heading and checking the
  browser address bar.

* Fix tekton_pipelines_controller_taskrun_count recount bug

  Added a before-and-after condition check to avoid the taskrun metrics
  recount bug.

* debug is an alpha feature

  Document that the debug feature is still alpha. The feature was introduced
  in Pipelines release 0.26 behind the enable-api-fields flag.

* Consider osversion when determining platform uniqueness

  Prior to this change, a reference (such as `golang:1.17`) that provided two
  images sharing the same OS+architecture+variant would be considered
  invalid, even if the two images' platforms differed on, for example,
  osversion (used by Windows images). This change relaxes our platform
  uniqueness logic to take this into account, unblocking Linux users from
  running such images.

  There is still an issue for Windows users, however: when they attempt to
  run these images, they will fail to find the correct command taking their
  osversion into account. Workarounds in this case include specifying a
  single-platform image, or avoiding multi-platform images that provide two
  Windows images differing only by osversion.
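The relaxed uniqueness check described above can be sketched as follows. This is an illustrative Go sketch, not the actual go-containerregistry code: the type and function names are hypothetical, but the idea — including osversion in the deduplication key — is the one the commit describes.

```go
package main

import "fmt"

// platform holds the fields considered for uniqueness. OSVersion is the
// field this change adds to the comparison (names are illustrative).
type platform struct {
	OS           string
	Architecture string
	Variant      string
	OSVersion    string
}

// key builds a deduplication key. Including OSVersion lets two Windows
// images that differ only by osversion coexist in one image index.
func key(p platform) string {
	return fmt.Sprintf("%s/%s/%s:%s", p.OS, p.Architecture, p.Variant, p.OSVersion)
}

// allUnique reports whether no two platforms collide on the full key.
func allUnique(ps []platform) bool {
	seen := map[string]bool{}
	for _, p := range ps {
		k := key(p)
		if seen[k] {
			return false
		}
		seen[k] = true
	}
	return true
}

func main() {
	// Two Windows images differing only by osversion: valid after this change.
	ps := []platform{
		{OS: "windows", Architecture: "amd64", OSVersion: "10.0.17763.2565"},
		{OS: "windows", Architecture: "amd64", OSVersion: "10.0.19042.1526"},
	}
	fmt.Println(allUnique(ps)) // prints "true"
}
```

Before this change, the key would have stopped at OS+architecture+variant, so the two entries above would have collided and the image would have been rejected.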
  This also updates our selection logic to take into account slightly
  malformed multi-platform images that specify two images with the same
  OS+architecture[+variant], so long as the duplicate entries describe the
  same image by digest (e.g., anchore/syft:v0.37.10).

* [TEP-0059] Scope `when` expressions to `Task` only

  In [TEP-0007: Conditions Beta][tep-0007], we introduced `when` expressions
  to guard execution of `Tasks` in `Pipelines`. To align with `Conditions`,
  we scoped `when` expressions to the guarded `Task` and its dependent
  `Tasks`. In [TEP-0059: Skipping Strategies][tep-0059], we proposed changing
  the scope of `when` expressions to the guarded `Task` only. This was
  implemented in https://github.com/tektoncd/pipeline/pull/4085. We
  provided a feature flag, `scope-when-expressions-to-task`, to support
  migration. It defaulted to `false` for 9 months per our
  [Beta API compatibility policy][policy], meaning that we continued to
  guard the `Task` and its dependent `Tasks`. In this change, we flip the
  flag to `true` to guard the `Task` only by default.

  [tep-0007]: https://github.com/tektoncd/community/blob/main/teps/0007-conditions-beta.md
  [tep-0059]: https://github.com/tektoncd/community/blob/main/teps/0059-skipping-strategies.md
  [policy]: https://github.com/tektoncd/pipeline/blob/main/api_compatibility_policy.md

* Update the `scope-when-expressions-to-task` feature flag docs

  In https://github.com/tektoncd/pipeline/pull/4580, we changed the flag
  default from "false" to "true". However, the documentation above the flag
  still described what setting it to "true" would do. In this change, we
  update the documentation to focus on the non-default option that users can
  choose to set, "false". We also add a reference to TEP-0059 and relevant
  docs for more details.
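Operators who still need the old scoping behavior during migration can set the flag back in the feature-flags ConfigMap. A minimal sketch (the flag and ConfigMap names are real; see the `docs/install.md` changes later in this patch for the authoritative description):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-flags
  namespace: tekton-pipelines
data:
  # Opt back into the deprecated behavior of guarding a Task and its
  # dependent Tasks. The default is now "true" (guard the Task only),
  # and this flag is scheduled for removal.
  scope-when-expressions-to-task: "false"
```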
* Patch temp GOPATH hack script to handle nounset option

  Prior to this commit, setup-temporary-gopath.sh used the GOPATH variable
  without first checking that it was set. When `set -o nounset` is in
  effect, this causes the script to exit with an error. This commit adds a
  variable wrapping $GOPATH and setting a default if it is missing, which
  works around `nounset`.

* use helper functions - MarkResource*

  Replace updating the conditions directly with the helper functions
  MarkResourceRunning and MarkRunning. No functional change expected.

* Update the deprecations table

  The tekton.dev/task label for ClusterTasks was removed in
  https://github.com/tektoncd/pipeline/issues/2533, but the table had not
  been updated yet, so do it here.

  Signed-off-by: Andrea Frittoli

* Remove deprecated flags home-env and working-dir

  This change removes two flags:

  - disable-home-env-overwrite
  - disable-working-dir-overwrite

  These flags were originally introduced defaulting to false, and the
  feature associated with them was deprecated. Nine months later (per
  policy), in Dec 2020, the default value was switched to true and the flags
  were deprecated. Nine months after that, we are finally removing the
  flags.

  Signed-off-by: Andrea Frittoli

* Fix for some arm64 machines

  As noted in GoogleContainerTools/distroless#657, distroless/base:debug
  used to ship an arm32 busybox binary in its arm64 image, which does not
  work on some arm64 machines, e.g. Ubuntu 21 arm64 on Parallels Desktop on
  Apple Silicon M1. It caused this error:

    $ docker run -it gcr.io/distroless/base@sha256:cfdc553400d41b47fd231b028403469811fcdbc0e69d66ea8030c5a0b5fbac2b
    standard_init_linux.go:228: exec user process caused: exec format error

  GoogleContainerTools/distroless#960 fixes this bug. Hence, update the
  distroless/base:debug image used by Tekton Pipelines in this commit.
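The nounset workaround from the setup-temporary-gopath.sh commit above boils down to the `${VAR:-}` default-expansion pattern. A minimal sketch (the variable handling mirrors the commit's description; the fallback directory choice here is illustrative, not necessarily what the real script does):

```shell
#!/usr/bin/env bash
# With nounset active, referencing an unset variable is a fatal error...
set -o nounset

# ...but ${GOPATH:-} expands to "" when GOPATH is unset instead of
# aborting, so the script can safely test it and pick a default.
gopath="${GOPATH:-}"

if [ -z "${gopath}" ]; then
  # Fall back to a temporary directory when GOPATH is not set.
  gopath="$(mktemp -d)"
fi

echo "using GOPATH: ${gopath}"
```

Without the `:-` default, `[ -z "$GOPATH" ]` itself would trip `nounset` before the check could run.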
* Add Step and Sidecar Overrides to TaskRun API

  This commit adds TaskRunStepOverrides and TaskRunSidecarOverrides to
  TaskRun.Spec and PipelineRun.Spec.PipelineTaskRunSpec, gated behind the
  "alpha" API flag. This is part 1 of implementing TEP-0094: Configuring
  Resource Requirements at Runtime.
  https://github.com/tektoncd/community/blob/main/teps/0094-configuring-resources-at-runtime.md

* WIP spire

  Signed-off-by: Dan Lorenc

  changed to use spiffe-csi

  Add pod SPIFFE id annotation for workload registrar

  Signed-off-by: Brandon Lum

  removed spire jwt

  updated obtaining trust bundle

  Added SPIFFE entry registration and SVID entrypointer backoff (#2)

  * Added SPIFFE entry registration and SVID entrypointer backoff
    Signed-off-by: Brandon Lum
  * Allow SPIRE configuration through opts
    Signed-off-by: Brandon Lum
  * Add validation of SpireConfig
    Signed-off-by: Brandon Lum

* merged upstream

  Signed-off-by: pxp928

* added manifest check

* [WIP] Add SPIRE docs (#4)

* merged upstream

* Add several features/optimizations for SPIRE (#3)

  * Record pod latency before SPIRE entry creation
    Signed-off-by: Brandon Lum
  * SPIRE client connection caching
    Signed-off-by: Brandon Lum
  * Optimize spire entry creation
    Signed-off-by: Brandon Lum
  * Add TTL for workload entry based on taskrun timeout
    Signed-off-by: Brandon Lum
  * Add SPIRE non-falsification doc
    Signed-off-by: Brandon Lum

  Co-authored-by: pxp928

* merged upstream

  Signed-off-by: pxp928

Co-authored-by: pritidesai
Co-authored-by: Yongxuan Zhang
Co-authored-by: Anupama Baskar
Co-authored-by: Alan Greene
Co-authored-by: Khurram Baig
Co-authored-by: Jason Hall
Co-authored-by: Jerop
Co-authored-by: Scott
Co-authored-by: Andrea Frittoli
Co-authored-by: Meng-Yuan Huang
Co-authored-by: Lee Bernick
Co-authored-by: Dan Lorenc
Co-authored-by: Brandon Lum
---
 cmd/entrypoint/main.go           |  4 +
 config/config-feature-flags.yaml | 12 -
 config/controller.yaml           |  4 +-
 docs/debug.md                    |  3 +
 docs/deprecations.md             |  4 -
 docs/install.md                  | 14 +-
 docs/migrating-v1alpha1-to-v1beta1.md | 2 +-
 docs/pipelineruns.md | 2 +
 docs/pipelines.md | 30 +-
 docs/podtemplates.md | 34 ++
 docs/resources.md | 4 +-
 docs/taskruns.md | 54 ++-
 go.mod | 10 +-
 go.sum | 24 +-
 hack/setup-temporary-gopath.sh | 7 +-
 pkg/apis/config/feature_flags.go | 16 +-
 pkg/apis/config/feature_flags_test.go | 10 -
 .../testdata/feature-flags-all-flags-set.yaml | 2 -
 pkg/apis/config/testdata/feature-flags.yaml | 2 -
 .../pipeline/v1beta1/openapi_generated.go | 122 ++++++-
 .../pipeline/v1beta1/pipelinerun_types.go | 8 +-
 .../v1beta1/pipelinerun_validation.go | 24 ++
 .../v1beta1/pipelinerun_validation_test.go | 127 +++++++
 pkg/apis/pipeline/v1beta1/swagger.json | 70 ++++
 pkg/apis/pipeline/v1beta1/taskrun_types.go | 28 ++
 .../pipeline/v1beta1/taskrun_validation.go | 79 +++--
 .../v1beta1/taskrun_validation_test.go | 114 ++++++-
 .../pipeline/v1beta1/zz_generated.deepcopy.go | 62 ++++
 pkg/internal/deprecated/override.go | 85 -----
 pkg/internal/deprecated/override_test.go | 309 ------------------
 pkg/pod/entrypoint_lookup.go | 6 +-
 pkg/pod/entrypoint_lookup_impl.go | 61 ++--
 pkg/pod/entrypoint_lookup_impl_test.go | 264 +++++++++++++++
 pkg/pod/entrypoint_lookup_test.go | 4 +-
 pkg/pod/pod.go | 16 +-
 pkg/pod/pod_test.go | 2 -
 pkg/reconciler/pipelinerun/pipelinerun.go | 12 +-
 .../pipelinerun/pipelinerun_test.go | 6 +-
 pkg/reconciler/taskrun/resources/apply.go | 4 +-
 .../taskrun/resources/apply_test.go | 41 +--
 pkg/reconciler/taskrun/taskrun.go | 34 +-
 pkg/reconciler/taskrun/taskrun_test.go | 134 --------
 pkg/taskrunmetrics/metrics.go | 13 +-
 pkg/taskrunmetrics/metrics_test.go | 118 ++++++-
 pkg/termination/write_test.go | 2 +-
 third_party/LICENSE | 27 ++
 .../vendor/golang.org/x/crypto/LICENSE | 27 ++
 third_party/vendor/golang.org/x/net/LICENSE | 27 ++
 .../vendor/golang.org/x/sys/cpu/LICENSE | 27 ++
 third_party/vendor/golang.org/x/text/LICENSE | 27 ++
 .../pkg/authn/keychain.go | 11 +-
 .../pkg/v1/google/auth.go | 3 +-
 .../pkg/v1/google/keychain.go | 31 +-
 .../pkg/v1/layout/write.go | 13 +-
 .../go-containerregistry/pkg/v1/platform.go | 54 ++-
 .../pkg/v1/remote/options.go | 2 +-
 .../pkg/v1/remote/transport/ping.go | 39 ++-
 .../pkg/v1/tarball/layer.go | 13 +-
 .../pkg/v1/zz_deepcopy_generated.go | 1 +
 .../klauspost/compress/.goreleaser.yml | 4 +
 .../github.com/klauspost/compress/README.md | 14 +
 .../klauspost/compress/huff0/decompress.go | 234 +++++++------
 .../klauspost/compress/zstd/bitreader.go | 15 +-
 .../klauspost/compress/zstd/bitwriter.go | 22 +-
 .../klauspost/compress/zstd/blockdec.go | 24 +-
 .../klauspost/compress/zstd/blockenc.go | 108 +++---
 .../klauspost/compress/zstd/decodeheader.go | 84 +++--
 .../klauspost/compress/zstd/enc_base.go | 24 +-
 .../klauspost/compress/zstd/enc_fast.go | 139 +-------
 .../compress/zstd/encoder_options.go | 10 +-
 .../klauspost/compress/zstd/fse_decoder.go | 2 +-
 .../klauspost/compress/zstd/fse_encoder.go | 5 +-
 .../zstd/internal/xxhash/xxhash_amd64.s | 1 +
 .../zstd/internal/xxhash/xxhash_arm64.s | 186 +++++++++++
 .../xxhash/{xxhash_amd64.go => xxhash_asm.go} | 8 +-
 .../zstd/internal/xxhash/xxhash_other.go | 4 +-
 .../klauspost/compress/zstd/seqdec.go | 4 +-
 .../grpc/attributes/attributes.go | 4 +-
 .../grpc/credentials/insecure/insecure.go | 5 -
 vendor/google.golang.org/grpc/dialoptions.go | 4 +-
 .../grpc/grpclog/loggerv2.go | 8 +-
 .../grpc/internal/envconfig/xds.go | 7 +
 .../grpc/internal/grpclog/grpclog.go | 8 +-
 .../grpc/internal/grpcutil/regex.go | 11 +-
 vendor/google.golang.org/grpc/regenerate.sh | 18 +-
 vendor/google.golang.org/grpc/version.go | 2 +-
 vendor/modules.txt | 16 +-
 87 files changed, 2071 insertions(+), 1185 deletions(-)
 delete mode 100644 pkg/internal/deprecated/override.go
 delete mode 100644 pkg/internal/deprecated/override_test.go
 create mode 100644 pkg/pod/entrypoint_lookup_impl_test.go
 create mode 100644 third_party/LICENSE
 create mode 100644 third_party/vendor/golang.org/x/crypto/LICENSE
 create mode 100644 third_party/vendor/golang.org/x/net/LICENSE
 create mode 100644 third_party/vendor/golang.org/x/sys/cpu/LICENSE
 create mode 100644 third_party/vendor/golang.org/x/text/LICENSE
 create mode 100644 vendor/github.com/klauspost/compress/zstd/internal/xxhash/xxhash_arm64.s
 rename vendor/github.com/klauspost/compress/zstd/internal/xxhash/{xxhash_amd64.go => xxhash_asm.go} (51%)

diff --git a/cmd/entrypoint/main.go b/cmd/entrypoint/main.go
index 00575104930..5fd6cb67cf5 100644
--- a/cmd/entrypoint/main.go
+++ b/cmd/entrypoint/main.go
@@ -110,6 +110,10 @@ func main() {
 		if err := json.Unmarshal([]byte(env), &cmds); err != nil {
 			log.Fatal(err)
 		}
+		// NB: This value contains OS/architecture and maybe variant.
+		// It doesn't include osversion, which is necessary to
+		// disambiguate two images both for e.g., Windows, that only
+		// differ by osversion.
 		plat := platforms.DefaultString()
 		var err error
 		cmd, err = selectCommandForPlatform(cmds, plat)
diff --git a/config/config-feature-flags.yaml b/config/config-feature-flags.yaml
index c96c7b5648a..f1fbede9df2 100644
--- a/config/config-feature-flags.yaml
+++ b/config/config-feature-flags.yaml
@@ -30,18 +30,6 @@ data:
   # https://github.com/tektoncd/pipeline/blob/main/docs/workspaces.md#affinity-assistant-and-specifying-workspace-order-in-a-pipeline
   # or https://github.com/tektoncd/pipeline/pull/2630 for more info.
   disable-affinity-assistant: "false"
-  # Setting this flag to "false" will allow Tekton to override your
-  # Task container's $HOME environment variable.
-  #
-  # See https://github.com/tektoncd/pipeline/issues/2013 for more
-  # info.
-  disable-home-env-overwrite: "true"
-  # Setting this flag to "false" will allow Tekton to override your
-  # Task container's working directory.
-  #
-  # See https://github.com/tektoncd/pipeline/issues/1836 for more
-  # info.
-  disable-working-directory-overwrite: "true"
   # Setting this flag to "true" will prevent Tekton scanning attached
   # service accounts and injecting any credentials it finds into your
   # Steps.
diff --git a/config/controller.yaml b/config/controller.yaml
index 58406b0031b..6dda19db436 100644
--- a/config/controller.yaml
+++ b/config/controller.yaml
@@ -76,9 +76,9 @@ spec:
         # This is gcr.io/google.com/cloudsdktool/cloud-sdk:302.0.0-slim
         "-gsutil-image", "gcr.io/google.com/cloudsdktool/cloud-sdk@sha256:27b2c22bf259d9bc1a291e99c63791ba0c27a04d2db0a43241ba0f1f20f4067f",
         # The shell image must be root in order to create directories and copy files to PVCs.
-        # gcr.io/distroless/base:debug as of October 21, 2021
+        # gcr.io/distroless/base:debug as of February 17, 2022
         # image shall not contains tag, so it will be supported on a runtime like cri-o
-        "-shell-image", "gcr.io/distroless/base@sha256:cfdc553400d41b47fd231b028403469811fcdbc0e69d66ea8030c5a0b5fbac2b",
+        "-shell-image", "gcr.io/distroless/base@sha256:3cebc059e7e52a4f5a389aa6788ac2b582227d7953933194764ea434f4d70d64",
         # for script mode to work with windows we need a powershell image
         # pinning to nanoserver tag as of July 15 2021
         "-shell-image-win", "mcr.microsoft.com/powershell:nanoserver@sha256:b6d5ff841b78bdf2dfed7550000fd4f3437385b8fa686ec0f010be24777654d6",
diff --git a/docs/debug.md b/docs/debug.md
index 3d110fc68cc..0e729577e8c 100644
--- a/docs/debug.md
+++ b/docs/debug.md
@@ -23,6 +23,9 @@ weight: 11
 `Debug` spec is used for troubleshooting and breakpointing runtime resources. This doc helps understand the inner workings of debug in Tekton. Currently only the `TaskRun` resource is supported.
 
+This is an alpha feature. The `enable-api-fields` feature flag [must be set to `"alpha"`](./install.md)
+to specify `debug` in a `taskRun`.
+
 ## Debugging TaskRuns
 
 The following provides explanation on how Debugging TaskRuns is possible through Tekton. To understand how to use
diff --git a/docs/deprecations.md b/docs/deprecations.md
index a3061c33378..b436af89d3f 100644
--- a/docs/deprecations.md
+++ b/docs/deprecations.md
@@ -19,12 +19,8 @@ being deprecated.
 | Feature Being Deprecated | Deprecation Announcement | [API Compatibility Policy](https://github.com/tektoncd/pipeline/tree/main/api_compatibility_policy.md) | Earliest Date or Release of Removal |
 | ------------------------ | ------------------------ | ------------------------------------------------------------------------------------------------------ | ----------------------------------- |
-| [`tekton.dev/task` label on ClusterTasks](https://github.com/tektoncd/pipeline/issues/2533) | [v0.12.0](https://github.com/tektoncd/pipeline/releases/tag/v0.12.0) | Beta | January 30 2021 |
 | [The `TaskRun.Status.ResourceResults.ResourceRef` field is deprecated and will be removed.](https://github.com/tektoncd/pipeline/issues/2694) | [v0.14.0](https://github.com/tektoncd/pipeline/releases/tag/v0.14.0) | Beta | April 30 2021 |
 | [The `PipelineRun.Spec.ServiceAccountNames` field is deprecated and will be removed.](https://github.com/tektoncd/pipeline/issues/2614) | [v0.15.0](https://github.com/tektoncd/pipeline/releases/tag/v0.15.0) | Beta | May 15 2021 |
 | [`Conditions` CRD is deprecated and will be removed. Use `when` expressions instead.](https://github.com/tektoncd/community/blob/main/teps/0007-conditions-beta.md) | [v0.16.0](https://github.com/tektoncd/pipeline/releases/tag/v0.16.0) | Alpha | Nov 02 2020 |
-| [The `disable-home-env-overwrite` flag will be removed](https://github.com/tektoncd/pipeline/issues/2013) | [v0.24.0](https://github.com/tektoncd/pipeline/releases/tag/v0.24.0) | Beta | February 10 2022 |
-| [The `disable-working-dir-overwrite` flag will be removed](https://github.com/tektoncd/pipeline/issues/1836) | [v0.24.0](https://github.com/tektoncd/pipeline/releases/tag/v0.24.0) | Beta | February 10 2022 |
-| [The `scope-when-expressions-to-task` flag will be flipped from "false" to "true"](https://github.com/tektoncd/pipeline/issues/4461) | [v0.27.0](https://github.com/tektoncd/pipeline/releases/tag/v0.27.0) | Beta | February 10 2022 |
 | [The `scope-when-expressions-to-task` flag will be removed](https://github.com/tektoncd/pipeline/issues/4461) | [v0.27.0](https://github.com/tektoncd/pipeline/releases/tag/v0.27.0) | Beta | March 10 2022 |
 | [`PipelineResources` are deprecated.](https://github.com/tektoncd/community/blob/main/teps/0074-deprecate-pipelineresources.md) | [v0.30.0](https://github.com/tektoncd/pipeline/releases/tag/v0.30.0) | Alpha | Dec 20 2021 |
diff --git a/docs/install.md b/docs/install.md
index 0267f9d4a11..9828bfeedd9 100644
--- a/docs/install.md
+++ b/docs/install.md
@@ -338,14 +338,6 @@ To customize the behavior of the Pipelines Controller, modify the ConfigMap `fea
 node in the cluster must have an appropriate label matching `topologyKey`. If some or all
 nodes are missing the specified `topologyKey` label, it can lead to unintended behavior.
 
-- `disable-home-env-overwrite` - set this flag to `false` to allow Tekton
-to override the `$HOME` environment variable for the containers executing your `Steps`.
-The default is `true`. For more information, see the [associated issue](https://github.com/tektoncd/pipeline/issues/2013).
-
-- `disable-working-directory-overwrite` - set this flag to `false` to allow Tekton
-to override the working directory for the containers executing your `Steps`.
-The default value is `true`. For more information, see the [associated issue](https://github.com/tektoncd/pipeline/issues/1836).
-
 - `running-in-environment-with-injected-sidecars`: set this flag to `"true"` to allow the
 Tekton controller to set the `tekton.dev/ready` annotation at pod creation time for
 TaskRuns with no Sidecars specified. Enabling this option should decrease the time it takes for a TaskRun to
@@ -378,7 +370,7 @@ most stable features to be used. Set it to "alpha" to allow [alpha features](#alpha-features) to be used.
 
 - `scope-when-expressions-to-task`: set this flag to "true" to scope `when` expressions to guard a `Task` only. Set it
-  to "false" to guard a `Task` and its dependent `Tasks`. It defaults to "false". For more information, see [guarding
+  to "false" to guard a `Task` and its dependent `Tasks`. It defaults to "true". For more information, see [guarding
   `Task` execution using `when` expressions](pipelines.md#guard-task-execution-using-whenexpressions).
 
 For example:
 
@@ -389,8 +381,6 @@ kind: ConfigMap
 metadata:
   name: feature-flags
 data:
-  disable-home-env-overwrite: "true" # Tekton will not override the $HOME variable for individual Steps.
-  disable-working-directory-overwrite: "true" # Tekton will not override the working directory for individual Steps.
   enable-api-fields: "alpha" # Allow alpha fields to be used in Tasks and Pipelines.
 ```
 
@@ -413,6 +403,8 @@ Features currently in "alpha" are:
 | [Implicit `Parameters`](./taskruns.md#implicit-parameters) | [TEP-0023](https://github.com/tektoncd/community/blob/main/teps/0023-implicit-mapping.md) | [v0.28.0](https://github.com/tektoncd/pipeline/releases/tag/v0.28.0) | |
 | [Windows Scripts](./tasks.md#windows-scripts) | [TEP-0057](https://github.com/tektoncd/community/blob/main/teps/0057-windows-support.md) | [v0.28.0](https://github.com/tektoncd/pipeline/releases/tag/v0.28.0) | |
 | [Remote Tasks](./taskruns.md#remote-tasks) and [Remote Pipelines](./pipelineruns.md#remote-pipelines) | [TEP-0060](https://github.com/tektoncd/community/blob/main/teps/0060-remote-resolutiond.md) | | |
+| [Debug](./debug.md) | [TEP-0042](https://github.com/tektoncd/community/blob/main/teps/0042-taskrun-breakpoint-on-failure.md) | [v0.26.0](https://github.com/tektoncd/pipeline/releases/tag/v0.26.0) | |
+| [Step and Sidecar Overrides](./taskruns.md#overriding-task-steps-and-sidecars) | [TEP-0094](https://github.com/tektoncd/community/blob/main/teps/0094-specifying-resource-requirements-at-runtime.md) | | |
 
 ## Configuring High Availability
 
diff --git a/docs/migrating-v1alpha1-to-v1beta1.md b/docs/migrating-v1alpha1-to-v1beta1.md
index 3e349a67efe..99b84ec96d6 100644
--- a/docs/migrating-v1alpha1-to-v1beta1.md
+++ b/docs/migrating-v1alpha1-to-v1beta1.md
@@ -80,7 +80,7 @@ Since then, **`PipelineResources` have been deprecated**. We encourage users to
 features instead of `PipelineResources`. Read more about the deprecation in
 [TEP-0074](https://github.com/tektoncd/community/blob/main/teps/0074-deprecate-pipelineresources.md).
 _More on the reasoning and what's left to do in
-[Why aren't PipelineResources in Beta?](resources.md#why-arent-pipelineresources-in-beta)._
+[Why aren't PipelineResources in Beta?](resources.md#why-aren-t-pipelineresources-in-beta)._
 
 To ease migration away from `PipelineResources` [some types have an equivalent `Task` in the Catalog](#replacing-pipelineresources-with-tasks).
diff --git a/docs/pipelineruns.md b/docs/pipelineruns.md
index 7ab3c97081b..1a2ee740e0b 100644
--- a/docs/pipelineruns.md
+++ b/docs/pipelineruns.md
@@ -472,6 +472,8 @@ spec:
 ```
 If used with this `Pipeline`, `build-task` will use the task specific `PodTemplate` (where `nodeSelector` has `disktype` equal to `ssd`).
+`PipelineTaskRunSpec` may also contain `StepOverrides` and `SidecarOverrides`; see
+[Overriding `Task` `Steps` and `Sidecars`](./taskruns.md#overriding-task-steps-and-sidecars) for more information.
 
 ### Specifying `Workspaces`
 
diff --git a/docs/pipelines.md b/docs/pipelines.md
index de9bae82891..86ffe7ca74a 100644
--- a/docs/pipelines.md
+++ b/docs/pipelines.md
@@ -496,14 +496,24 @@ There are a lot of scenarios where `when` expressions can be really useful. Some
 
 #### Guarding a `Task` and its dependent `Tasks`
 
-When `when` expressions evaluate to `False`, the `Task` and its dependent `Tasks` will be skipped by default while the
-rest of the `Pipeline` will execute. Dependencies between `Tasks` can be either ordering ([`runAfter`](https://github.com/tektoncd/pipeline/blob/main/docs/pipelines.md#using-the-runafter-parameter))
+> :warning: **Scoping `when` expressions to a `Task` and its dependent `Tasks` is deprecated.**
+>
+> Consider migrating to scoping `when` expressions to the guarded `Task` only instead.
+> Read more in the [documentation](#guarding-a-task-only) and [TEP-0059: Skipping Strategies][tep-0059].
+>
+[tep-0059]: https://github.com/tektoncd/community/blob/main/teps/0059-skipping-strategies.md
+
+To guard a `Task` and its dependent `Tasks`, set the `scope-when-expressions-to-task` field in
+[`config/config-feature-flags.yaml`](install.md#customizing-the-pipelines-controller-behavior)
+to "false".
+
+When `when` expressions evaluate to `False`, and `scope-when-expressions-to-task` is set to "false", the `Task` and
+its dependent `Tasks` will be skipped while the rest of the `Pipeline` will execute. Dependencies between `Tasks` can
+be either ordering ([`runAfter`](https://github.com/tektoncd/pipeline/blob/main/docs/pipelines.md#using-the-runafter-parameter))
 or resource (e.g. [`Results`](https://github.com/tektoncd/pipeline/blob/main/docs/pipelines.md#using-results))
 dependencies, as further described in [configuring execution order](#configuring-the-task-execution-order). The global
-default scope of `when` expressions is set to a `Task` and its dependent`Tasks`; `scope-when-expressions-to-task` field
-in [`config/config-feature-flags.yaml`](install.md#customizing-the-pipelines-controller-behavior) defaults to "false".
-
-**Note:** Scoping `when` expressions to a `Task` and its dependent `Tasks` is deprecated
+default scope of `when` expressions is set to a `Task` only; `scope-when-expressions-to-task` field in
+[`config/config-feature-flags.yaml`](install.md#customizing-the-pipelines-controller-behavior) defaults to "true".
 
 To guard a `Task` and its dependent Tasks:
 - cascade the `when` expressions to the specific dependent `Tasks` to be guarded as well
@@ -646,9 +656,7 @@ tasks:
 
 #### Guarding a `Task` only
 
-To guard a `Task` only and unblock execution of its dependent `Tasks`, set the global default scope of `when` expressions
-to `Task` using the `scope-when-expressions-to-task` field in [`config/config-feature-flags.yaml`](install.md#customizing-the-pipelines-controller-behavior)
-by changing it to "true"
+When `when` expressions evaluate to `False`, the `Task` will be skipped and:
 - The ordering-dependent `Tasks` will be executed
 - The resource-dependent `Tasks` (and their dependencies) will be skipped because of missing `Results` from
 the skipped parent `Task`. When we add support for [default `Results`](https://github.com/tektoncd/community/pull/240), then the
@@ -657,6 +665,8 @@ by changing it to "true"
 to handle the execution of the child `Task` in case the expected file is missing from the `Workspace` because the
 guarded parent `Task` is skipped.
 
+On the other hand, the rest of the `Pipeline` will continue executing.
+
 ```
 tests
  |
@@ -706,7 +716,7 @@ tasks:
     name: slack-msg
 ```
 
-With `when` expressions scoped to `Task`, if `manual-approval` is skipped, execution of it's dependent `Tasks`
+With `when` expressions scoped to `Task`, if `manual-approval` is skipped, execution of its dependent `Tasks`
 (`slack-msg`, `build-image` and `deploy-image`) would be unblocked regardless:
 - `build-image` and `deploy-image` should be executed successfully
 - `slack-msg` will be skipped because it is missing the `approver` `Result` from `manual-approval`
diff --git a/docs/podtemplates.md b/docs/podtemplates.md
index f200bf2c79e..31fa98e093f 100644
--- a/docs/podtemplates.md
+++ b/docs/podtemplates.md
@@ -95,6 +95,40 @@ Pod templates support fields listed in the table below.
+## Use `imagePullSecrets` to lookup entrypoint
+
+If no command is configured in the `Task` and `imagePullSecrets` is configured in the `podTemplate`, the Tekton controller will look up the entrypoint of the image using the `imagePullSecrets`. The Tekton controller's service account is granted access to secrets by default; see [this clusterrole](https://github.com/tektoncd/pipeline/blob/main/config/200-clusterrole.yaml) for reference. If the controller's service account is not granted access to secrets in a different namespace, you need to grant the access via a `RoleBinding`:
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: creds-getter
+  namespace: my-ns
+rules:
+- apiGroups: [""]
+  resources: ["secrets"]
+  resourceNames: ["creds"]
+  verbs: ["get"]
+```
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: creds-getter-binding
+  namespace: my-ns
+subjects:
+- kind: ServiceAccount
+  name: tekton-pipelines-controller
+  namespace: tekton-pipelines
+roleRef:
+  kind: Role
+  name: creds-getter
+  apiGroup: rbac.authorization.k8s.io
+```
+
 ---
 
 Except as otherwise noted, the content of this page is licensed under the
diff --git a/docs/resources.md b/docs/resources.md
index 33055535b49..17f01a5001e 100644
--- a/docs/resources.md
+++ b/docs/resources.md
@@ -33,7 +33,7 @@ For example:
 > which lists each PipelineResource type and a suggested option for replacing it.
 >
 > For more information on why PipelineResources are remaining alpha [see the description
-> of their problems, along with next steps, below](#why-arent-pipelineresources-in-beta).
+> of their problems, along with next steps, below](#why-aren-t-pipelineresources-in-beta).
 --------------------------------------------------------------------------------
 
@@ -52,7 +52,7 @@ For example:
 - [Storage Resource](#storage-resource)
   - [GCS Storage Resource](#gcs-storage-resource)
 - [Cloud Event Resource](#cloud-event-resource)
-- [Why Aren't PipelineResources in Beta?](#why-arent-pipelineresources-in-beta)
+- [Why Aren't PipelineResources in Beta?](#why-aren-t-pipelineresources-in-beta)
 
 ## Syntax
 
diff --git a/docs/taskruns.md b/docs/taskruns.md
index d75d48e7255..fd5297bf508 100644
--- a/docs/taskruns.md
+++ b/docs/taskruns.md
@@ -21,6 +21,7 @@ weight: 300
 - [Specifying a `Pod` template](#specifying-a-pod-template)
 - [Specifying `Workspaces`](#specifying-workspaces)
 - [Specifying `Sidecars`](#specifying-sidecars)
+  - [Overriding `Task` `Steps` and `Sidecars`](#overriding-task-steps-and-sidecars)
 - [Specifying `LimitRange` values](#specifying-limitrange-values)
 - [Configuring the failure timeout](#configuring-the-failure-timeout)
 - [Specifying `ServiceAccount` credentials](#specifying-serviceaccount-credentials)
@@ -306,7 +307,8 @@ spec:
 ### Specifying `Resource` limits
 
 Each Step in a Task can specify its resource requirements. See
-[Defining `Steps`](tasks.md#defining-steps)
+[Defining `Steps`](tasks.md#defining-steps). Resource requirements defined in Steps and Sidecars
+may be overridden by a TaskRun's StepOverrides and SidecarOverrides.
 
 ### Specifying a `Pod` template
 
@@ -399,6 +401,56 @@ inside the `Pod`. Only the above command is affected. The `Pod's` description co
 denotes a "Failed" status and the container statuses correctly denote their exit codes and reasons.
 
+### Overriding Task Steps and Sidecars
+
+**([alpha only](https://github.com/tektoncd/pipeline/blob/main/docs/install.md#alpha-features))**
+**Warning: This feature is still under development and is not yet functional. Do not use it.**
+
+A TaskRun can specify `StepOverrides` or `SidecarOverrides` to override Step or Sidecar
+configuration specified in a Task.
+
+For example, given the following Task definition:
+
+```yaml
+apiVersion: tekton.dev/v1beta1
+kind: Task
+metadata:
+  name: image-build-task
+spec:
+  steps:
+    - name: build
+      image: gcr.io/kaniko-project/executor:latest
+  sidecars:
+    - name: logging
+      image: my-logging-image
+```
+
+An example TaskRun definition could look like:
+
+```yaml
+apiVersion: tekton.dev/v1beta1
+kind: TaskRun
+metadata:
+  name: image-build-taskrun
+spec:
+  taskRef:
+    name: image-build-task
+  stepOverrides:
+    - name: build
+      resources:
+        requests:
+          memory: 1Gi
+  sidecarOverrides:
+    - name: logging
+      resources:
+        requests:
+          cpu: 100m
+        limits:
+          cpu: 500m
+```
+
+`StepOverrides` and `SidecarOverrides` must include the `name` field and may include `resources`.
+No other fields can be overridden.
+
 ### Specifying `LimitRange` values
 
 In order to only consume the bare minimum amount of resources needed to execute one `Step` at a
diff --git a/go.mod b/go.mod
index 133567196f8..dff4e1cf2a4 100644
--- a/go.mod
+++ b/go.mod
@@ -4,27 +4,26 @@ go 1.16
 require (
 	github.com/cloudevents/sdk-go/v2 v2.5.0
-	github.com/containerd/containerd v1.5.8
+	github.com/containerd/containerd v1.5.9
 	github.com/ghodss/yaml v1.0.0
 	github.com/google/go-cmp v0.5.7
-	github.com/google/go-containerregistry v0.8.1-0.20220110151055-a61fd0a8e2bb
+	github.com/google/go-containerregistry v0.8.1-0.20220211173031-41f8d92709b7
 	github.com/google/go-containerregistry/pkg/authn/k8schain v0.0.0-20220120151853-ac864e57b117
 	github.com/google/uuid v1.3.0
 	github.com/hashicorp/go-multierror v1.1.1
 	github.com/hashicorp/golang-lru v0.5.4
 	github.com/jenkins-x/go-scm v1.10.10
 	github.com/mitchellh/go-homedir v1.1.0
-	github.com/opencontainers/image-spec v1.0.3-0.20211202222133-eacdcc10569b
+	github.com/opencontainers/image-spec v1.0.3-0.20220114050600-8b9d41f48198
 	github.com/pkg/errors v0.9.1
 	github.com/spiffe/go-spiffe/v2 v2.0.0-beta.5
 	github.com/tektoncd/plumbing v0.0.0-20211012143332-c7cc43d9bc0c
 	go.opencensus.io v0.23.0
 	go.uber.org/zap v1.19.1
 	golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8
-	golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 // indirect
 	golang.org/x/time v0.0.0-20211116232009-f0f3c7e86c11 // indirect
 	gomodules.xyz/jsonpatch/v2 v2.2.0
-	google.golang.org/grpc v1.43.0
+	google.golang.org/grpc v1.44.0
 	k8s.io/api v0.22.5
 	k8s.io/apimachinery v0.22.5
 	k8s.io/client-go v0.22.5
@@ -44,6 +43,5 @@ require (
 	github.com/google/go-containerregistry/pkg/authn/kubernetes v0.0.0-20220120123041-d22850aca581 // indirect
 	github.com/spiffe/spire-api-sdk v1.2.0
 	go.uber.org/multierr v1.7.0 // indirect
-	golang.org/x/net v0.0.0-20220114011407-0dd24b26b47d // indirect
 	k8s.io/utils v0.0.0-20211208161948-7d6a63dca704 // indirect
 )
diff --git a/go.sum b/go.sum
index a4e6bd18eac..ece28e0b410 100644
--- a/go.sum
+++ b/go.sum
@@ -347,8 +347,9 @@ github.com/containerd/containerd v1.5.0-beta.3/go.mod h1:/wr9AVtEM7x9c+n0+stptlo
 github.com/containerd/containerd v1.5.0-beta.4/go.mod h1:GmdgZd2zA2GYIBZ0w09ZvgqEq8EfBp/m3lcVZIvPHhI=
 github.com/containerd/containerd v1.5.0-rc.0/go.mod h1:V/IXoMqNGgBlabz3tHD2TWDoTJseu1FGOKuoA4nNb2s=
 github.com/containerd/containerd v1.5.2/go.mod h1:0DOxVqwDy2iZvrZp2JUx/E+hS0UNTVn7dJnIOwtYR4g=
-github.com/containerd/containerd v1.5.8 h1:NmkCC1/QxyZFBny8JogwLpOy2f+VEbO/f6bV2Mqtwuw=
 github.com/containerd/containerd v1.5.8/go.mod h1:YdFSv5bTFLpG2HIYmfqDpSYYTDX+mc5qtSuYx1YUb/s=
+github.com/containerd/containerd v1.5.9 h1:rs6Xg1gtIxaeyG+Smsb/0xaSDu1VgFhOCKBXxMxbsF4=
+github.com/containerd/containerd v1.5.9/go.mod h1:fvQqCfadDGga5HZyn3j4+dx56qj2I9YwBrlSdalvJYQ=
 github.com/containerd/continuity v0.0.0-20190426062206-aaeac12a7ffc/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
 github.com/containerd/continuity v0.0.0-20190815185530-f2a389ac0a02/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
 github.com/containerd/continuity v0.0.0-20191127005431-f65d91d395eb/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
@@ -690,8 +691,9 @@ github.com/google/go-cmp v0.5.7 h1:81/ik6ipDQS2aGcBfIN5dHDB36BwrStyeAQquSYCV4o=
 github.com/google/go-cmp v0.5.7/go.mod h1:n+brtR0CgQNWTVd5ZUFpTBC8YFBDLK/h/bpaJ8/DtOE=
 github.com/google/go-containerregistry v0.6.0/go.mod h1:euCCtNbZ6tKqi1E72vwDj2xZcN5ttKpZLfa/wSo5iLw=
 github.com/google/go-containerregistry v0.8.0/go.mod h1:wW5v71NHGnQyb4k+gSshjxidrC7lN33MdWEn+Mz9TsI=
-github.com/google/go-containerregistry v0.8.1-0.20220110151055-a61fd0a8e2bb h1:hdevkgIzFpx/Xbz+L2JB+UrmglBf0ZSBZo0tkzzh26s=
 github.com/google/go-containerregistry v0.8.1-0.20220110151055-a61fd0a8e2bb/go.mod h1:wW5v71NHGnQyb4k+gSshjxidrC7lN33MdWEn+Mz9TsI=
+github.com/google/go-containerregistry v0.8.1-0.20220211173031-41f8d92709b7 h1:GWlUe7Hg6tCOnwT9wPxQZkgloM8/L7eWrTvAwHh7yK8=
+github.com/google/go-containerregistry v0.8.1-0.20220211173031-41f8d92709b7/go.mod h1:cwx3SjrH84Rh9VFJSIhPh43ovyOp3DCWgY3h8nWmdGQ=
 github.com/google/go-containerregistry/pkg/authn/k8schain v0.0.0-20220120151853-ac864e57b117 h1:bRrDPmm+4eFXtlwBa63SONIL/21QUdWi//hBcUaLZiE=
 github.com/google/go-containerregistry/pkg/authn/k8schain v0.0.0-20220120151853-ac864e57b117/go.mod h1:BH7pLQnIZhfVpL7cRyWhvvz1bZLY9V45/HvXVh5UMDY=
 github.com/google/go-containerregistry/pkg/authn/kubernetes v0.0.0-20220110151055-a61fd0a8e2bb/go.mod h1:SK4EqntTk6tHEyNngoqHUwjjZaW6mfzLukei4+cbvu8=
@@ -911,8 +913,9 @@ github.com/klauspost/compress v1.12.3/go.mod h1:8dP1Hq4DHOhN9w426knH3Rhby4rFm6D8
 github.com/klauspost/compress v1.13.0/go.mod h1:8dP1Hq4DHOhN9w426knH3Rhby4rFm6D8eO+e+Dq5Gzg=
 github.com/klauspost/compress v1.13.4/go.mod h1:8dP1Hq4DHOhN9w426knH3Rhby4rFm6D8eO+e+Dq5Gzg=
 github.com/klauspost/compress v1.13.5/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
-github.com/klauspost/compress v1.13.6 h1:P76CopJELS0TiO2mebmnzgWaajssP/EszplttgQxcgc=
 github.com/klauspost/compress v1.13.6/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
+github.com/klauspost/compress v1.14.2 h1:S0OHlFk/Gbon/yauFJ4FfJJF5V0fc5HbBTJazi28pRw=
+github.com/klauspost/compress v1.14.2/go.mod
h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk= github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= @@ -1089,8 +1092,9 @@ github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3I github.com/opencontainers/image-spec v1.0.0/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0= github.com/opencontainers/image-spec v1.0.1/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0= github.com/opencontainers/image-spec v1.0.2-0.20211117181255-693428a734f5/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0= -github.com/opencontainers/image-spec v1.0.3-0.20211202222133-eacdcc10569b h1:0kCLoY3q1n4zDPYBdGhE/kdcyLWl/aAQmJFQrCPNJ6k= -github.com/opencontainers/image-spec v1.0.3-0.20211202222133-eacdcc10569b/go.mod h1:j4h1pJW6ZcJTgMZWP3+7RlG3zTaP02aDZ/Qw0sppK7Q= +github.com/opencontainers/image-spec v1.0.2/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0= +github.com/opencontainers/image-spec v1.0.3-0.20220114050600-8b9d41f48198 h1:+czc/J8SlhPKLOtVLMQc+xDCFBT73ZStMsRhSsUhsSg= +github.com/opencontainers/image-spec v1.0.3-0.20220114050600-8b9d41f48198/go.mod h1:j4h1pJW6ZcJTgMZWP3+7RlG3zTaP02aDZ/Qw0sppK7Q= github.com/opencontainers/runc v0.0.0-20190115041553-12f6a991201f/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U= github.com/opencontainers/runc v0.1.1/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U= github.com/opencontainers/runc v1.0.0-rc8.0.20190926000215-3e425f80a8c9/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U= @@ -1583,8 +1587,8 @@ golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f/go.mod h1:9nx3DQGgdP8bBQD5qx golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= golang.org/x/net 
v0.0.0-20211209124913-491a49abca63/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= golang.org/x/net v0.0.0-20211216030914-fe4d6282115f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= -golang.org/x/net v0.0.0-20220114011407-0dd24b26b47d h1:1n1fc535VhN8SYtD4cDUyNlfpAF2ROMM9+11equK3hs= -golang.org/x/net v0.0.0-20220114011407-0dd24b26b47d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= +golang.org/x/net v0.0.0-20220127074510-2fabfed7e28f h1:o66Bv9+w/vuk7Krcig9jZqD01FP7BL8OliFqqw0xzPI= +golang.org/x/net v0.0.0-20220127074510-2fabfed7e28f/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= @@ -2011,8 +2015,9 @@ google.golang.org/genproto v0.0.0-20211206160659-862468c7d6e0/go.mod h1:5CzLGKJ6 google.golang.org/genproto v0.0.0-20211208223120-3a66f561d7aa/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc= google.golang.org/genproto v0.0.0-20211221195035-429b39de9b1c/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc= google.golang.org/genproto v0.0.0-20220107163113-42d7afdf6368/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc= -google.golang.org/genproto v0.0.0-20220111164026-67b88f271998 h1:g/x+MYjJYDEP3OBCYYmwIbt4x6k3gryb+ohyOR7PXfI= google.golang.org/genproto v0.0.0-20220111164026-67b88f271998/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc= +google.golang.org/genproto v0.0.0-20220126215142-9970aeb2e350 h1:YxHp5zqIcAShDEvRr5/0rVESVS+njYF68PSdazrNLJo= +google.golang.org/genproto v0.0.0-20220126215142-9970aeb2e350/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc= google.golang.org/grpc v0.0.0-20160317175043-d3ddb4469d5a/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw= google.golang.org/grpc 
v1.8.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw= google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= @@ -2048,8 +2053,9 @@ google.golang.org/grpc v1.40.0/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9K google.golang.org/grpc v1.40.1/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34= google.golang.org/grpc v1.41.0/go.mod h1:U3l9uK9J0sini8mHphKoXyaqDA/8VyGnDee1zzIUK6k= google.golang.org/grpc v1.42.0/go.mod h1:k+4IHHFw41K8+bbowsex27ge2rCb65oeWqe4jJ590SU= -google.golang.org/grpc v1.43.0 h1:Eeu7bZtDZ2DpRCsLhUlcrLnvYaMK1Gz86a+hMVvELmM= google.golang.org/grpc v1.43.0/go.mod h1:k+4IHHFw41K8+bbowsex27ge2rCb65oeWqe4jJ590SU= +google.golang.org/grpc v1.44.0 h1:weqSxi/TMs1SqFRMHCtBgXRs8k3X39QIDEZ0pRcttUg= +google.golang.org/grpc v1.44.0/go.mod h1:k+4IHHFw41K8+bbowsex27ge2rCb65oeWqe4jJ590SU= google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0/go.mod h1:6Kw0yEErY5E/yWrBtf03jp27GLLJujG4z/JK95pnjjw= google.golang.org/grpc/examples v0.0.0-20201130180447-c456688b1860/go.mod h1:Ly7ZA/ARzg8fnPU9TyZIxoz33sEUuWX7txiqs8lPTgE= google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8= diff --git a/hack/setup-temporary-gopath.sh b/hack/setup-temporary-gopath.sh index 39ddb8e9d7b..fd7724b54a2 100755 --- a/hack/setup-temporary-gopath.sh +++ b/hack/setup-temporary-gopath.sh @@ -13,13 +13,16 @@ function shim_gopath() { local TEMP_PIPELINE="${TEMP_TEKTONCD}/pipeline" local NEEDS_MOVE=1 + # Checks if GOPATH exists without triggering nounset panic. + EXISTING_GOPATH=${GOPATH:-} + # Check if repo is in GOPATH already and return early if so. # Unfortunately this doesn't respect a repo that's symlinked into # GOPATH and will create a temporary anyway. I couldn't figure out # a way to get the absolute path to the symlinked repo root. - if [ ! -z $GOPATH ] ; then + if [ ! 
-z $EXISTING_GOPATH ] ; then case $REPO_DIR/ in - $GOPATH/*) NEEDS_MOVE=0;; + $EXISTING_GOPATH/*) NEEDS_MOVE=0;; *) NEEDS_MOVE=1;; esac fi diff --git a/pkg/apis/config/feature_flags.go b/pkg/apis/config/feature_flags.go index 39a8669f38e..950fb43cc8a 100644 --- a/pkg/apis/config/feature_flags.go +++ b/pkg/apis/config/feature_flags.go @@ -30,10 +30,6 @@ const ( StableAPIFields = "stable" // AlphaAPIFields is the value used for "enable-api-fields" when alpha APIs should be usable as well. AlphaAPIFields = "alpha" - // DefaultDisableHomeEnvOverwrite is the default value for "disable-home-env-overwrite". - DefaultDisableHomeEnvOverwrite = true - // DefaultDisableWorkingDirOverwrite is the default value for "disable-working-directory-overwrite". - DefaultDisableWorkingDirOverwrite = true // DefaultDisableAffinityAssistant is the default value for "disable-affinity-assistant". DefaultDisableAffinityAssistant = false // DefaultDisableCredsInit is the default value for "disable-creds-init". @@ -47,14 +43,12 @@ const ( // DefaultEnableCustomTasks is the default value for "enable-custom-tasks". DefaultEnableCustomTasks = false // DefaultScopeWhenExpressionsToTask is the default value for "scope-when-expressions-to-task". - DefaultScopeWhenExpressionsToTask = false + DefaultScopeWhenExpressionsToTask = true // DefaultEnableAPIFields is the default value for "enable-api-fields". DefaultEnableAPIFields = StableAPIFields // DefaultEnableSpire is the default value for "enable-spire". 
DefaultEnableSpire = false - disableHomeEnvOverwriteKey = "disable-home-env-overwrite" - disableWorkingDirOverwriteKey = "disable-working-directory-overwrite" disableAffinityAssistantKey = "disable-affinity-assistant" disableCredsInitKey = "disable-creds-init" runningInEnvWithInjectedSidecarsKey = "running-in-environment-with-injected-sidecars" @@ -69,8 +63,6 @@ const ( // FeatureFlags holds the features configurations // +k8s:deepcopy-gen=true type FeatureFlags struct { - DisableHomeEnvOverwrite bool - DisableWorkingDirOverwrite bool DisableAffinityAssistant bool DisableCredsInit bool RunningInEnvWithInjectedSidecars bool @@ -107,12 +99,6 @@ func NewFeatureFlagsFromMap(cfgMap map[string]string) (*FeatureFlags, error) { } tc := FeatureFlags{} - if err := setFeature(disableHomeEnvOverwriteKey, DefaultDisableHomeEnvOverwrite, &tc.DisableHomeEnvOverwrite); err != nil { - return nil, err - } - if err := setFeature(disableWorkingDirOverwriteKey, DefaultDisableWorkingDirOverwrite, &tc.DisableWorkingDirOverwrite); err != nil { - return nil, err - } if err := setFeature(disableAffinityAssistantKey, DefaultDisableAffinityAssistant, &tc.DisableAffinityAssistant); err != nil { return nil, err } diff --git a/pkg/apis/config/feature_flags_test.go b/pkg/apis/config/feature_flags_test.go index 6fb7909e6b4..e7737cc8650 100644 --- a/pkg/apis/config/feature_flags_test.go +++ b/pkg/apis/config/feature_flags_test.go @@ -35,8 +35,6 @@ func TestNewFeatureFlagsFromConfigMap(t *testing.T) { testCases := []testCase{ { expectedConfig: &config.FeatureFlags{ - DisableHomeEnvOverwrite: false, - DisableWorkingDirOverwrite: false, RunningInEnvWithInjectedSidecars: config.DefaultRunningInEnvWithInjectedSidecars, ScopeWhenExpressionsToTask: config.DefaultScopeWhenExpressionsToTask, EnableAPIFields: "stable", @@ -45,8 +43,6 @@ func TestNewFeatureFlagsFromConfigMap(t *testing.T) { }, { expectedConfig: &config.FeatureFlags{ - DisableHomeEnvOverwrite: true, - DisableWorkingDirOverwrite: true, 
DisableAffinityAssistant: true, RunningInEnvWithInjectedSidecars: false, RequireGitSSHSecretKnownHosts: true, @@ -65,8 +61,6 @@ func TestNewFeatureFlagsFromConfigMap(t *testing.T) { EnableTektonOCIBundles: true, EnableCustomTasks: true, - DisableHomeEnvOverwrite: true, - DisableWorkingDirOverwrite: true, RunningInEnvWithInjectedSidecars: config.DefaultRunningInEnvWithInjectedSidecars, ScopeWhenExpressionsToTask: config.DefaultScopeWhenExpressionsToTask, }, @@ -78,8 +72,6 @@ func TestNewFeatureFlagsFromConfigMap(t *testing.T) { EnableTektonOCIBundles: true, EnableCustomTasks: true, - DisableHomeEnvOverwrite: true, - DisableWorkingDirOverwrite: true, RunningInEnvWithInjectedSidecars: config.DefaultRunningInEnvWithInjectedSidecars, ScopeWhenExpressionsToTask: config.DefaultScopeWhenExpressionsToTask, }, @@ -99,8 +91,6 @@ func TestNewFeatureFlagsFromConfigMap(t *testing.T) { func TestNewFeatureFlagsFromEmptyConfigMap(t *testing.T) { FeatureFlagsConfigEmptyName := "feature-flags-empty" expectedConfig := &config.FeatureFlags{ - DisableHomeEnvOverwrite: true, - DisableWorkingDirOverwrite: true, RunningInEnvWithInjectedSidecars: true, ScopeWhenExpressionsToTask: config.DefaultScopeWhenExpressionsToTask, EnableAPIFields: "stable", diff --git a/pkg/apis/config/testdata/feature-flags-all-flags-set.yaml b/pkg/apis/config/testdata/feature-flags-all-flags-set.yaml index 6e5e4958e51..817c9899a0f 100644 --- a/pkg/apis/config/testdata/feature-flags-all-flags-set.yaml +++ b/pkg/apis/config/testdata/feature-flags-all-flags-set.yaml @@ -18,8 +18,6 @@ metadata: name: feature-flags namespace: tekton-pipelines data: - disable-home-env-overwrite: "true" - disable-working-directory-overwrite: "true" disable-affinity-assistant: "true" running-in-environment-with-injected-sidecars: "false" require-git-ssh-secret-known-hosts: "true" diff --git a/pkg/apis/config/testdata/feature-flags.yaml b/pkg/apis/config/testdata/feature-flags.yaml index 3e033b0c7f3..e3de04caae7 100644 --- 
a/pkg/apis/config/testdata/feature-flags.yaml +++ b/pkg/apis/config/testdata/feature-flags.yaml @@ -18,8 +18,6 @@ metadata: name: feature-flags namespace: tekton-pipelines data: - disable-home-env-overwrite: "false" - disable-working-directory-overwrite: "false" disable-affinity-assistant: "false" running-in-environment-with-injected-sidecars: "true" require-git-ssh-secret-known-hosts: "false" diff --git a/pkg/apis/pipeline/v1beta1/openapi_generated.go b/pkg/apis/pipeline/v1beta1/openapi_generated.go index f28b7c6934f..5fd67185b6c 100644 --- a/pkg/apis/pipeline/v1beta1/openapi_generated.go +++ b/pkg/apis/pipeline/v1beta1/openapi_generated.go @@ -93,9 +93,11 @@ func GetOpenAPIDefinitions(ref common.ReferenceCallback) map[string]common.OpenA "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.TaskRunOutputs": schema_pkg_apis_pipeline_v1beta1_TaskRunOutputs(ref), "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.TaskRunResources": schema_pkg_apis_pipeline_v1beta1_TaskRunResources(ref), "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.TaskRunResult": schema_pkg_apis_pipeline_v1beta1_TaskRunResult(ref), + "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.TaskRunSidecarOverride": schema_pkg_apis_pipeline_v1beta1_TaskRunSidecarOverride(ref), "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.TaskRunSpec": schema_pkg_apis_pipeline_v1beta1_TaskRunSpec(ref), "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.TaskRunStatus": schema_pkg_apis_pipeline_v1beta1_TaskRunStatus(ref), "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.TaskRunStatusFields": schema_pkg_apis_pipeline_v1beta1_TaskRunStatusFields(ref), + "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.TaskRunStepOverride": schema_pkg_apis_pipeline_v1beta1_TaskRunStepOverride(ref), "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.TaskSpec": schema_pkg_apis_pipeline_v1beta1_TaskSpec(ref), "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.TimeoutFields": 
schema_pkg_apis_pipeline_v1beta1_TimeoutFields(ref), "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.WhenExpression": schema_pkg_apis_pipeline_v1beta1_WhenExpression(ref), @@ -2373,11 +2375,37 @@ func schema_pkg_apis_pipeline_v1beta1_PipelineTaskRunSpec(ref common.ReferenceCa Ref: ref("github.com/tektoncd/pipeline/pkg/apis/pipeline/pod.Template"), }, }, + "stepOverrides": { + SchemaProps: spec.SchemaProps{ + Type: []string{"array"}, + Items: &spec.SchemaOrArray{ + Schema: &spec.Schema{ + SchemaProps: spec.SchemaProps{ + Default: map[string]interface{}{}, + Ref: ref("github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.TaskRunStepOverride"), + }, + }, + }, + }, + }, + "sidecarOverrides": { + SchemaProps: spec.SchemaProps{ + Type: []string{"array"}, + Items: &spec.SchemaOrArray{ + Schema: &spec.Schema{ + SchemaProps: spec.SchemaProps{ + Default: map[string]interface{}{}, + Ref: ref("github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.TaskRunSidecarOverride"), + }, + }, + }, + }, + }, }, }, }, Dependencies: []string{ - "github.com/tektoncd/pipeline/pkg/apis/pipeline/pod.Template"}, + "github.com/tektoncd/pipeline/pkg/apis/pipeline/pod.Template", "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.TaskRunSidecarOverride", "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.TaskRunStepOverride"}, } } @@ -3768,6 +3796,37 @@ func schema_pkg_apis_pipeline_v1beta1_TaskRunResult(ref common.ReferenceCallback } } +func schema_pkg_apis_pipeline_v1beta1_TaskRunSidecarOverride(ref common.ReferenceCallback) common.OpenAPIDefinition { + return common.OpenAPIDefinition{ + Schema: spec.Schema{ + SchemaProps: spec.SchemaProps{ + Description: "TaskRunSidecarOverride is used to override the values of a Sidecar in the corresponding Task.", + Type: []string{"object"}, + Properties: map[string]spec.Schema{ + "Name": { + SchemaProps: spec.SchemaProps{ + Description: "The name of the Sidecar to override.", + Default: "", + Type: []string{"string"}, + Format: "", + }, + 
}, + "Resources": { + SchemaProps: spec.SchemaProps{ + Description: "The resource requirements to apply to the Sidecar.", + Default: map[string]interface{}{}, + Ref: ref("k8s.io/api/core/v1.ResourceRequirements"), + }, + }, + }, + Required: []string{"Name", "Resources"}, + }, + }, + Dependencies: []string{ + "k8s.io/api/core/v1.ResourceRequirements"}, + } +} + func schema_pkg_apis_pipeline_v1beta1_TaskRunSpec(ref common.ReferenceCallback) common.OpenAPIDefinition { return common.OpenAPIDefinition{ Schema: spec.Schema{ @@ -3849,11 +3908,39 @@ func schema_pkg_apis_pipeline_v1beta1_TaskRunSpec(ref common.ReferenceCallback) }, }, }, + "stepOverrides": { + SchemaProps: spec.SchemaProps{ + Description: "Overrides to apply to Steps in this TaskRun. If a field is specified in both a Step and a StepOverride, the value from the StepOverride will be used. This field is only supported when the alpha feature gate is enabled.", + Type: []string{"array"}, + Items: &spec.SchemaOrArray{ + Schema: &spec.Schema{ + SchemaProps: spec.SchemaProps{ + Default: map[string]interface{}{}, + Ref: ref("github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.TaskRunStepOverride"), + }, + }, + }, + }, + }, + "sidecarOverrides": { + SchemaProps: spec.SchemaProps{ + Description: "Overrides to apply to Sidecars in this TaskRun. If a field is specified in both a Sidecar and a SidecarOverride, the value from the SidecarOverride will be used. 
This field is only supported when the alpha feature gate is enabled.", + Type: []string{"array"}, + Items: &spec.SchemaOrArray{ + Schema: &spec.Schema{ + SchemaProps: spec.SchemaProps{ + Default: map[string]interface{}{}, + Ref: ref("github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.TaskRunSidecarOverride"), + }, + }, + }, + }, + }, }, }, }, Dependencies: []string{ - "github.com/tektoncd/pipeline/pkg/apis/pipeline/pod.Template", "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.Param", "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.TaskRef", "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.TaskRunDebug", "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.TaskRunResources", "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.TaskSpec", "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.WorkspaceBinding", "k8s.io/apimachinery/pkg/apis/meta/v1.Duration"}, + "github.com/tektoncd/pipeline/pkg/apis/pipeline/pod.Template", "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.Param", "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.TaskRef", "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.TaskRunDebug", "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.TaskRunResources", "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.TaskRunSidecarOverride", "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.TaskRunStepOverride", "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.TaskSpec", "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.WorkspaceBinding", "k8s.io/apimachinery/pkg/apis/meta/v1.Duration"}, } } @@ -4152,6 +4239,37 @@ func schema_pkg_apis_pipeline_v1beta1_TaskRunStatusFields(ref common.ReferenceCa } } +func schema_pkg_apis_pipeline_v1beta1_TaskRunStepOverride(ref common.ReferenceCallback) common.OpenAPIDefinition { + return common.OpenAPIDefinition{ + Schema: spec.Schema{ + SchemaProps: spec.SchemaProps{ + Description: "TaskRunStepOverride is used to override the values of a Step in 
the corresponding Task.", + Type: []string{"object"}, + Properties: map[string]spec.Schema{ + "Name": { + SchemaProps: spec.SchemaProps{ + Description: "The name of the Step to override.", + Default: "", + Type: []string{"string"}, + Format: "", + }, + }, + "Resources": { + SchemaProps: spec.SchemaProps{ + Description: "The resource requirements to apply to the Step.", + Default: map[string]interface{}{}, + Ref: ref("k8s.io/api/core/v1.ResourceRequirements"), + }, + }, + }, + Required: []string{"Name", "Resources"}, + }, + }, + Dependencies: []string{ + "k8s.io/api/core/v1.ResourceRequirements"}, + } +} + func schema_pkg_apis_pipeline_v1beta1_TaskSpec(ref common.ReferenceCallback) common.OpenAPIDefinition { return common.OpenAPIDefinition{ Schema: spec.Schema{ diff --git a/pkg/apis/pipeline/v1beta1/pipelinerun_types.go b/pkg/apis/pipeline/v1beta1/pipelinerun_types.go index fd42c322ade..eecc8f59496 100644 --- a/pkg/apis/pipeline/v1beta1/pipelinerun_types.go +++ b/pkg/apis/pipeline/v1beta1/pipelinerun_types.go @@ -511,9 +511,11 @@ type PipelineTaskRun struct { // PipelineTaskRunSpec can be used to configure specific // specs for a concrete Task type PipelineTaskRunSpec struct { - PipelineTaskName string `json:"pipelineTaskName,omitempty"` - TaskServiceAccountName string `json:"taskServiceAccountName,omitempty"` - TaskPodTemplate *PodTemplate `json:"taskPodTemplate,omitempty"` + PipelineTaskName string `json:"pipelineTaskName,omitempty"` + TaskServiceAccountName string `json:"taskServiceAccountName,omitempty"` + TaskPodTemplate *PodTemplate `json:"taskPodTemplate,omitempty"` + StepOverrides []TaskRunStepOverride `json:"stepOverrides,omitempty"` + SidecarOverrides []TaskRunSidecarOverride `json:"sidecarOverrides,omitempty"` } // GetTaskRunSpec returns the task specific spec for a given diff --git a/pkg/apis/pipeline/v1beta1/pipelinerun_validation.go b/pkg/apis/pipeline/v1beta1/pipelinerun_validation.go index 11f2cfbfeb5..715384fee2e 100644 --- 
a/pkg/apis/pipeline/v1beta1/pipelinerun_validation.go +++ b/pkg/apis/pipeline/v1beta1/pipelinerun_validation.go @@ -112,6 +112,10 @@ func (ps *PipelineRunSpec) Validate(ctx context.Context) (errs *apis.FieldError) } } + for idx, trs := range ps.TaskRunSpecs { + errs = errs.Also(validateTaskRunSpec(ctx, trs).ViaIndex(idx).ViaField("taskRunSpecs")) + } + return errs } @@ -188,3 +192,23 @@ func (ps *PipelineRunSpec) validatePipelineTimeout(timeout time.Duration, errorM } return errs } + +func validateTaskRunSpec(ctx context.Context, trs PipelineTaskRunSpec) (errs *apis.FieldError) { + cfg := config.FromContextOrDefaults(ctx) + if cfg.FeatureFlags.EnableAPIFields == config.AlphaAPIFields { + if trs.StepOverrides != nil { + errs = errs.Also(validateStepOverrides(trs.StepOverrides).ViaField("stepOverrides")) + } + if trs.SidecarOverrides != nil { + errs = errs.Also(validateSidecarOverrides(trs.SidecarOverrides).ViaField("sidecarOverrides")) + } + } else { + if trs.StepOverrides != nil { + errs = errs.Also(apis.ErrDisallowedFields("stepOverrides")) + } + if trs.SidecarOverrides != nil { + errs = errs.Also(apis.ErrDisallowedFields("sidecarOverrides")) + } + } + return errs +} diff --git a/pkg/apis/pipeline/v1beta1/pipelinerun_validation_test.go b/pkg/apis/pipeline/v1beta1/pipelinerun_validation_test.go index 54b74fa17e8..ee4a785f590 100644 --- a/pkg/apis/pipeline/v1beta1/pipelinerun_validation_test.go +++ b/pkg/apis/pipeline/v1beta1/pipelinerun_validation_test.go @@ -26,6 +26,7 @@ import ( "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1" "github.com/tektoncd/pipeline/test/diff" corev1 "k8s.io/api/core/v1" + corev1resources "k8s.io/apimachinery/pkg/api/resource" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "knative.dev/pkg/apis" logtesting "knative.dev/pkg/logging/testing" @@ -380,6 +381,34 @@ func TestPipelineRun_Validate(t *testing.T) { }, }, wc: enableAlphaAPIFields, + }, { + name: "alpha feature: sidecar and step overrides", + pr: v1beta1.PipelineRun{ + 
ObjectMeta: metav1.ObjectMeta{ + Name: "pr", + }, + Spec: v1beta1.PipelineRunSpec{ + PipelineRef: &v1beta1.PipelineRef{Name: "pr"}, + TaskRunSpecs: []v1beta1.PipelineTaskRunSpec{ + { + PipelineTaskName: "bar", + StepOverrides: []v1beta1.TaskRunStepOverride{{ + Name: "task-1", + Resources: corev1.ResourceRequirements{ + Requests: corev1.ResourceList{corev1.ResourceMemory: corev1resources.MustParse("1Gi")}, + }}, + }, + SidecarOverrides: []v1beta1.TaskRunSidecarOverride{{ + Name: "task-1", + Resources: corev1.ResourceRequirements{ + Requests: corev1.ResourceList{corev1.ResourceMemory: corev1resources.MustParse("1Gi")}, + }}, + }, + }, + }, + }, + }, + wc: enableAlphaAPIFields, }} for _, ts := range tests { @@ -517,6 +546,104 @@ func TestPipelineRunSpec_Invalidate(t *testing.T) { }, wantErr: apis.ErrMultipleOneOf("bundle", "resolver").ViaField("pipelineRef"), withContext: enableAlphaAPIFields, + }, { + name: "duplicate stepOverride names", + spec: v1beta1.PipelineRunSpec{ + PipelineRef: &v1beta1.PipelineRef{Name: "foo"}, + TaskRunSpecs: []v1beta1.PipelineTaskRunSpec{ + { + PipelineTaskName: "bar", + StepOverrides: []v1beta1.TaskRunStepOverride{ + {Name: "baz"}, {Name: "baz"}, + }, + }, + }, + }, + wantErr: apis.ErrMultipleOneOf("taskRunSpecs[0].stepOverrides[1].name"), + withContext: enableAlphaAPIFields, + }, { + name: "stepOverride disallowed without alpha feature gate", + spec: v1beta1.PipelineRunSpec{ + PipelineRef: &v1beta1.PipelineRef{Name: "foo"}, + TaskRunSpecs: []v1beta1.PipelineTaskRunSpec{ + { + PipelineTaskName: "bar", + StepOverrides: []v1beta1.TaskRunStepOverride{{ + Name: "task-1", + Resources: corev1.ResourceRequirements{ + Requests: corev1.ResourceList{corev1.ResourceMemory: corev1resources.MustParse("1Gi")}, + }}, + }, + }, + }, + }, + wantErr: apis.ErrDisallowedFields("stepOverrides").ViaIndex(0).ViaField("taskRunSpecs"), + }, { + name: "sidecarOverride disallowed without alpha feature gate", + spec: v1beta1.PipelineRunSpec{ + PipelineRef: 
&v1beta1.PipelineRef{Name: "foo"}, + TaskRunSpecs: []v1beta1.PipelineTaskRunSpec{ + { + PipelineTaskName: "bar", + SidecarOverrides: []v1beta1.TaskRunSidecarOverride{{ + Name: "task-1", + Resources: corev1.ResourceRequirements{ + Requests: corev1.ResourceList{corev1.ResourceMemory: corev1resources.MustParse("1Gi")}, + }}, + }, + }, + }, + }, + wantErr: apis.ErrDisallowedFields("sidecarOverrides").ViaIndex(0).ViaField("taskRunSpecs"), + }, { + name: "missing stepOverride name", + spec: v1beta1.PipelineRunSpec{ + PipelineRef: &v1beta1.PipelineRef{Name: "foo"}, + TaskRunSpecs: []v1beta1.PipelineTaskRunSpec{ + { + PipelineTaskName: "bar", + StepOverrides: []v1beta1.TaskRunStepOverride{{ + Resources: corev1.ResourceRequirements{ + Requests: corev1.ResourceList{corev1.ResourceMemory: corev1resources.MustParse("1Gi")}, + }}, + }, + }, + }, + }, + wantErr: apis.ErrMissingField("taskRunSpecs[0].stepOverrides[0].name"), + withContext: enableAlphaAPIFields, + }, { + name: "duplicate sidecarOverride names", + spec: v1beta1.PipelineRunSpec{ + PipelineRef: &v1beta1.PipelineRef{Name: "foo"}, + TaskRunSpecs: []v1beta1.PipelineTaskRunSpec{ + { + PipelineTaskName: "bar", + SidecarOverrides: []v1beta1.TaskRunSidecarOverride{ + {Name: "baz"}, {Name: "baz"}, + }, + }, + }, + }, + wantErr: apis.ErrMultipleOneOf("taskRunSpecs[0].sidecarOverrides[1].name"), + withContext: enableAlphaAPIFields, + }, { + name: "missing sidecarOverride name", + spec: v1beta1.PipelineRunSpec{ + PipelineRef: &v1beta1.PipelineRef{Name: "foo"}, + TaskRunSpecs: []v1beta1.PipelineTaskRunSpec{ + { + PipelineTaskName: "bar", + SidecarOverrides: []v1beta1.TaskRunSidecarOverride{{ + Resources: corev1.ResourceRequirements{ + Requests: corev1.ResourceList{corev1.ResourceMemory: corev1resources.MustParse("1Gi")}, + }}, + }, + }, + }, + }, + wantErr: apis.ErrMissingField("taskRunSpecs[0].sidecarOverrides[0].name"), + withContext: enableAlphaAPIFields, }} for _, ps := range tests { t.Run(ps.name, func(t *testing.T) { diff 
--git a/pkg/apis/pipeline/v1beta1/swagger.json b/pkg/apis/pipeline/v1beta1/swagger.json index dd69ed5807c..209fae22dff 100644 --- a/pkg/apis/pipeline/v1beta1/swagger.json +++ b/pkg/apis/pipeline/v1beta1/swagger.json @@ -1441,6 +1441,20 @@ "pipelineTaskName": { "type": "string" }, + "sidecarOverrides": { + "type": "array", + "items": { + "default": {}, + "$ref": "#/definitions/v1beta1.TaskRunSidecarOverride" + } + }, + "stepOverrides": { + "type": "array", + "items": { + "default": {}, + "$ref": "#/definitions/v1beta1.TaskRunStepOverride" + } + }, "taskPodTemplate": { "$ref": "#/definitions/pod.Template" }, @@ -2229,6 +2243,26 @@ } } }, + "v1beta1.TaskRunSidecarOverride": { + "description": "TaskRunSidecarOverride is used to override the values of a Sidecar in the corresponding Task.", + "type": "object", + "required": [ + "Name", + "Resources" + ], + "properties": { + "Name": { + "description": "The name of the Sidecar to override.", + "type": "string", + "default": "" + }, + "Resources": { + "description": "The resource requirements to apply to the Sidecar.", + "default": {}, + "$ref": "#/definitions/v1.ResourceRequirements" + } + } + }, "v1beta1.TaskRunSpec": { "description": "TaskRunSpec defines the desired state of TaskRun", "type": "object", @@ -2254,10 +2288,26 @@ "type": "string", "default": "" }, + "sidecarOverrides": { + "description": "Overrides to apply to Sidecars in this TaskRun. If a field is specified in both a Sidecar and a SidecarOverride, the value from the SidecarOverride will be used. This field is only supported when the alpha feature gate is enabled.", + "type": "array", + "items": { + "default": {}, + "$ref": "#/definitions/v1beta1.TaskRunSidecarOverride" + } + }, "status": { "description": "Used for cancelling a taskrun (and maybe more later on)", "type": "string" }, + "stepOverrides": { + "description": "Overrides to apply to Steps in this TaskRun. 
If a field is specified in both a Step and a StepOverride, the value from the StepOverride will be used. This field is only supported when the alpha feature gate is enabled.", + "type": "array", + "items": { + "default": {}, + "$ref": "#/definitions/v1beta1.TaskRunStepOverride" + } + }, "taskRef": { "description": "no more than one of the TaskRef and TaskSpec may be specified.", "$ref": "#/definitions/v1beta1.TaskRef" @@ -2450,6 +2500,26 @@ } } }, + "v1beta1.TaskRunStepOverride": { + "description": "TaskRunStepOverride is used to override the values of a Step in the corresponding Task.", + "type": "object", + "required": [ + "Name", + "Resources" + ], + "properties": { + "Name": { + "description": "The name of the Step to override.", + "type": "string", + "default": "" + }, + "Resources": { + "description": "The resource requirements to apply to the Step.", + "default": {}, + "$ref": "#/definitions/v1.ResourceRequirements" + } + } + }, "v1beta1.TaskSpec": { "description": "TaskSpec defines the desired state of Task.", "type": "object", diff --git a/pkg/apis/pipeline/v1beta1/taskrun_types.go b/pkg/apis/pipeline/v1beta1/taskrun_types.go index e9e63bce44b..517887201cc 100644 --- a/pkg/apis/pipeline/v1beta1/taskrun_types.go +++ b/pkg/apis/pipeline/v1beta1/taskrun_types.go @@ -61,6 +61,18 @@ type TaskRunSpec struct { // Workspaces is a list of WorkspaceBindings from volumes to workspaces. // +optional Workspaces []WorkspaceBinding `json:"workspaces,omitempty"` + // Overrides to apply to Steps in this TaskRun. + // If a field is specified in both a Step and a StepOverride, + // the value from the StepOverride will be used. + // This field is only supported when the alpha feature gate is enabled. + // +optional + StepOverrides []TaskRunStepOverride `json:"stepOverrides,omitempty"` + // Overrides to apply to Sidecars in this TaskRun. + // If a field is specified in both a Sidecar and a SidecarOverride, + // the value from the SidecarOverride will be used. 
+ // This field is only supported when the alpha feature gate is enabled. + // +optional + SidecarOverrides []TaskRunSidecarOverride `json:"sidecarOverrides,omitempty"` } // TaskRunSpecStatus defines the taskrun spec status the user can provide @@ -218,6 +230,22 @@ type TaskRunResult struct { Value string `json:"value"` } +// TaskRunStepOverride is used to override the values of a Step in the corresponding Task. +type TaskRunStepOverride struct { + // The name of the Step to override. + Name string + // The resource requirements to apply to the Step. + Resources corev1.ResourceRequirements +} + +// TaskRunSidecarOverride is used to override the values of a Sidecar in the corresponding Task. +type TaskRunSidecarOverride struct { + // The name of the Sidecar to override. + Name string + // The resource requirements to apply to the Sidecar. + Resources corev1.ResourceRequirements +} + // GetGroupVersionKind implements kmeta.OwnerRefable. func (*TaskRun) GetGroupVersionKind() schema.GroupVersionKind { return SchemeGroupVersion.WithKind(pipeline.TaskRunControllerName) diff --git a/pkg/apis/pipeline/v1beta1/taskrun_validation.go b/pkg/apis/pipeline/v1beta1/taskrun_validation.go index 76d02be5a09..761c9446666 100644 --- a/pkg/apis/pipeline/v1beta1/taskrun_validation.go +++ b/pkg/apis/pipeline/v1beta1/taskrun_validation.go @@ -40,8 +40,6 @@ func (tr *TaskRun) Validate(ctx context.Context) *apis.FieldError { // Validate taskrun spec func (ts *TaskRunSpec) Validate(ctx context.Context) (errs *apis.FieldError) { - cfg := config.FromContextOrDefaults(ctx) - // Must have exactly one of taskRef and taskSpec. 
if ts.TaskRef == nil && ts.TaskSpec == nil { errs = errs.Also(apis.ErrMissingOneOf("taskRef", "taskSpec")) @@ -61,12 +59,17 @@ func (ts *TaskRunSpec) Validate(ctx context.Context) (errs *apis.FieldError) { errs = errs.Also(validateParameters(ts.Params).ViaField("params")) errs = errs.Also(validateWorkspaceBindings(ctx, ts.Workspaces).ViaField("workspaces")) errs = errs.Also(ts.Resources.Validate(ctx).ViaField("resources")) - if cfg.FeatureFlags.EnableAPIFields == config.AlphaAPIFields { - if ts.Debug != nil { - errs = errs.Also(validateDebug(ts.Debug).ViaField("debug")) - } - } else if ts.Debug != nil { - errs = errs.Also(apis.ErrDisallowedFields("debug")) + if ts.Debug != nil { + errs = errs.Also(ValidateEnabledAPIFields(ctx, "debug", config.AlphaAPIFields).ViaField("debug")) + errs = errs.Also(validateDebug(ts.Debug).ViaField("debug")) + } + if ts.StepOverrides != nil { + errs = errs.Also(ValidateEnabledAPIFields(ctx, "stepOverrides", config.AlphaAPIFields).ViaField("stepOverrides")) + errs = errs.Also(validateStepOverrides(ts.StepOverrides).ViaField("stepOverrides")) + } + if ts.SidecarOverrides != nil { + errs = errs.Also(ValidateEnabledAPIFields(ctx, "sidecarOverrides", config.AlphaAPIFields).ViaField("sidecarOverrides")) + errs = errs.Also(validateSidecarOverrides(ts.SidecarOverrides).ViaField("sidecarOverrides")) } if ts.Status != "" { @@ -100,27 +103,63 @@ func validateDebug(db *TaskRunDebug) (errs *apis.FieldError) { // validateWorkspaceBindings makes sure the volumes provided for the Task's declared workspaces make sense. 
func validateWorkspaceBindings(ctx context.Context, wb []WorkspaceBinding) (errs *apis.FieldError) { - seen := sets.NewString() + var names []string for idx, w := range wb { - if seen.Has(w.Name) { - errs = errs.Also(apis.ErrMultipleOneOf("name").ViaIndex(idx)) - } - seen.Insert(w.Name) - + names = append(names, w.Name) errs = errs.Also(w.Validate(ctx).ViaIndex(idx)) } - + errs = errs.Also(validateNoDuplicateNames(names, true)) return errs } func validateParameters(params []Param) (errs *apis.FieldError) { - // Template must not duplicate parameter names. - seen := sets.NewString() + var names []string for _, p := range params { - if seen.Has(strings.ToLower(p.Name)) { - errs = errs.Also(apis.ErrMultipleOneOf("name").ViaKey(p.Name)) + names = append(names, p.Name) + } + return validateNoDuplicateNames(names, false) +} + +func validateStepOverrides(overrides []TaskRunStepOverride) (errs *apis.FieldError) { + var names []string + for i, o := range overrides { + if o.Name == "" { + errs = errs.Also(apis.ErrMissingField("name").ViaIndex(i)) + } else { + names = append(names, o.Name) + } + } + errs = errs.Also(validateNoDuplicateNames(names, true)) + return errs +} + +func validateSidecarOverrides(overrides []TaskRunSidecarOverride) (errs *apis.FieldError) { + var names []string + for i, o := range overrides { + if o.Name == "" { + errs = errs.Also(apis.ErrMissingField("name").ViaIndex(i)) + } else { + names = append(names, o.Name) + } + } + errs = errs.Also(validateNoDuplicateNames(names, true)) + return errs +} + +// validateNoDuplicateNames returns an error for each name that is repeated in names. +// Case insensitive. +// If byIndex is true, the error will be reported by index instead of by key. 
+func validateNoDuplicateNames(names []string, byIndex bool) (errs *apis.FieldError) { + seen := sets.NewString() + for i, n := range names { + if seen.Has(strings.ToLower(n)) { + if byIndex { + errs = errs.Also(apis.ErrMultipleOneOf("name").ViaIndex(i)) + } else { + errs = errs.Also(apis.ErrMultipleOneOf("name").ViaKey(n)) + } } - seen.Insert(p.Name) + seen.Insert(n) } return errs } diff --git a/pkg/apis/pipeline/v1beta1/taskrun_validation_test.go b/pkg/apis/pipeline/v1beta1/taskrun_validation_test.go index 3479bcb8b69..c9e08489df9 100644 --- a/pkg/apis/pipeline/v1beta1/taskrun_validation_test.go +++ b/pkg/apis/pipeline/v1beta1/taskrun_validation_test.go @@ -26,6 +26,7 @@ import ( resource "github.com/tektoncd/pipeline/pkg/apis/resource/v1alpha1" "github.com/tektoncd/pipeline/test/diff" corev1 "k8s.io/api/core/v1" + corev1resources "k8s.io/apimachinery/pkg/api/resource" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "knative.dev/pkg/apis" ) @@ -100,6 +101,27 @@ func TestTaskRun_Validate(t *testing.T) { }, }, wc: enableAlphaAPIFields, + }, { + name: "alpha feature: valid step and sidecar overrides", + taskRun: &v1beta1.TaskRun{ + ObjectMeta: metav1.ObjectMeta{Name: "tr"}, + Spec: v1beta1.TaskRunSpec{ + TaskRef: &v1beta1.TaskRef{Name: "task"}, + StepOverrides: []v1beta1.TaskRunStepOverride{{ + Name: "foo", + Resources: corev1.ResourceRequirements{ + Requests: corev1.ResourceList{corev1.ResourceMemory: corev1resources.MustParse("1Gi")}, + }, + }}, + SidecarOverrides: []v1beta1.TaskRunSidecarOverride{{ + Name: "bar", + Resources: corev1.ResourceRequirements{ + Requests: corev1.ResourceList{corev1.ResourceMemory: corev1resources.MustParse("1Gi")}, + }, + }}, + }, + }, + wc: enableAlphaAPIFields, }} for _, ts := range tests { t.Run(ts.name, func(t *testing.T) { @@ -273,10 +295,10 @@ func TestTaskRunSpec_Invalidate(t *testing.T) { Name: "my-task", }, Debug: &v1beta1.TaskRunDebug{ - Breakpoint: []string{"bReaKdAnCe"}, + Breakpoint: []string{"onFailure"}, }, }, - wantErr: 
apis.ErrDisallowedFields("debug"), + wantErr: apis.ErrGeneric("debug requires \"enable-api-fields\" feature gate to be \"alpha\" but it is \"stable\""), }, { name: "invalid breakpoint", spec: v1beta1.TaskRunSpec{ @@ -346,6 +368,94 @@ func TestTaskRunSpec_Invalidate(t *testing.T) { }, wantErr: apis.ErrMultipleOneOf("bundle", "resolver").ViaField("taskRef"), wc: enableAlphaAPIFields, + }, { + name: "stepOverride disallowed without alpha feature gate", + spec: v1beta1.TaskRunSpec{ + TaskRef: &v1beta1.TaskRef{ + Name: "foo", + }, + StepOverrides: []v1beta1.TaskRunStepOverride{{ + Name: "foo", + Resources: corev1.ResourceRequirements{ + Requests: corev1.ResourceList{corev1.ResourceMemory: corev1resources.MustParse("1Gi")}, + }, + }}, + }, + wantErr: apis.ErrGeneric("stepOverrides requires \"enable-api-fields\" feature gate to be \"alpha\" but it is \"stable\""), + }, { + name: "sidecarOverride disallowed without alpha feature gate", + spec: v1beta1.TaskRunSpec{ + TaskRef: &v1beta1.TaskRef{ + Name: "foo", + }, + SidecarOverrides: []v1beta1.TaskRunSidecarOverride{{ + Name: "foo", + Resources: corev1.ResourceRequirements{ + Requests: corev1.ResourceList{corev1.ResourceMemory: corev1resources.MustParse("1Gi")}, + }, + }}, + }, + wantErr: apis.ErrGeneric("sidecarOverrides requires \"enable-api-fields\" feature gate to be \"alpha\" but it is \"stable\""), + }, { + name: "duplicate stepOverride names", + spec: v1beta1.TaskRunSpec{ + TaskRef: &v1beta1.TaskRef{Name: "task"}, + StepOverrides: []v1beta1.TaskRunStepOverride{{ + Name: "foo", + Resources: corev1.ResourceRequirements{ + Requests: corev1.ResourceList{corev1.ResourceMemory: corev1resources.MustParse("1Gi")}, + }, + }, { + Name: "foo", + Resources: corev1.ResourceRequirements{ + Requests: corev1.ResourceList{corev1.ResourceMemory: corev1resources.MustParse("1Gi")}, + }, + }}, + }, + wantErr: apis.ErrMultipleOneOf("stepOverrides[1].name"), + wc: enableAlphaAPIFields, + }, { + name: "missing stepOverride names", + spec: 
v1beta1.TaskRunSpec{ + TaskRef: &v1beta1.TaskRef{Name: "task"}, + StepOverrides: []v1beta1.TaskRunStepOverride{{ + Resources: corev1.ResourceRequirements{ + Requests: corev1.ResourceList{corev1.ResourceMemory: corev1resources.MustParse("1Gi")}, + }, + }}, + }, + wantErr: apis.ErrMissingField("stepOverrides[0].name"), + wc: enableAlphaAPIFields, + }, { + name: "duplicate sidecarOverride names", + spec: v1beta1.TaskRunSpec{ + TaskRef: &v1beta1.TaskRef{Name: "task"}, + SidecarOverrides: []v1beta1.TaskRunSidecarOverride{{ + Name: "bar", + Resources: corev1.ResourceRequirements{ + Requests: corev1.ResourceList{corev1.ResourceMemory: corev1resources.MustParse("1Gi")}, + }, + }, { + Name: "bar", + Resources: corev1.ResourceRequirements{ + Requests: corev1.ResourceList{corev1.ResourceMemory: corev1resources.MustParse("1Gi")}, + }, + }}, + }, + wantErr: apis.ErrMultipleOneOf("sidecarOverrides[1].name"), + wc: enableAlphaAPIFields, + }, { + name: "missing sidecarOverride names", + spec: v1beta1.TaskRunSpec{ + TaskRef: &v1beta1.TaskRef{Name: "task"}, + SidecarOverrides: []v1beta1.TaskRunSidecarOverride{{ + Resources: corev1.ResourceRequirements{ + Requests: corev1.ResourceList{corev1.ResourceMemory: corev1resources.MustParse("1Gi")}, + }, + }}, + }, + wantErr: apis.ErrMissingField("sidecarOverrides[0].name"), + wc: enableAlphaAPIFields, }} for _, ts := range tests { t.Run(ts.name, func(t *testing.T) { diff --git a/pkg/apis/pipeline/v1beta1/zz_generated.deepcopy.go b/pkg/apis/pipeline/v1beta1/zz_generated.deepcopy.go index e72995598c5..d11aa17f9b4 100644 --- a/pkg/apis/pipeline/v1beta1/zz_generated.deepcopy.go +++ b/pkg/apis/pipeline/v1beta1/zz_generated.deepcopy.go @@ -1132,6 +1132,20 @@ func (in *PipelineTaskRunSpec) DeepCopyInto(out *PipelineTaskRunSpec) { *out = new(pod.Template) (*in).DeepCopyInto(*out) } + if in.StepOverrides != nil { + in, out := &in.StepOverrides, &out.StepOverrides + *out = make([]TaskRunStepOverride, len(*in)) + for i := range *in { + 
(*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.SidecarOverrides != nil { + in, out := &in.SidecarOverrides, &out.SidecarOverrides + *out = make([]TaskRunSidecarOverride, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } return } @@ -1659,6 +1673,23 @@ func (in *TaskRunResult) DeepCopy() *TaskRunResult { return out } +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *TaskRunSidecarOverride) DeepCopyInto(out *TaskRunSidecarOverride) { + *out = *in + in.Resources.DeepCopyInto(&out.Resources) + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TaskRunSidecarOverride. +func (in *TaskRunSidecarOverride) DeepCopy() *TaskRunSidecarOverride { + if in == nil { + return nil + } + out := new(TaskRunSidecarOverride) + in.DeepCopyInto(out) + return out +} + // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in *TaskRunSpec) DeepCopyInto(out *TaskRunSpec) { *out = *in @@ -1706,6 +1737,20 @@ func (in *TaskRunSpec) DeepCopyInto(out *TaskRunSpec) { (*in)[i].DeepCopyInto(&(*out)[i]) } } + if in.StepOverrides != nil { + in, out := &in.StepOverrides, &out.StepOverrides + *out = make([]TaskRunStepOverride, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.SidecarOverrides != nil { + in, out := &in.SidecarOverrides, &out.SidecarOverrides + *out = make([]TaskRunSidecarOverride, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } return } @@ -1806,6 +1851,23 @@ func (in *TaskRunStatusFields) DeepCopy() *TaskRunStatusFields { return out } +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *TaskRunStepOverride) DeepCopyInto(out *TaskRunStepOverride) { + *out = *in + in.Resources.DeepCopyInto(&out.Resources) + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TaskRunStepOverride. +func (in *TaskRunStepOverride) DeepCopy() *TaskRunStepOverride { + if in == nil { + return nil + } + out := new(TaskRunStepOverride) + in.DeepCopyInto(out) + return out +} + // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in *TaskSpec) DeepCopyInto(out *TaskSpec) { *out = *in diff --git a/pkg/internal/deprecated/override.go b/pkg/internal/deprecated/override.go deleted file mode 100644 index 0ed60ccc72e..00000000000 --- a/pkg/internal/deprecated/override.go +++ /dev/null @@ -1,85 +0,0 @@ -/* -Copyright 2021 The Tekton Authors - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package deprecated - -import ( - "context" - - "github.com/tektoncd/pipeline/pkg/apis/config" - "github.com/tektoncd/pipeline/pkg/apis/pipeline" - "github.com/tektoncd/pipeline/pkg/pod" - corev1 "k8s.io/api/core/v1" -) - -// NewOverrideWorkingDirTransformer returns a pod.Transformer that will override the workingDir on pods if needed. 
-func NewOverrideWorkingDirTransformer(ctx context.Context) pod.Transformer { - return func(p *corev1.Pod) (*corev1.Pod, error) { - if shouldOverrideWorkingDir(ctx) { - for i, c := range p.Spec.Containers { - if pod.IsContainerStep(c.Name) { - if c.WorkingDir == "" { - p.Spec.Containers[i].WorkingDir = pipeline.WorkspaceDir - } - } - } - } - return p, nil - } -} - -// shouldOverrideWorkingDir returns a bool indicating whether a Pod should have its -// working directory overwritten with /workspace or if it should be -// left unmodified. -// -// For further reference see https://github.com/tektoncd/pipeline/issues/1836 -func shouldOverrideWorkingDir(ctx context.Context) bool { - cfg := config.FromContextOrDefaults(ctx) - return !cfg.FeatureFlags.DisableWorkingDirOverwrite -} - -// NewOverrideHomeTransformer returns a pod.Transformer that will override HOME if needed -func NewOverrideHomeTransformer(ctx context.Context) pod.Transformer { - return func(p *corev1.Pod) (*corev1.Pod, error) { - if shouldOverrideHomeEnv(ctx) { - for i, c := range p.Spec.Containers { - hasHomeEnv := false - for _, e := range c.Env { - if e.Name == "HOME" { - hasHomeEnv = true - } - } - if !hasHomeEnv { - p.Spec.Containers[i].Env = append(p.Spec.Containers[i].Env, corev1.EnvVar{ - Name: "HOME", - Value: pipeline.HomeDir, - }) - } - } - } - return p, nil - } -} - -// shouldOverrideHomeEnv returns a bool indicating whether a Pod should have its -// $HOME environment variable overwritten with /tekton/home or if it should be -// left unmodified. 
-// -// For further reference see https://github.com/tektoncd/pipeline/issues/2013 -func shouldOverrideHomeEnv(ctx context.Context) bool { - cfg := config.FromContextOrDefaults(ctx) - return !cfg.FeatureFlags.DisableHomeEnvOverwrite -} diff --git a/pkg/internal/deprecated/override_test.go b/pkg/internal/deprecated/override_test.go deleted file mode 100644 index ced6404b4ff..00000000000 --- a/pkg/internal/deprecated/override_test.go +++ /dev/null @@ -1,309 +0,0 @@ -/* -Copyright 2021 The Tekton Authors - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ -package deprecated_test - -import ( - "context" - "testing" - - "github.com/google/go-cmp/cmp" - "github.com/tektoncd/pipeline/pkg/apis/config" - "github.com/tektoncd/pipeline/pkg/internal/deprecated" - "github.com/tektoncd/pipeline/test/diff" - corev1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - logtesting "knative.dev/pkg/logging/testing" -) - -const ( - featureFlagDisableHomeEnvKey = "disable-home-env-overwrite" - featureFlagDisableWorkingDirKey = "disable-working-directory-overwrite" -) - -func TestNewOverrideWorkingDirTransformer(t *testing.T) { - - for _, tc := range []struct { - description string - configMap *corev1.ConfigMap - podspec corev1.PodSpec - expected corev1.PodSpec - }{{ - description: "Default behaviour: A missing disable-working-directory-overwrite should mean true, so no overwrite", - configMap: &corev1.ConfigMap{ - ObjectMeta: metav1.ObjectMeta{Name: config.GetFeatureFlagsConfigName(), Namespace: "tekton-pipelines"}, - Data: map[string]string{}, - }, - podspec: corev1.PodSpec{ - Containers: []corev1.Container{{ - Name: "step-bar", - Image: "foo", - }, { - Name: "sidecar-bar", - Image: "foo", - }, { - Name: "step-bar-wg", - Image: "foo", - WorkingDir: "/foobar", - }}, - }, - expected: corev1.PodSpec{ - Containers: []corev1.Container{{ - Name: "step-bar", - Image: "foo", - }, { - Name: "sidecar-bar", - Image: "foo", - }, { - Name: "step-bar-wg", - Image: "foo", - WorkingDir: "/foobar", - }}, - }, - }, { - description: "Setting disable-working-directory-overwrite to false should result in we don't disable the behavior, so there should be an overwrite", - configMap: &corev1.ConfigMap{ - ObjectMeta: metav1.ObjectMeta{Name: config.GetFeatureFlagsConfigName(), Namespace: "tekton-pipelines"}, - Data: map[string]string{ - featureFlagDisableWorkingDirKey: "false", - }, - }, - podspec: corev1.PodSpec{ - Containers: []corev1.Container{{ - Name: "step-bar", - Image: "foo", - }, { - Name: "sidecar-bar", - Image: "foo", - }, { 
- Name: "step-bar-wg", - Image: "foo", - WorkingDir: "/foobar", - }}, - }, - expected: corev1.PodSpec{ - Containers: []corev1.Container{{ - Name: "step-bar", - Image: "foo", - WorkingDir: "/workspace", - }, { - Name: "sidecar-bar", - Image: "foo", - }, { - Name: "step-bar-wg", - Image: "foo", - WorkingDir: "/foobar", - }}, - }, - }, { - description: "Setting disable-working-directory-overwrite to true should disable the overwrite, so no overwrite", - configMap: &corev1.ConfigMap{ - ObjectMeta: metav1.ObjectMeta{Name: config.GetFeatureFlagsConfigName(), Namespace: "tekton-pipelines"}, - Data: map[string]string{ - featureFlagDisableWorkingDirKey: "true", - }, - }, - podspec: corev1.PodSpec{ - Containers: []corev1.Container{{ - Name: "step-bar", - Image: "foo", - }, { - Name: "sidecar-bar", - Image: "foo", - }, { - Name: "step-bar-wg", - Image: "foo", - WorkingDir: "/foobar", - }}, - }, - expected: corev1.PodSpec{ - Containers: []corev1.Container{{ - Name: "step-bar", - Image: "foo", - }, { - Name: "sidecar-bar", - Image: "foo", - }, { - Name: "step-bar-wg", - Image: "foo", - WorkingDir: "/foobar", - }}, - }, - }} { - t.Run(tc.description, func(t *testing.T) { - store := config.NewStore(logtesting.TestLogger(t)) - store.OnConfigChanged(tc.configMap) - ctx := store.ToContext(context.Background()) - f := deprecated.NewOverrideWorkingDirTransformer(ctx) - got, err := f(&corev1.Pod{Spec: tc.podspec}) - if err != nil { - t.Fatalf("Transformer failed: %v", err) - } - if d := cmp.Diff(tc.expected, got.Spec); d != "" { - t.Errorf("Diff pod: %s", diff.PrintWantGot(d)) - } - }) - } -} - -func TestShouldOverrideHomeEnv(t *testing.T) { - for _, tc := range []struct { - description string - configMap *corev1.ConfigMap - podspec corev1.PodSpec - expected corev1.PodSpec - }{{ - description: "Default behaviour: A missing disable-home-env-overwrite flag should result in no overwrite", - configMap: &corev1.ConfigMap{ - ObjectMeta: metav1.ObjectMeta{Name: 
config.GetFeatureFlagsConfigName(), Namespace: "tekton-pipelines"}, - Data: map[string]string{}, - }, - podspec: corev1.PodSpec{ - Containers: []corev1.Container{{ - Name: "step-bar", - Image: "foo", - Env: []corev1.EnvVar{{ - Name: "HOME", - Value: "/home", - }}, - }, { - Name: "step-baz", - Image: "foo", - Env: []corev1.EnvVar{{ - Name: "FOO", - Value: "bar", - }}, - }}, - }, - expected: corev1.PodSpec{ - Containers: []corev1.Container{{ - Name: "step-bar", - Image: "foo", - Env: []corev1.EnvVar{{ - Name: "HOME", - Value: "/home", - }}, - }, { - Name: "step-baz", - Image: "foo", - Env: []corev1.EnvVar{{ - Name: "FOO", - Value: "bar", - }}, - }}, - }, - }, { - description: "Setting disable-home-env-overwrite to false should result in an overwrite", - configMap: &corev1.ConfigMap{ - ObjectMeta: metav1.ObjectMeta{Name: config.GetFeatureFlagsConfigName(), Namespace: "tekton-pipelines"}, - Data: map[string]string{ - featureFlagDisableHomeEnvKey: "false", - }, - }, - podspec: corev1.PodSpec{ - Containers: []corev1.Container{{ - Name: "step-bar", - Image: "foo", - Env: []corev1.EnvVar{{ - Name: "HOME", - Value: "/home", - }}, - }, { - Name: "step-baz", - Image: "foo", - Env: []corev1.EnvVar{{ - Name: "FOO", - Value: "bar", - }}, - }}, - }, - expected: corev1.PodSpec{ - Containers: []corev1.Container{{ - Name: "step-bar", - Image: "foo", - Env: []corev1.EnvVar{{ - Name: "HOME", - Value: "/home", - }}, - }, { - Name: "step-baz", - Image: "foo", - Env: []corev1.EnvVar{{ - Name: "FOO", - Value: "bar", - }, { - Name: "HOME", - Value: "/tekton/home", - }}, - }}, - }, - }, { - description: "Setting disable-home-env-overwrite to true should result in no overwrite", - configMap: &corev1.ConfigMap{ - ObjectMeta: metav1.ObjectMeta{Name: config.GetFeatureFlagsConfigName(), Namespace: "tekton-pipelines"}, - Data: map[string]string{ - featureFlagDisableHomeEnvKey: "true", - }, - }, - podspec: corev1.PodSpec{ - Containers: []corev1.Container{{ - Name: "step-bar", - Image: "foo", - 
Env: []corev1.EnvVar{{ - Name: "HOME", - Value: "/home", - }}, - }, { - Name: "step-baz", - Image: "foo", - Env: []corev1.EnvVar{{ - Name: "FOO", - Value: "bar", - }}, - }}, - }, - expected: corev1.PodSpec{ - Containers: []corev1.Container{{ - Name: "step-bar", - Image: "foo", - Env: []corev1.EnvVar{{ - Name: "HOME", - Value: "/home", - }}, - }, { - Name: "step-baz", - Image: "foo", - Env: []corev1.EnvVar{{ - Name: "FOO", - Value: "bar", - }}, - }}, - }, - }} { - t.Run(tc.description, func(t *testing.T) { - store := config.NewStore(logtesting.TestLogger(t)) - store.OnConfigChanged(tc.configMap) - ctx := store.ToContext(context.Background()) - f := deprecated.NewOverrideHomeTransformer(ctx) - got, err := f(&corev1.Pod{Spec: tc.podspec}) - if err != nil { - t.Fatalf("Transformer failed: %v", err) - } - if d := cmp.Diff(tc.expected, got.Spec); d != "" { - t.Errorf("Diff pod: %s", diff.PrintWantGot(d)) - } - }) - } -} diff --git a/pkg/pod/entrypoint_lookup.go b/pkg/pod/entrypoint_lookup.go index 84db0dfc378..80dba700400 100644 --- a/pkg/pod/entrypoint_lookup.go +++ b/pkg/pod/entrypoint_lookup.go @@ -36,7 +36,7 @@ type EntrypointCache interface { // the reference referred to an index, the returned digest will be the // index's digest, not any platform-specific image contained by the // index. - get(ctx context.Context, ref name.Reference, namespace, serviceAccountName string) (*imageData, error) + get(ctx context.Context, ref name.Reference, namespace, serviceAccountName string, imagePullSecrets []corev1.LocalObjectReference) (*imageData, error) } // imageData contains information looked up about an image or multi-platform image index. @@ -50,7 +50,7 @@ type imageData struct { // // Images that are not specified by digest will be specified by digest after // lookup in the resulting list of containers. 
-func resolveEntrypoints(ctx context.Context, cache EntrypointCache, namespace, serviceAccountName string, steps []corev1.Container) ([]corev1.Container, error) { +func resolveEntrypoints(ctx context.Context, cache EntrypointCache, namespace, serviceAccountName string, imagePullSecrets []corev1.LocalObjectReference, steps []corev1.Container) ([]corev1.Container, error) { // Keep a local cache of name->imageData lookups, just for the scope of // resolving this set of steps. If the image is pushed to before the // next run, we need to resolve its digest and commands again, but we @@ -72,7 +72,7 @@ func resolveEntrypoints(ctx context.Context, cache EntrypointCache, namespace, s id = cid } else { // Look it up for real. - lid, err := cache.get(ctx, ref, namespace, serviceAccountName) + lid, err := cache.get(ctx, ref, namespace, serviceAccountName, imagePullSecrets) if err != nil { return nil, err } diff --git a/pkg/pod/entrypoint_lookup_impl.go b/pkg/pod/entrypoint_lookup_impl.go index ba618e12018..82c487dca3e 100644 --- a/pkg/pod/entrypoint_lookup_impl.go +++ b/pkg/pod/entrypoint_lookup_impl.go @@ -28,6 +28,7 @@ import ( "github.com/google/go-containerregistry/pkg/v1/remote" lru "github.com/hashicorp/golang-lru" specs "github.com/opencontainers/image-spec/specs-go/v1" + corev1 "k8s.io/api/core/v1" "k8s.io/client-go/kubernetes" ) @@ -56,7 +57,7 @@ func NewEntrypointCache(kubeclient kubernetes.Interface) (EntrypointCache, error // It also returns the digest associated with the given reference. If the // reference referred to an index, the returned digest will be the index's // digest, not any platform-specific image contained by the index. 
-func (e *entrypointCache) get(ctx context.Context, ref name.Reference, namespace, serviceAccountName string) (*imageData, error) { +func (e *entrypointCache) get(ctx context.Context, ref name.Reference, namespace, serviceAccountName string, imagePullSecrets []corev1.LocalObjectReference) (*imageData, error) { // If image is specified by digest, check the local cache. if digest, ok := ref.(name.Digest); ok { if id, ok := e.lru.Get(digest.String()); ok { @@ -64,10 +65,15 @@ func (e *entrypointCache) get(ctx context.Context, ref name.Reference, namespace } } + pullSecretsNames := make([]string, 0, len(imagePullSecrets)) + for _, ps := range imagePullSecrets { + pullSecretsNames = append(pullSecretsNames, ps.Name) + } // Consult the remote registry, using imagePullSecrets. kc, err := k8schain.New(ctx, e.kubeclient, k8schain.Options{ Namespace: namespace, ServiceAccountName: serviceAccountName, + ImagePullSecrets: pullSecretsNames, }) if err != nil { return nil, fmt.Errorf("error creating k8schain: %v", err) @@ -106,32 +112,10 @@ func (e *entrypointCache) get(ctx context.Context, ref name.Reference, namespace if err != nil { return nil, err } - mf, err := idx.IndexManifest() + id.commands, err = buildCommandMap(idx) if err != nil { return nil, err } - for _, desc := range mf.Manifests { - plat := platforms.Format(specs.Platform{ - OS: desc.Platform.OS, - Architecture: desc.Platform.Architecture, - Variant: desc.Platform.Variant, - // TODO(jasonhall): Figure out how to determine - // osversion from the entrypoint binary, to - // select the right Windows image if multiple - // are provided (e.g., golang). 
- }) - if _, found := id.commands[plat]; found { - return nil, fmt.Errorf("duplicate image found for platform: %s", plat) - } - img, err := idx.Image(desc.Digest) - if err != nil { - return nil, err - } - id.commands[plat], _, err = imageInfo(img) - if err != nil { - return nil, err - } - } default: return nil, errors.New("unsupported media type for image reference") } @@ -142,6 +126,35 @@ func (e *entrypointCache) get(ctx context.Context, ref name.Reference, namespace return id, nil } +func buildCommandMap(idx v1.ImageIndex) (map[string][]string, error) { + // Map platform strings to digest, to handle some ~malformed images + // that specify the same manifest multiple times. + platToDigest := map[string]v1.Hash{} + + cmds := map[string][]string{} + + mf, err := idx.IndexManifest() + if err != nil { + return nil, err + } + for _, desc := range mf.Manifests { + plat := desc.Platform.String() + if got, found := platToDigest[plat]; found && got != desc.Digest { + return nil, fmt.Errorf("duplicate unique image found for platform: %s: found %s and %s", plat, got, desc.Digest) + } + platToDigest[plat] = desc.Digest + img, err := idx.Image(desc.Digest) + if err != nil { + return nil, err + } + cmds[plat], _, err = imageInfo(img) + if err != nil { + return nil, err + } + } + return cmds, nil +} + func imageInfo(img v1.Image) (cmd []string, platform string, err error) { cf, err := img.ConfigFile() if err != nil { diff --git a/pkg/pod/entrypoint_lookup_impl_test.go b/pkg/pod/entrypoint_lookup_impl_test.go new file mode 100644 index 00000000000..0b0972c4bf0 --- /dev/null +++ b/pkg/pod/entrypoint_lookup_impl_test.go @@ -0,0 +1,264 @@ +/* +Copyright 2022 The Tekton Authors + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package pod + +import ( + "context" + "encoding/base64" + "fmt" + "net/http" + "net/http/httptest" + "net/url" + "strings" + "testing" + + "github.com/google/go-containerregistry/pkg/name" + "github.com/google/go-containerregistry/pkg/registry" + v1 "github.com/google/go-containerregistry/pkg/v1" + "github.com/google/go-containerregistry/pkg/v1/empty" + "github.com/google/go-containerregistry/pkg/v1/mutate" + "github.com/google/go-containerregistry/pkg/v1/random" + "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1" + remotetest "github.com/tektoncd/pipeline/test" + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + fakeclient "k8s.io/client-go/kubernetes/fake" +) + +const ( + username = "foo" + password = "bar" + imagePullSecretsName = "secret" + nameSpace = "ns" +) + +type fakeHTTP struct { + reg http.Handler +} + +func newfakeHTTP() fakeHTTP { + reg := registry.New() + return fakeHTTP{ + reg: reg, + } +} + +func (f *fakeHTTP) ServeHTTP(w http.ResponseWriter, r *http.Request) { + // Request authentication for ping request. + // For further reference see https://docs.docker.com/registry/spec/api/#api-version-check. + if r.URL.Path == "/v2/" && r.Method == http.MethodGet { + w.Header().Add("WWW-Authenticate", "basic") + w.WriteHeader(http.StatusUnauthorized) + return + } + // Check auth if we've fetching the image. + if strings.HasPrefix(r.URL.Path, "/v2/task") && r.Method == "GET" { + u, p, ok := r.BasicAuth() + if !ok || username != u || password != p { + w.WriteHeader(http.StatusUnauthorized) + return + } + } + // Default to open. 
+	f.reg.ServeHTTP(w, r)
+}
+
+func generateSecret(host string, username string, password string) *corev1.Secret {
+	return &corev1.Secret{
+		ObjectMeta: metav1.ObjectMeta{
+			Name:      imagePullSecretsName,
+			Namespace: nameSpace,
+		},
+		Type: corev1.SecretTypeDockercfg,
+		Data: map[string][]byte{
+			corev1.DockerConfigKey: []byte(
+				fmt.Sprintf(`{%q: {"auth": %q}}`,
+					host,
+					base64.StdEncoding.EncodeToString([]byte(username+":"+password)),
+				),
+			),
+		},
+	}
+}
+
+func TestGetImageWithImagePullSecrets(t *testing.T) {
+	ctx := context.Background()
+	ctx, cancel := context.WithCancel(ctx)
+	defer cancel()
+
+	ftp := newfakeHTTP()
+	s := httptest.NewServer(&ftp)
+	defer s.Close()
+
+	u, err := url.Parse(s.URL)
+	if err != nil {
+		t.Errorf("parsing URL failed with an error: %v", err)
+	}
+
+	task := &v1beta1.Task{
+		TypeMeta: metav1.TypeMeta{
+			APIVersion: "tekton.dev/v1beta1",
+			Kind:       "Task"},
+		ObjectMeta: metav1.ObjectMeta{
+			Name: "test-create-image"},
+	}
+
+	ref, err := remotetest.CreateImageWithAnnotations(u.Host+"/task/test-create-image", remotetest.DefaultObjectAnnotationMapper, task)
+	if err != nil {
+		t.Errorf("uploading image failed unexpectedly with an error: %v", err)
+	}
+
+	imgRef, err := name.ParseReference(ref)
+	if err != nil {
+		t.Errorf("digest %s is not a valid reference: %v", ref, err)
+	}
+
+	for _, tc := range []struct {
+		name             string
+		basicSecret      *corev1.Secret
+		imagePullSecrets []corev1.LocalObjectReference
+		wantErr          bool
+	}{{
+		name:             "correct secret",
+		basicSecret:      generateSecret(u.Host, username, password),
+		imagePullSecrets: []corev1.LocalObjectReference{{Name: imagePullSecretsName}},
+		wantErr:          false,
+	}, {
+		name:             "unauthorized secret",
+		basicSecret:      generateSecret(u.Host, username, "wrong password"),
+		imagePullSecrets: []corev1.LocalObjectReference{{Name: imagePullSecretsName}},
+		wantErr:          true,
+	}, {
+		name:        "empty secret",
+		basicSecret: &corev1.Secret{ObjectMeta: metav1.ObjectMeta{Name: "foo"}},
+		imagePullSecrets:
[]corev1.LocalObjectReference{{Name: imagePullSecretsName}},
+		wantErr: true,
+	}, {
+		name:             "no basic secret",
+		basicSecret:      &corev1.Secret{},
+		imagePullSecrets: []corev1.LocalObjectReference{{Name: imagePullSecretsName}},
+		wantErr:          true,
+	}, {
+		name:             "no imagePullSecrets",
+		basicSecret:      generateSecret(u.Host, username, password),
+		imagePullSecrets: nil,
+		wantErr:          true,
+	}} {
+		t.Run(tc.name, func(t *testing.T) {
+			client := fakeclient.NewSimpleClientset(&corev1.ServiceAccount{
+				ObjectMeta: metav1.ObjectMeta{
+					Name:      "default",
+					Namespace: nameSpace,
+				},
+			}, tc.basicSecret)
+
+			entrypointCache, err := NewEntrypointCache(client)
+			if err != nil {
+				t.Errorf("creating entrypointCache failed with an error: %v", err)
+			}
+
+			i, err := entrypointCache.get(ctx, imgRef, nameSpace, "", tc.imagePullSecrets)
+			if (err != nil) != tc.wantErr {
+				t.Fatalf("get() = %+v, %v, wantErr %t", i, err, tc.wantErr)
+			}
+		})
+	}
+}
+
+func mustRandomImage(t *testing.T) v1.Image {
+	img, err := random.Image(10, 10)
+	if err != nil {
+		t.Fatal(err)
+	}
+	return img
+}
+
+func TestBuildCommandMap(t *testing.T) {
+	img := mustRandomImage(t)
+
+	for _, c := range []struct {
+		desc    string
+		idx     v1.ImageIndex
+		wantErr bool
+	}{{
+		// Valid multi-platform image even though some platforms only differ by variant or osversion.
+ desc: "valid index", + idx: mutate.AppendManifests(empty.Index, mutate.IndexAddendum{ + Add: mustRandomImage(t), + Descriptor: v1.Descriptor{ + Platform: &v1.Platform{OS: "linux", Architecture: "amd64"}, + }, + }, mutate.IndexAddendum{ + Add: mustRandomImage(t), + Descriptor: v1.Descriptor{ + Platform: &v1.Platform{OS: "linux", Architecture: "arm64", Variant: "7"}, + }, + }, mutate.IndexAddendum{ + Add: mustRandomImage(t), + Descriptor: v1.Descriptor{ + Platform: &v1.Platform{OS: "linux", Architecture: "arm64", Variant: "8"}, + }, + }, mutate.IndexAddendum{ + Add: mustRandomImage(t), + Descriptor: v1.Descriptor{ + Platform: &v1.Platform{OS: "windows", Architecture: "amd64", OSVersion: "1.2.3"}, + }, + }, mutate.IndexAddendum{ + Add: mustRandomImage(t), + Descriptor: v1.Descriptor{ + Platform: &v1.Platform{OS: "windows", Architecture: "amd64", OSVersion: "4.5.6"}, + }, + }), + }, { + desc: "valid index, with dupes", + idx: mutate.AppendManifests(empty.Index, mutate.IndexAddendum{ + Add: img, + Descriptor: v1.Descriptor{ + Platform: &v1.Platform{OS: "linux", Architecture: "amd64"}, + }, + }, mutate.IndexAddendum{ + Add: img, + Descriptor: v1.Descriptor{ + Platform: &v1.Platform{OS: "linux", Architecture: "amd64"}, + }, + }), + }, { + desc: "invalid index, dupes with different digests", + idx: mutate.AppendManifests(empty.Index, mutate.IndexAddendum{ + Add: mustRandomImage(t), + Descriptor: v1.Descriptor{ + Platform: &v1.Platform{OS: "linux", Architecture: "amd64"}, + }, + }, mutate.IndexAddendum{ + Add: mustRandomImage(t), + Descriptor: v1.Descriptor{ + Platform: &v1.Platform{OS: "linux", Architecture: "amd64"}, + }, + }), + wantErr: true, + }} { + t.Run(c.desc, func(t *testing.T) { + _, err := buildCommandMap(c.idx) + gotErr := (err != nil) + if gotErr != c.wantErr { + t.Fatalf("got err: %v, want err: %t", err, c.wantErr) + } + }) + } +} diff --git a/pkg/pod/entrypoint_lookup_test.go b/pkg/pod/entrypoint_lookup_test.go index 694ec56b1e7..76d89d20a40 100644 --- 
a/pkg/pod/entrypoint_lookup_test.go +++ b/pkg/pod/entrypoint_lookup_test.go @@ -66,7 +66,7 @@ func TestResolveEntrypoints(t *testing.T) { "reg.io/multi/arch:latest": &data{id: multi}, } - got, err := resolveEntrypoints(ctx, cache, "namespace", "serviceAccountName", []corev1.Container{{ + got, err := resolveEntrypoints(ctx, cache, "namespace", "serviceAccountName", []corev1.LocalObjectReference{{Name: "imageSecret"}}, []corev1.Container{{ // This step specifies its command, so there's nothing to // resolve. Image: "fully-specified", @@ -143,7 +143,7 @@ type data struct { seen bool // Whether the image has been looked up before. } -func (f fakeCache) get(ctx context.Context, ref name.Reference, _, _ string) (*imageData, error) { +func (f fakeCache) get(ctx context.Context, ref name.Reference, _, _ string, _ []corev1.LocalObjectReference) (*imageData, error) { if d, ok := ref.(name.Digest); ok { if data, found := f[d.String()]; found { return data.id, nil diff --git a/pkg/pod/pod.go b/pkg/pod/pod.go index b3c024c2486..1c63d690f2c 100644 --- a/pkg/pod/pod.go +++ b/pkg/pod/pod.go @@ -162,8 +162,15 @@ func (b *Builder) Build(ctx context.Context, taskRun *v1beta1.TaskRun, taskSpec initContainers = append(initContainers, *workingDirInit) } + // By default, use an empty pod template and take the one defined in the task run spec if any + podTemplate := pod.Template{} + + if taskRun.Spec.PodTemplate != nil { + podTemplate = *taskRun.Spec.PodTemplate + } + // Resolve entrypoint for any steps that don't specify command. 
- stepContainers, err = resolveEntrypoints(ctx, b.EntrypointCache, taskRun.Namespace, taskRun.Spec.ServiceAccountName, stepContainers) + stepContainers, err = resolveEntrypoints(ctx, b.EntrypointCache, taskRun.Namespace, taskRun.Spec.ServiceAccountName, podTemplate.ImagePullSecrets, stepContainers) if err != nil { return nil, err } @@ -263,13 +270,6 @@ func (b *Builder) Build(ctx context.Context, taskRun *v1beta1.TaskRun, taskSpec stepContainers[i].Name = names.SimpleNameGenerator.RestrictLength(StepName(s.Name, i)) } - // By default, use an empty pod template and take the one defined in the task run spec if any - podTemplate := pod.Template{} - - if taskRun.Spec.PodTemplate != nil { - podTemplate = *taskRun.Spec.PodTemplate - } - // Add podTemplate Volumes to the explicitly declared use volumes volumes = append(volumes, taskSpec.Volumes...) volumes = append(volumes, podTemplate.Volumes...) diff --git a/pkg/pod/pod_test.go b/pkg/pod/pod_test.go index 66ef463d4e2..06c6dbd4d3c 100644 --- a/pkg/pod/pod_test.go +++ b/pkg/pod/pod_test.go @@ -107,7 +107,6 @@ func TestPodBuild(t *testing.T) { trName string ts v1beta1.TaskSpec featureFlags map[string]string - overrideHomeEnv *bool want *corev1.PodSpec wantAnnotations map[string]string wantPodName string @@ -1653,7 +1652,6 @@ func TestPodBuildwithAlphaAPIEnabled(t *testing.T) { trs v1beta1.TaskRunSpec trAnnotation map[string]string ts v1beta1.TaskSpec - overrideHomeEnv *bool want *corev1.PodSpec wantAnnotations map[string]string }{{ diff --git a/pkg/reconciler/pipelinerun/pipelinerun.go b/pkg/reconciler/pipelinerun/pipelinerun.go index e3baec8bfaf..cfecb4a0f66 100644 --- a/pkg/reconciler/pipelinerun/pipelinerun.go +++ b/pkg/reconciler/pipelinerun/pipelinerun.go @@ -324,12 +324,7 @@ func (c *Reconciler) reconcile(ctx context.Context, pr *v1beta1.PipelineRun, get // When pipeline run is pending, return to avoid creating the task if pr.IsPending() { - pr.Status.SetCondition(&apis.Condition{ - Type: apis.ConditionSucceeded, - 
Status: corev1.ConditionUnknown, - Reason: ReasonPending, - Message: fmt.Sprintf("PipelineRun %q is pending", pr.Name), - }) + pr.Status.MarkRunning(ReasonPending, fmt.Sprintf("PipelineRun %q is pending", pr.Name)) return nil } @@ -734,10 +729,7 @@ func (c *Reconciler) createTaskRun(ctx context.Context, rprt *resources.Resolved // is a retry addRetryHistory(tr) clearStatus(tr) - tr.Status.SetCondition(&apis.Condition{ - Type: apis.ConditionSucceeded, - Status: corev1.ConditionUnknown, - }) + tr.Status.MarkResourceOngoing("", "") logger.Infof("Updating taskrun %s with cleared status and retry history (length: %d).", tr.GetName(), len(tr.Status.RetriesStatus)) return c.PipelineClientSet.TektonV1beta1().TaskRuns(pr.Namespace).UpdateStatus(ctx, tr, metav1.UpdateOptions{}) } diff --git a/pkg/reconciler/pipelinerun/pipelinerun_test.go b/pkg/reconciler/pipelinerun/pipelinerun_test.go index 09d5638b643..39d9d2825f5 100644 --- a/pkg/reconciler/pipelinerun/pipelinerun_test.go +++ b/pkg/reconciler/pipelinerun/pipelinerun_test.go @@ -5153,7 +5153,7 @@ func TestReconcileWithWhenExpressionsWithTaskResults(t *testing.T) { wantEvents := []string{ "Normal Started", - "Normal Running Tasks Completed: 1 \\(Failed: 0, Cancelled 0\\), Incomplete: 1, Skipped: 2", + "Normal Running Tasks Completed: 1 \\(Failed: 0, Cancelled 0\\), Incomplete: 2, Skipped: 1", } pipelineRun, clients := prt.reconcileRun("foo", "test-pipeline-run-different-service-accs", wantEvents, false) @@ -5206,14 +5206,12 @@ func TestReconcileWithWhenExpressionsWithTaskResults(t *testing.T) { Operator: "in", Values: []string{"missing"}, }}, - }, { - Name: "d-task", }} if d := cmp.Diff(actualSkippedTasks, expectedSkippedTasks); d != "" { t.Errorf("expected to find Skipped Tasks %v. 
Diff %s", expectedSkippedTasks, diff.PrintWantGot(d)) } - skippedTasks := []string{"c-task", "d-task"} + skippedTasks := []string{"c-task"} for _, skippedTask := range skippedTasks { labelSelector := fmt.Sprintf("tekton.dev/pipelineTask=%s,tekton.dev/pipelineRun=test-pipeline-run-different-service-accs", skippedTask) actualSkippedTask, err := clients.Pipeline.TektonV1beta1().TaskRuns("foo").List(prt.TestAssets.Ctx, metav1.ListOptions{ diff --git a/pkg/reconciler/taskrun/resources/apply.go b/pkg/reconciler/taskrun/resources/apply.go index fcbe23c13ae..c8dcb75320d 100644 --- a/pkg/reconciler/taskrun/resources/apply.go +++ b/pkg/reconciler/taskrun/resources/apply.go @@ -111,10 +111,10 @@ func ApplyResources(spec *v1beta1.TaskSpec, resolvedResources map[string]v1beta1 // ApplyContexts applies the substitution from $(context.(taskRun|task).*) with the specified values. // Uses "" as a default if a value is not available. -func ApplyContexts(spec *v1beta1.TaskSpec, rtr *ResolvedTaskResources, tr *v1beta1.TaskRun) *v1beta1.TaskSpec { +func ApplyContexts(spec *v1beta1.TaskSpec, taskName string, tr *v1beta1.TaskRun) *v1beta1.TaskSpec { replacements := map[string]string{ "context.taskRun.name": tr.Name, - "context.task.name": rtr.TaskName, + "context.task.name": taskName, "context.taskRun.namespace": tr.Namespace, "context.taskRun.uid": string(tr.ObjectMeta.UID), "context.task.retry-count": strconv.Itoa(len(tr.Status.RetriesStatus)), diff --git a/pkg/reconciler/taskrun/resources/apply_test.go b/pkg/reconciler/taskrun/resources/apply_test.go index 6375e74b8b6..29c256969b6 100644 --- a/pkg/reconciler/taskrun/resources/apply_test.go +++ b/pkg/reconciler/taskrun/resources/apply_test.go @@ -876,16 +876,14 @@ func TestApplyWorkspaces_IsolatedWorkspaces(t *testing.T) { func TestContext(t *testing.T) { for _, tc := range []struct { description string - rtr resources.ResolvedTaskResources + taskName string tr v1beta1.TaskRun spec v1beta1.TaskSpec want v1beta1.TaskSpec }{{ 
description: "context taskName replacement without taskRun in spec container", - rtr: resources.ResolvedTaskResources{ - TaskName: "Task1", - }, - tr: v1beta1.TaskRun{}, + taskName: "Task1", + tr: v1beta1.TaskRun{}, spec: v1beta1.TaskSpec{ Steps: []v1beta1.Step{{ Container: corev1.Container{ @@ -904,9 +902,7 @@ func TestContext(t *testing.T) { }, }, { description: "context taskName replacement with taskRun in spec container", - rtr: resources.ResolvedTaskResources{ - TaskName: "Task1", - }, + taskName: "Task1", tr: v1beta1.TaskRun{ ObjectMeta: metav1.ObjectMeta{ Name: "taskrunName", @@ -930,9 +926,7 @@ func TestContext(t *testing.T) { }, }, { description: "context taskRunName replacement with defined taskRun in spec container", - rtr: resources.ResolvedTaskResources{ - TaskName: "Task1", - }, + taskName: "Task1", tr: v1beta1.TaskRun{ ObjectMeta: metav1.ObjectMeta{ Name: "taskrunName", @@ -956,10 +950,8 @@ func TestContext(t *testing.T) { }, }, { description: "context taskRunName replacement with no defined taskRun name in spec container", - rtr: resources.ResolvedTaskResources{ - TaskName: "Task1", - }, - tr: v1beta1.TaskRun{}, + taskName: "Task1", + tr: v1beta1.TaskRun{}, spec: v1beta1.TaskSpec{ Steps: []v1beta1.Step{{ Container: corev1.Container{ @@ -978,10 +970,8 @@ func TestContext(t *testing.T) { }, }, { description: "context taskRun namespace replacement with no defined namepsace in spec container", - rtr: resources.ResolvedTaskResources{ - TaskName: "Task1", - }, - tr: v1beta1.TaskRun{}, + taskName: "Task1", + tr: v1beta1.TaskRun{}, spec: v1beta1.TaskSpec{ Steps: []v1beta1.Step{{ Container: corev1.Container{ @@ -1000,9 +990,7 @@ func TestContext(t *testing.T) { }, }, { description: "context taskRun namespace replacement with defined namepsace in spec container", - rtr: resources.ResolvedTaskResources{ - TaskName: "Task1", - }, + taskName: "Task1", tr: v1beta1.TaskRun{ ObjectMeta: metav1.ObjectMeta{ Name: "taskrunName", @@ -1027,7 +1015,6 @@ func 
TestContext(t *testing.T) { }, }, { description: "context taskRunName replacement with no defined taskName in spec container", - rtr: resources.ResolvedTaskResources{}, tr: v1beta1.TaskRun{}, spec: v1beta1.TaskSpec{ Steps: []v1beta1.Step{{ @@ -1047,9 +1034,7 @@ func TestContext(t *testing.T) { }, }, { description: "context UID replacement", - rtr: resources.ResolvedTaskResources{ - TaskName: "Task1", - }, + taskName: "Task1", tr: v1beta1.TaskRun{ ObjectMeta: metav1.ObjectMeta{ UID: "UID-1", @@ -1073,7 +1058,6 @@ func TestContext(t *testing.T) { }, }, { description: "context retry count replacement", - rtr: resources.ResolvedTaskResources{}, tr: v1beta1.TaskRun{ Status: v1beta1.TaskRunStatus{ TaskRunStatusFields: v1beta1.TaskRunStatusFields{ @@ -1113,7 +1097,6 @@ func TestContext(t *testing.T) { }, }, { description: "context retry count replacement with task that never retries", - rtr: resources.ResolvedTaskResources{}, tr: v1beta1.TaskRun{}, spec: v1beta1.TaskSpec{ Steps: []v1beta1.Step{{ @@ -1133,7 +1116,7 @@ func TestContext(t *testing.T) { }, }} { t.Run(tc.description, func(t *testing.T) { - got := resources.ApplyContexts(&tc.spec, &tc.rtr, &tc.tr) + got := resources.ApplyContexts(&tc.spec, tc.taskName, &tc.tr) if d := cmp.Diff(&tc.want, got); d != "" { t.Errorf(diff.PrintWantGot(d)) } diff --git a/pkg/reconciler/taskrun/taskrun.go b/pkg/reconciler/taskrun/taskrun.go index 7f2d0d46700..4e9994842bb 100644 --- a/pkg/reconciler/taskrun/taskrun.go +++ b/pkg/reconciler/taskrun/taskrun.go @@ -39,7 +39,6 @@ import ( resourcelisters "github.com/tektoncd/pipeline/pkg/client/resource/listers/resource/v1alpha1" "github.com/tektoncd/pipeline/pkg/clock" "github.com/tektoncd/pipeline/pkg/internal/affinityassistant" - "github.com/tektoncd/pipeline/pkg/internal/deprecated" "github.com/tektoncd/pipeline/pkg/internal/limitrange" podconvert "github.com/tektoncd/pipeline/pkg/pod" tknreconciler "github.com/tektoncd/pipeline/pkg/reconciler" @@ -137,14 +136,6 @@ func (c *Reconciler) 
ReconcileKind(ctx context.Context, tr *v1beta1.TaskRun) pkg return err } - go func(metrics *taskrunmetrics.Recorder) { - if err := metrics.DurationAndCount(tr); err != nil { - logger.Warnf("Failed to log the metrics : %v", err) - } - if err := metrics.CloudEvents(tr); err != nil { - logger.Warnf("Failed to log the metrics : %v", err) - } - }(c.metrics) return c.finishReconcileUpdateEmitEvents(ctx, tr, before, nil) } @@ -195,6 +186,26 @@ func (c *Reconciler) ReconcileKind(ctx context.Context, tr *v1beta1.TaskRun) pkg } return nil } + +func (c *Reconciler) durationAndCountMetrics(ctx context.Context, tr *v1beta1.TaskRun) { + logger := logging.FromContext(ctx) + if tr.IsDone() { + newTr, err := c.taskRunLister.TaskRuns(tr.Namespace).Get(tr.Name) + if err != nil { + logger.Errorf("Error getting TaskRun %s when updating metrics: %w", tr.Name, err) + } + before := newTr.Status.GetCondition(apis.ConditionSucceeded) + go func(metrics *taskrunmetrics.Recorder) { + if err := metrics.DurationAndCount(tr, before); err != nil { + logger.Warnf("Failed to log the metrics : %v", err) + } + if err := metrics.CloudEvents(tr); err != nil { + logger.Warnf("Failed to log the metrics : %v", err) + } + }(c.metrics) + } +} + func (c *Reconciler) stopSidecars(ctx context.Context, tr *v1beta1.TaskRun) error { logger := logging.FromContext(ctx) // do not continue without knowing the associated pod @@ -365,6 +376,7 @@ func (c *Reconciler) prepare(ctx context.Context, tr *v1beta1.TaskRun) (*v1beta1 // error but it does not sync updates back to etcd. It does not emit events. // `reconcile` consumes spec and resources returned by `prepare` func (c *Reconciler) reconcile(ctx context.Context, tr *v1beta1.TaskRun, rtr *resources.ResolvedTaskResources) error { + defer c.durationAndCountMetrics(ctx, tr) logger := logging.FromContext(ctx) recorder := controller.GetEventRecorder(ctx) // Get the TaskRun's Pod if it should have one. Otherwise, create the Pod. 
@@ -645,7 +657,7 @@ func (c *Reconciler) createPod(ctx context.Context, tr *v1beta1.TaskRun, rtr *re ts = resources.ApplyParameters(ts, tr, defaults...) // Apply context substitution from the taskrun - ts = resources.ApplyContexts(ts, rtr, tr) + ts = resources.ApplyContexts(ts, rtr.TaskName, tr) // Apply bound resource substitution from the taskrun. ts = resources.ApplyResources(ts, inputResources, "inputs") @@ -685,8 +697,6 @@ func (c *Reconciler) createPod(ctx context.Context, tr *v1beta1.TaskRun, rtr *re pod, err := podbuilder.Build(ctx, tr, *ts, limitrange.NewTransformer(ctx, tr.Namespace, c.limitrangeLister), affinityassistant.NewTransformer(ctx, tr.Annotations), - deprecated.NewOverrideWorkingDirTransformer(ctx), - deprecated.NewOverrideHomeTransformer(ctx), ) if err != nil { return nil, fmt.Errorf("translating TaskSpec to Pod: %w", err) diff --git a/pkg/reconciler/taskrun/taskrun_test.go b/pkg/reconciler/taskrun/taskrun_test.go index 4f92b5d47f1..e9ca48381a5 100644 --- a/pkg/reconciler/taskrun/taskrun_test.go +++ b/pkg/reconciler/taskrun/taskrun_test.go @@ -711,140 +711,6 @@ func TestReconcile_ExplicitDefaultSA(t *testing.T) { } } -// TestReconcile_FeatureFlags tests taskruns with and without feature flags set -// to ensure the 'feature-flags' config map can be used to disable the -// corresponding behavior. 
-func TestReconcile_FeatureFlags(t *testing.T) { - taskWithEnvVar := &v1beta1.Task{ - ObjectMeta: objectMeta("test-task-with-env-var", "foo"), - Spec: v1beta1.TaskSpec{ - Steps: []v1beta1.Step{{ - Container: corev1.Container{ - Image: "foo", - Name: "simple-step", - Command: []string{"/mycmd"}, - Env: []corev1.EnvVar{{ - Name: "foo", - Value: "bar", - }}, - }, - }}, - }, - } - taskRunWithDisableHomeEnv := &v1beta1.TaskRun{ - ObjectMeta: objectMeta("test-taskrun-run-home-env", "foo"), - Spec: v1beta1.TaskRunSpec{ - TaskRef: &v1beta1.TaskRef{ - Name: taskWithEnvVar.Name, - }, - }, - } - taskRunWithDisableWorkingDirOverwrite := &v1beta1.TaskRun{ - ObjectMeta: objectMeta("test-taskrun-run-working-dir", "foo"), - Spec: v1beta1.TaskRunSpec{ - TaskRef: &v1beta1.TaskRef{ - Name: simpleTask.Name, - }, - }, - } - d := test.Data{ - TaskRuns: []*v1beta1.TaskRun{taskRunWithDisableHomeEnv, taskRunWithDisableWorkingDirOverwrite}, - Tasks: []*v1beta1.Task{simpleTask, taskWithEnvVar}, - } - for _, tc := range []struct { - name string - taskRun *v1beta1.TaskRun - featureFlag string - wantPod *corev1.Pod - }{{ - name: "disable-home-env-overwrite", - taskRun: taskRunWithDisableHomeEnv, - featureFlag: "disable-home-env-overwrite", - wantPod: expectedPod("test-taskrun-run-home-env-pod", "test-task-with-env-var", "test-taskrun-run-home-env", "foo", config.DefaultServiceAccountValue, false, nil, []stepForExpectedPod{{ - image: "foo", - name: "simple-step", - cmd: "/mycmd", - envVars: map[string]string{"foo": "bar"}, - }}), - }, { - name: "disable-working-dir-overwrite", - taskRun: taskRunWithDisableWorkingDirOverwrite, - featureFlag: "disable-working-directory-overwrite", - wantPod: expectedPod("test-taskrun-run-working-dir-pod", "test-task", "test-taskrun-run-working-dir", "foo", config.DefaultServiceAccountValue, false, nil, []stepForExpectedPod{{ - image: "foo", - name: "simple-step", - cmd: "/mycmd", - }}), - }} { - t.Run(tc.name, func(t *testing.T) { - d.ConfigMaps = 
[]*corev1.ConfigMap{ - { - ObjectMeta: metav1.ObjectMeta{Name: config.GetFeatureFlagsConfigName(), Namespace: system.Namespace()}, - Data: map[string]string{ - tc.featureFlag: "true", - }, - }, - } - testAssets, cancel := getTaskRunController(t, d) - defer cancel() - c := testAssets.Controller - clients := testAssets.Clients - saName := tc.taskRun.Spec.ServiceAccountName - if saName == "" { - saName = "default" - } - if _, err := clients.Kube.CoreV1().ServiceAccounts(tc.taskRun.Namespace).Create(testAssets.Ctx, &corev1.ServiceAccount{ - ObjectMeta: metav1.ObjectMeta{ - Name: saName, - Namespace: tc.taskRun.Namespace, - }, - }, metav1.CreateOptions{}); err != nil { - t.Fatal(err) - } - if err := c.Reconciler.Reconcile(testAssets.Ctx, getRunName(tc.taskRun)); err == nil { - t.Error("Wanted a wrapped requeue error, but got nil.") - } else if ok, _ := controller.IsRequeueKey(err); !ok { - t.Errorf("expected no error. Got error %v", err) - } - if len(clients.Kube.Actions()) == 0 { - t.Errorf("Expected actions to be logged in the kubeclient, got none") - } - - tr, err := clients.Pipeline.TektonV1beta1().TaskRuns(tc.taskRun.Namespace).Get(testAssets.Ctx, tc.taskRun.Name, metav1.GetOptions{}) - if err != nil { - t.Fatalf("getting updated taskrun: %v", err) - } - condition := tr.Status.GetCondition(apis.ConditionSucceeded) - if condition == nil || condition.Status != corev1.ConditionUnknown { - t.Errorf("Expected invalid TaskRun to have in progress status, but had %v", condition) - } - if condition != nil && condition.Reason != v1beta1.TaskRunReasonRunning.String() { - t.Errorf("Expected reason %q but was %s", v1beta1.TaskRunReasonRunning.String(), condition.Reason) - } - - if tr.Status.PodName == "" { - t.Fatalf("Reconcile didn't set pod name") - } - - pod, err := clients.Kube.CoreV1().Pods(tr.Namespace).Get(testAssets.Ctx, tr.Status.PodName, metav1.GetOptions{}) - if err != nil { - t.Fatalf("Failed to fetch build pod: %v", err) - } - - if d := 
cmp.Diff(tc.wantPod.ObjectMeta, pod.ObjectMeta, ignoreRandomPodNameSuffix); d != "" { - t.Errorf("Pod metadata doesn't match %s", diff.PrintWantGot(d)) - } - - if d := cmp.Diff(tc.wantPod.Spec, pod.Spec, resourceQuantityCmp, volumeSort, volumeMountSort, ignoreEnvVarOrdering); d != "" { - t.Errorf("Pod spec doesn't match, %s", diff.PrintWantGot(d)) - } - if len(clients.Kube.Actions()) == 0 { - t.Fatalf("Expected actions to be logged in the kubeclient, got none") - } - }) - } -} - // TestReconcile_CloudEvents runs reconcile with a cloud event sink configured // to ensure that events are sent in different cases func TestReconcile_CloudEvents(t *testing.T) { diff --git a/pkg/taskrunmetrics/metrics.go b/pkg/taskrunmetrics/metrics.go index ef8ed87ba52..2f540dd55a1 100644 --- a/pkg/taskrunmetrics/metrics.go +++ b/pkg/taskrunmetrics/metrics.go @@ -31,6 +31,7 @@ import ( "go.opencensus.io/tag" "go.uber.org/zap" corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/equality" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/labels" "knative.dev/pkg/apis" @@ -278,14 +279,20 @@ func nilInsertTag(task, taskrun string) []tag.Mutator { // DurationAndCount logs the duration of TaskRun execution and // count for number of TaskRuns succeed or failed // returns an error if its failed to log the metrics -func (r *Recorder) DurationAndCount(tr *v1beta1.TaskRun) error { - r.mutex.Lock() - defer r.mutex.Unlock() +func (r *Recorder) DurationAndCount(tr *v1beta1.TaskRun, beforeCondition *apis.Condition) error { if !r.initialized { return fmt.Errorf("ignoring the metrics recording for %s , failed to initialize the metrics recorder", tr.Name) } + afterCondition := tr.Status.GetCondition(apis.ConditionSucceeded) + if equality.Semantic.DeepEqual(beforeCondition, afterCondition) { + return nil + } + + r.mutex.Lock() + defer r.mutex.Unlock() + duration := time.Since(tr.Status.StartTime.Time) if tr.Status.CompletionTime != nil { duration = 
tr.Status.CompletionTime.Sub(tr.Status.StartTime.Time) diff --git a/pkg/taskrunmetrics/metrics_test.go b/pkg/taskrunmetrics/metrics_test.go index 143846223a4..4bb8a93db7c 100644 --- a/pkg/taskrunmetrics/metrics_test.go +++ b/pkg/taskrunmetrics/metrics_test.go @@ -59,7 +59,12 @@ func getConfigContext() context.Context { func TestUninitializedMetrics(t *testing.T) { metrics := Recorder{} - if err := metrics.DurationAndCount(&v1beta1.TaskRun{}); err == nil { + beforeCondition := &apis.Condition{ + Type: apis.ConditionReady, + Status: corev1.ConditionUnknown, + } + + if err := metrics.DurationAndCount(&v1beta1.TaskRun{}, beforeCondition); err == nil { t.Error("DurationCount recording expected to return error but got nil") } if err := metrics.RunningTaskRuns(nil); err == nil { @@ -125,15 +130,16 @@ func TestMetricsOnStore(t *testing.T) { func TestRecordTaskRunDurationCount(t *testing.T) { for _, c := range []struct { - name string - taskRun *v1beta1.TaskRun - metricName string // "taskrun_duration_seconds" or "pipelinerun_taskrun_duration_seconds" - expectedTags map[string]string - expectedCountTags map[string]string - expectedDuration float64 - expectedCount int64 + name string + taskRun *v1beta1.TaskRun + metricName string // "taskrun_duration_seconds" or "pipelinerun_taskrun_duration_seconds" + expectedDurationTags map[string]string + expectedCountTags map[string]string + expectedDuration float64 + expectedCount int64 + beforeCondition *apis.Condition }{{ - name: "for succeeded task", + name: "for succeeded taskrun", taskRun: &v1beta1.TaskRun{ ObjectMeta: metav1.ObjectMeta{Name: "taskrun-1", Namespace: "ns"}, Spec: v1beta1.TaskRunSpec{ @@ -153,7 +159,7 @@ func TestRecordTaskRunDurationCount(t *testing.T) { }, }, metricName: "taskrun_duration_seconds", - expectedTags: map[string]string{ + expectedDurationTags: map[string]string{ "task": "task-1", "taskrun": "taskrun-1", "namespace": "ns", @@ -164,8 +170,74 @@ func TestRecordTaskRunDurationCount(t *testing.T) { }, 
expectedDuration: 60, expectedCount: 1, + beforeCondition: nil, }, { - name: "for failed task", + name: "for succeeded taskrun with before condition", + taskRun: &v1beta1.TaskRun{ + ObjectMeta: metav1.ObjectMeta{Name: "taskrun-1", Namespace: "ns"}, + Spec: v1beta1.TaskRunSpec{ + TaskRef: &v1beta1.TaskRef{Name: "task-1"}, + }, + Status: v1beta1.TaskRunStatus{ + Status: duckv1beta1.Status{ + Conditions: duckv1beta1.Conditions{{ + Type: apis.ConditionSucceeded, + Status: corev1.ConditionTrue, + }}, + }, + TaskRunStatusFields: v1beta1.TaskRunStatusFields{ + StartTime: &startTime, + CompletionTime: &completionTime, + }, + }, + }, + metricName: "taskrun_duration_seconds", + expectedDurationTags: map[string]string{ + "task": "task-1", + "taskrun": "taskrun-1", + "namespace": "ns", + "status": "success", + }, + expectedCountTags: map[string]string{ + "status": "success", + }, + expectedDuration: 60, + expectedCount: 1, + beforeCondition: &apis.Condition{ + Type: apis.ConditionReady, + Status: corev1.ConditionUnknown, + }, + }, { + name: "for succeeded taskrun recount", + taskRun: &v1beta1.TaskRun{ + ObjectMeta: metav1.ObjectMeta{Name: "taskrun-1", Namespace: "ns"}, + Spec: v1beta1.TaskRunSpec{ + TaskRef: &v1beta1.TaskRef{Name: "task-1"}, + }, + Status: v1beta1.TaskRunStatus{ + Status: duckv1beta1.Status{ + Conditions: duckv1beta1.Conditions{{ + Type: apis.ConditionSucceeded, + Status: corev1.ConditionTrue, + }}, + }, + TaskRunStatusFields: v1beta1.TaskRunStatusFields{ + StartTime: &startTime, + CompletionTime: &completionTime, + }, + }, + }, + metricName: "taskrun_duration_seconds", + expectedDurationTags: nil, + expectedCountTags: nil, + expectedDuration: 0, + expectedCount: 0, + beforeCondition: &apis.Condition{ + Type: apis.ConditionSucceeded, + Status: corev1.ConditionTrue, + }, + }, { + name: "for failed taskrun", taskRun: &v1beta1.TaskRun{ ObjectMeta: metav1.ObjectMeta{Name: "taskrun-1", Namespace: "ns"}, Spec: v1beta1.TaskRunSpec{ @@ -185,7 +257,7 @@ func 
TestRecordTaskRunDurationCount(t *testing.T) { }, }, metricName: "taskrun_duration_seconds", - expectedTags: map[string]string{ + expectedDurationTags: map[string]string{ "task": "task-1", "taskrun": "taskrun-1", "namespace": "ns", @@ -196,6 +268,7 @@ func TestRecordTaskRunDurationCount(t *testing.T) { }, expectedDuration: 60, expectedCount: 1, + beforeCondition: nil, }, { name: "for succeeded taskrun in pipelinerun", taskRun: &v1beta1.TaskRun{ @@ -223,7 +296,7 @@ func TestRecordTaskRunDurationCount(t *testing.T) { }, }, metricName: "pipelinerun_taskrun_duration_seconds", - expectedTags: map[string]string{ + expectedDurationTags: map[string]string{ "pipeline": "pipeline-1", "pipelinerun": "pipelinerun-1", "task": "task-1", @@ -236,6 +309,7 @@ func TestRecordTaskRunDurationCount(t *testing.T) { }, expectedDuration: 60, expectedCount: 1, + beforeCondition: nil, }, { name: "for failed taskrun in pipelinerun", taskRun: &v1beta1.TaskRun{ @@ -263,7 +337,7 @@ func TestRecordTaskRunDurationCount(t *testing.T) { }, }, metricName: "pipelinerun_taskrun_duration_seconds", - expectedTags: map[string]string{ + expectedDurationTags: map[string]string{ "pipeline": "pipeline-1", "pipelinerun": "pipelinerun-1", "task": "task-1", @@ -276,6 +350,7 @@ func TestRecordTaskRunDurationCount(t *testing.T) { }, expectedDuration: 60, expectedCount: 1, + beforeCondition: nil, }} { t.Run(c.name, func(t *testing.T) { unregisterMetrics() @@ -286,11 +361,20 @@ func TestRecordTaskRunDurationCount(t *testing.T) { t.Fatalf("NewRecorder: %v", err) } - if err := metrics.DurationAndCount(c.taskRun); err != nil { + if err := metrics.DurationAndCount(c.taskRun, c.beforeCondition); err != nil { t.Errorf("DurationAndCount: %v", err) } - metricstest.CheckLastValueData(t, c.metricName, c.expectedTags, c.expectedDuration) - metricstest.CheckCountData(t, "taskrun_count", c.expectedCountTags, c.expectedCount) + if c.expectedCountTags != nil { + metricstest.CheckCountData(t, "taskrun_count", c.expectedCountTags, 
c.expectedCount) + } else { + metricstest.CheckStatsNotReported(t, "taskrun_count") + } + if c.expectedDurationTags != nil { + metricstest.CheckLastValueData(t, c.metricName, c.expectedDurationTags, c.expectedDuration) + } else { + metricstest.CheckStatsNotReported(t, c.metricName) + + } }) } } diff --git a/pkg/termination/write_test.go b/pkg/termination/write_test.go index 7d5ca1ad76d..3ba07468cce 100644 --- a/pkg/termination/write_test.go +++ b/pkg/termination/write_test.go @@ -84,6 +84,6 @@ func TestMaxSizeFile(t *testing.T) { }} if err := WriteMessage(tmpFile.Name(), output); !errors.Is(err, aboveMax) { - t.Fatalf("Expected MessageLengthError, receved: %v", err) + t.Fatalf("Expected MessageLengthError, received: %v", err) } } diff --git a/third_party/LICENSE b/third_party/LICENSE new file mode 100644 index 00000000000..6a66aea5eaf --- /dev/null +++ b/third_party/LICENSE @@ -0,0 +1,27 @@ +Copyright (c) 2009 The Go Authors. All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/third_party/vendor/golang.org/x/crypto/LICENSE b/third_party/vendor/golang.org/x/crypto/LICENSE new file mode 100644 index 00000000000..6a66aea5eaf --- /dev/null +++ b/third_party/vendor/golang.org/x/crypto/LICENSE @@ -0,0 +1,27 @@ +Copyright (c) 2009 The Go Authors. All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/third_party/vendor/golang.org/x/net/LICENSE b/third_party/vendor/golang.org/x/net/LICENSE new file mode 100644 index 00000000000..6a66aea5eaf --- /dev/null +++ b/third_party/vendor/golang.org/x/net/LICENSE @@ -0,0 +1,27 @@ +Copyright (c) 2009 The Go Authors. All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/third_party/vendor/golang.org/x/sys/cpu/LICENSE b/third_party/vendor/golang.org/x/sys/cpu/LICENSE new file mode 100644 index 00000000000..6a66aea5eaf --- /dev/null +++ b/third_party/vendor/golang.org/x/sys/cpu/LICENSE @@ -0,0 +1,27 @@ +Copyright (c) 2009 The Go Authors. All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/third_party/vendor/golang.org/x/text/LICENSE b/third_party/vendor/golang.org/x/text/LICENSE new file mode 100644 index 00000000000..6a66aea5eaf --- /dev/null +++ b/third_party/vendor/golang.org/x/text/LICENSE @@ -0,0 +1,27 @@ +Copyright (c) 2009 The Go Authors. All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/github.com/google/go-containerregistry/pkg/authn/keychain.go b/vendor/github.com/google/go-containerregistry/pkg/authn/keychain.go index f79b7260e85..2020c41c17d 100644 --- a/vendor/github.com/google/go-containerregistry/pkg/authn/keychain.go +++ b/vendor/github.com/google/go-containerregistry/pkg/authn/keychain.go @@ -101,10 +101,8 @@ func (dk *defaultKeychain) Resolve(target Resource) (Authenticator, error) { } } else { f, err := os.Open(filepath.Join(os.Getenv("XDG_RUNTIME_DIR"), "containers/auth.json")) - if os.IsNotExist(err) { + if err != nil { return Anonymous, nil - } else if err != nil { - return nil, err } defer f.Close() cf, err = config.LoadFromReader(f) @@ -156,9 +154,14 @@ func NewKeychainFromHelper(h Helper) Keychain { return wrapper{h} } type wrapper struct{ h Helper } func (w wrapper) Resolve(r Resource) (Authenticator, error) { - u, p, err := w.h.Get(r.String()) + u, p, err := w.h.Get(r.RegistryStr()) if err != nil { return Anonymous, nil } + // If the secret being stored is an identity token, the Username should be set to + // ref: https://docs.docker.com/engine/reference/commandline/login/#credential-helper-protocol + if u == "" { + return FromConfig(AuthConfig{Username: u, IdentityToken: p}), nil + } return FromConfig(AuthConfig{Username: u, Password: p}), nil } diff --git a/vendor/github.com/google/go-containerregistry/pkg/v1/google/auth.go b/vendor/github.com/google/go-containerregistry/pkg/v1/google/auth.go 
index 4ce979577bf..343eae0bc8c 100644 --- a/vendor/github.com/google/go-containerregistry/pkg/v1/google/auth.go +++ b/vendor/github.com/google/go-containerregistry/pkg/v1/google/auth.go @@ -19,7 +19,6 @@ import ( "context" "encoding/json" "fmt" - "os" "os/exec" "time" @@ -155,7 +154,7 @@ func (gs gcloudSource) Token() (*oauth2.Token, error) { cmd.Stdout = &out // Don't attempt to interpret stderr, just pass it through. - cmd.Stderr = os.Stderr + cmd.Stderr = logs.Warn.Writer() if err := cmd.Run(); err != nil { return nil, fmt.Errorf("error executing `gcloud config config-helper`: %w", err) diff --git a/vendor/github.com/google/go-containerregistry/pkg/v1/google/keychain.go b/vendor/github.com/google/go-containerregistry/pkg/v1/google/keychain.go index 7471a01734a..482cf4a9137 100644 --- a/vendor/github.com/google/go-containerregistry/pkg/v1/google/keychain.go +++ b/vendor/github.com/google/go-containerregistry/pkg/v1/google/keychain.go @@ -15,11 +15,11 @@ package google import ( - "fmt" "strings" "sync" "github.com/google/go-containerregistry/pkg/authn" + "github.com/google/go-containerregistry/pkg/logs" ) // Keychain exports an instance of the google Keychain. @@ -28,7 +28,6 @@ var Keychain authn.Keychain = &googleKeychain{} type googleKeychain struct { once sync.Once auth authn.Authenticator - err error } // Resolve implements authn.Keychain a la docker-credential-gcr. @@ -55,27 +54,37 @@ type googleKeychain struct { func (gk *googleKeychain) Resolve(target authn.Resource) (authn.Authenticator, error) { // Only authenticate GCR and AR so it works with authn.NewMultiKeychain to fallback. 
host := target.RegistryStr() - if host != "gcr.io" && !strings.HasSuffix(host, ".gcr.io") && !strings.HasSuffix(host, ".pkg.dev") && !strings.HasSuffix(host, ".google.com") { + if host != "gcr.io" && + !strings.HasSuffix(host, ".gcr.io") && + !strings.HasSuffix(host, ".pkg.dev") && + !strings.HasSuffix(host, ".google.com") { return authn.Anonymous, nil } gk.once.Do(func() { - gk.auth, gk.err = resolve() + gk.auth = resolve() }) - return gk.auth, gk.err + return gk.auth, nil } -func resolve() (authn.Authenticator, error) { +func resolve() authn.Authenticator { auth, envErr := NewEnvAuthenticator() - if envErr == nil { - return auth, nil + if envErr == nil && auth != authn.Anonymous { + return auth } auth, gErr := NewGcloudAuthenticator() - if gErr == nil { - return auth, nil + if gErr == nil && auth != authn.Anonymous { + return auth } - return nil, fmt.Errorf("failed to create token source from env: %v or gcloud: %v", envErr, gErr) //nolint: errorlint + logs.Debug.Println("Failed to get any Google credentials, falling back to Anonymous") + if envErr != nil { + logs.Debug.Printf("Google env error: %v", envErr) + } + if gErr != nil { + logs.Debug.Printf("gcloud error: %v", gErr) + } + return authn.Anonymous } diff --git a/vendor/github.com/google/go-containerregistry/pkg/v1/layout/write.go b/vendor/github.com/google/go-containerregistry/pkg/v1/layout/write.go index 4c580e253a2..7c54e5f58bc 100644 --- a/vendor/github.com/google/go-containerregistry/pkg/v1/layout/write.go +++ b/vendor/github.com/google/go-containerregistry/pkg/v1/layout/write.go @@ -228,7 +228,7 @@ func (l Path) WriteBlob(hash v1.Hash, r io.ReadCloser) error { return l.writeBlob(hash, -1, r, nil) } -func (l Path) writeBlob(hash v1.Hash, size int64, r io.Reader, renamer func() (v1.Hash, error)) error { +func (l Path) writeBlob(hash v1.Hash, size int64, rc io.ReadCloser, renamer func() (v1.Hash, error)) error { if hash.Hex == "" && renamer == nil { panic("writeBlob called an invalid hash and no renamer") 
} @@ -264,12 +264,21 @@ func (l Path) writeBlob(hash v1.Hash, size int64, r io.Reader, renamer func() (v defer w.Close() // Write to file and exit if not renaming - if n, err := io.Copy(w, r); err != nil || renamer == nil { + if n, err := io.Copy(w, rc); err != nil || renamer == nil { return err } else if size != -1 && n != size { return fmt.Errorf("expected blob size %d, but only wrote %d", size, n) } + // Always close reader before renaming, since Close computes the digest in + // the case of streaming layers. If Close is not called explicitly, it will + // occur in a goroutine that is not guaranteed to succeed before renamer is + // called. When renamer is the layer's Digest method, it can return + // ErrNotComputed. + if err := rc.Close(); err != nil { + return err + } + // Always close file before renaming if err := w.Close(); err != nil { return err diff --git a/vendor/github.com/google/go-containerregistry/pkg/v1/platform.go b/vendor/github.com/google/go-containerregistry/pkg/v1/platform.go index b52f163bf4d..9ee91ee292a 100644 --- a/vendor/github.com/google/go-containerregistry/pkg/v1/platform.go +++ b/vendor/github.com/google/go-containerregistry/pkg/v1/platform.go @@ -15,7 +15,9 @@ package v1 import ( + "fmt" "sort" + "strings" ) // Platform represents the target os/arch for an image. @@ -28,11 +30,59 @@ type Platform struct { Features []string `json:"features,omitempty"` } +func (p Platform) String() string { + if p.OS == "" { + return "" + } + var b strings.Builder + b.WriteString(p.OS) + if p.Architecture != "" { + b.WriteString("/") + b.WriteString(p.Architecture) + } + if p.Variant != "" { + b.WriteString("/") + b.WriteString(p.Variant) + } + if p.OSVersion != "" { + b.WriteString(":") + b.WriteString(p.OSVersion) + } + return b.String() +} + +// ParsePlatform parses a string representing a Platform, if possible. 
+func ParsePlatform(s string) (*Platform, error) { + var p Platform + parts := strings.Split(strings.TrimSpace(s), ":") + if len(parts) == 2 { + p.OSVersion = parts[1] + } + parts = strings.Split(parts[0], "/") + if len(parts) > 0 { + p.OS = parts[0] + } + if len(parts) > 1 { + p.Architecture = parts[1] + } + if len(parts) > 2 { + p.Variant = parts[2] + } + if len(parts) > 3 { + return nil, fmt.Errorf("too many slashes in platform spec: %s", s) + } + return &p, nil +} + // Equals returns true if the given platform is semantically equivalent to this one. // The order of Features and OSFeatures is not important. func (p Platform) Equals(o Platform) bool { - return p.OS == o.OS && p.Architecture == o.Architecture && p.Variant == o.Variant && p.OSVersion == o.OSVersion && - stringSliceEqualIgnoreOrder(p.OSFeatures, o.OSFeatures) && stringSliceEqualIgnoreOrder(p.Features, o.Features) + return p.OS == o.OS && + p.Architecture == o.Architecture && + p.Variant == o.Variant && + p.OSVersion == o.OSVersion && + stringSliceEqualIgnoreOrder(p.OSFeatures, o.OSFeatures) && + stringSliceEqualIgnoreOrder(p.Features, o.Features) } // stringSliceEqual compares 2 string slices and returns if their contents are identical. diff --git a/vendor/github.com/google/go-containerregistry/pkg/v1/remote/options.go b/vendor/github.com/google/go-containerregistry/pkg/v1/remote/options.go index 3ed1d7dd076..919013b1a55 100644 --- a/vendor/github.com/google/go-containerregistry/pkg/v1/remote/options.go +++ b/vendor/github.com/google/go-containerregistry/pkg/v1/remote/options.go @@ -59,7 +59,7 @@ type Backoff = retry.Backoff var defaultRetryPredicate retry.Predicate = func(err error) bool { // Various failure modes here, as we're often reading from and writing to // the network. 
- if retry.IsTemporary(err) || errors.Is(err, io.ErrUnexpectedEOF) || errors.Is(err, syscall.EPIPE) { + if retry.IsTemporary(err) || errors.Is(err, io.ErrUnexpectedEOF) || errors.Is(err, io.EOF) || errors.Is(err, syscall.EPIPE) { logs.Warn.Printf("retrying %v", err) return true } diff --git a/vendor/github.com/google/go-containerregistry/pkg/v1/remote/transport/ping.go b/vendor/github.com/google/go-containerregistry/pkg/v1/remote/transport/ping.go index 897aa703c9a..29c36afe7c5 100644 --- a/vendor/github.com/google/go-containerregistry/pkg/v1/remote/transport/ping.go +++ b/vendor/github.com/google/go-containerregistry/pkg/v1/remote/transport/ping.go @@ -79,7 +79,7 @@ func ping(ctx context.Context, reg name.Registry, t http.RoundTripper) (*pingRes schemes = append(schemes, "http") } - var errs []string + var errs []error for _, scheme := range schemes { url := fmt.Sprintf("%s://%s/v2/", scheme, reg.Name()) req, err := http.NewRequest(http.MethodGet, url, nil) @@ -88,7 +88,7 @@ func ping(ctx context.Context, reg name.Registry, t http.RoundTripper) (*pingRes } resp, err := client.Do(req.WithContext(ctx)) if err != nil { - errs = append(errs, err.Error()) + errs = append(errs, err) // Potentially retry with http. 
continue } @@ -125,7 +125,7 @@ func ping(ctx context.Context, reg name.Registry, t http.RoundTripper) (*pingRes return nil, CheckError(resp, http.StatusOK, http.StatusUnauthorized) } } - return nil, errors.New(strings.Join(errs, "; ")) + return nil, multierrs(errs) } func pickFromMultipleChallenges(challenges []authchallenge.Challenge) authchallenge.Challenge { @@ -145,3 +145,36 @@ func pickFromMultipleChallenges(challenges []authchallenge.Challenge) authchalle return challenges[0] } + +type multierrs []error + +func (m multierrs) Error() string { + var b strings.Builder + hasWritten := false + for _, err := range m { + if hasWritten { + b.WriteString("; ") + } + hasWritten = true + b.WriteString(err.Error()) + } + return b.String() +} + +func (m multierrs) As(target interface{}) bool { + for _, err := range m { + if errors.As(err, target) { + return true + } + } + return false +} + +func (m multierrs) Is(target error) bool { + for _, err := range m { + if errors.Is(err, target) { + return true + } + } + return false +} diff --git a/vendor/github.com/google/go-containerregistry/pkg/v1/tarball/layer.go b/vendor/github.com/google/go-containerregistry/pkg/v1/tarball/layer.go index 5ec1d5515a0..ac9e14c761a 100644 --- a/vendor/github.com/google/go-containerregistry/pkg/v1/tarball/layer.go +++ b/vendor/github.com/google/go-containerregistry/pkg/v1/tarball/layer.go @@ -39,6 +39,7 @@ type layer struct { compression int annotations map[string]string estgzopts []estargz.Option + mediaType types.MediaType } // Descriptor implements partial.withDescriptor. 
@@ -51,7 +52,7 @@ func (l *layer) Descriptor() (*v1.Descriptor, error) { Size: l.size, Digest: digest, Annotations: l.annotations, - MediaType: types.DockerLayer, + MediaType: l.mediaType, }, nil } @@ -82,7 +83,7 @@ func (l *layer) Size() (int64, error) { // MediaType implements v1.Layer func (l *layer) MediaType() (types.MediaType, error) { - return types.DockerLayer, nil + return l.mediaType, nil } // LayerOption applies options to layer @@ -96,6 +97,13 @@ func WithCompressionLevel(level int) LayerOption { } } +// WithMediaType is a functional option for overriding the layer's media type. +func WithMediaType(mt types.MediaType) LayerOption { + return func(l *layer) { + l.mediaType = mt + } +} + // WithCompressedCaching is a functional option that overrides the // logic for accessing the compressed bytes to memoize the result // and avoid expensive repeated gzips. @@ -204,6 +212,7 @@ func LayerFromOpener(opener Opener, opts ...LayerOption) (v1.Layer, error) { layer := &layer{ compression: gzip.BestSpeed, annotations: make(map[string]string, 1), + mediaType: types.DockerLayer, } if estgz := os.Getenv("GGCR_EXPERIMENT_ESTARGZ"); estgz == "1" { diff --git a/vendor/github.com/google/go-containerregistry/pkg/v1/zz_deepcopy_generated.go b/vendor/github.com/google/go-containerregistry/pkg/v1/zz_deepcopy_generated.go index b32b8b77a12..0cb1586f1e3 100644 --- a/vendor/github.com/google/go-containerregistry/pkg/v1/zz_deepcopy_generated.go +++ b/vendor/github.com/google/go-containerregistry/pkg/v1/zz_deepcopy_generated.go @@ -1,3 +1,4 @@ +//go:build !ignore_autogenerated // +build !ignore_autogenerated // Copyright 2018 Google LLC All Rights Reserved. 
diff --git a/vendor/github.com/klauspost/compress/.goreleaser.yml b/vendor/github.com/klauspost/compress/.goreleaser.yml index c9014ce1da2..0af08e65e68 100644 --- a/vendor/github.com/klauspost/compress/.goreleaser.yml +++ b/vendor/github.com/klauspost/compress/.goreleaser.yml @@ -3,6 +3,7 @@ before: hooks: - ./gen.sh + - go install mvdan.cc/garble@latest builds: - @@ -31,6 +32,7 @@ builds: - mips64le goarm: - 7 + gobinary: garble - id: "s2d" binary: s2d @@ -57,6 +59,7 @@ builds: - mips64le goarm: - 7 + gobinary: garble - id: "s2sx" binary: s2sx @@ -84,6 +87,7 @@ builds: - mips64le goarm: - 7 + gobinary: garble archives: - diff --git a/vendor/github.com/klauspost/compress/README.md b/vendor/github.com/klauspost/compress/README.md index 3429879eb69..e8ff994f8bc 100644 --- a/vendor/github.com/klauspost/compress/README.md +++ b/vendor/github.com/klauspost/compress/README.md @@ -17,6 +17,13 @@ This package provides various compression algorithms. # changelog +* Jan 11, 2022 (v1.14.1) + * s2: Add stream index in [#462](https://github.com/klauspost/compress/pull/462) + * flate: Speed and efficiency improvements in [#439](https://github.com/klauspost/compress/pull/439) [#461](https://github.com/klauspost/compress/pull/461) [#455](https://github.com/klauspost/compress/pull/455) [#452](https://github.com/klauspost/compress/pull/452) [#458](https://github.com/klauspost/compress/pull/458) + * zstd: Performance improvement in [#420]( https://github.com/klauspost/compress/pull/420) [#456](https://github.com/klauspost/compress/pull/456) [#437](https://github.com/klauspost/compress/pull/437) [#467](https://github.com/klauspost/compress/pull/467) [#468](https://github.com/klauspost/compress/pull/468) + * zstd: add arm64 xxhash assembly in [#464](https://github.com/klauspost/compress/pull/464) + * Add garbled for binaries for s2 in [#445](https://github.com/klauspost/compress/pull/445) + * Aug 
30, 2021 (v1.13.5) * gz/zlib/flate: Alias stdlib errors [#425](https://github.com/klauspost/compress/pull/425) * s2: Add block support to commandline tools [#413](https://github.com/klauspost/compress/pull/413) @@ -432,6 +439,13 @@ For more information see my blog post on [Fast Linear Time Compression](http://b This is implemented on Go 1.7 as "Huffman Only" mode, though not exposed for gzip. +# Other packages + +Here are other packages of good quality and pure Go (no cgo wrappers or autoconverted code): + +* [github.com/pierrec/lz4](https://github.com/pierrec/lz4) - strong multithreaded LZ4 compression. +* [github.com/cosnicolaou/pbzip2](https://github.com/cosnicolaou/pbzip2) - multithreaded bzip2 decompression. +* [github.com/dsnet/compress](https://github.com/dsnet/compress) - brotli decompression, bzip2 writer. # license diff --git a/vendor/github.com/klauspost/compress/huff0/decompress.go b/vendor/github.com/klauspost/compress/huff0/decompress.go index 9b7cc8e97bb..2a06bd1a7e5 100644 --- a/vendor/github.com/klauspost/compress/huff0/decompress.go +++ b/vendor/github.com/klauspost/compress/huff0/decompress.go @@ -20,7 +20,7 @@ type dEntrySingle struct { // double-symbols decoding type dEntryDouble struct { - seq uint16 + seq [4]byte nBits uint8 len uint8 } @@ -753,23 +753,21 @@ func (d *Decoder) Decompress4X(dst, src []byte) ([]byte, error) { br[stream2].fillFast() val := br[stream].peekBitsFast(d.actualTableLog) - v := single[val&tlMask] - br[stream].advance(uint8(v.entry)) - buf[off+bufoff*stream] = uint8(v.entry >> 8) - val2 := br[stream2].peekBitsFast(d.actualTableLog) + v := single[val&tlMask] v2 := single[val2&tlMask] + br[stream].advance(uint8(v.entry)) br[stream2].advance(uint8(v2.entry)) + buf[off+bufoff*stream] = uint8(v.entry >> 8) buf[off+bufoff*stream2] = uint8(v2.entry >> 8) val = br[stream].peekBitsFast(d.actualTableLog) - v = single[val&tlMask] - br[stream].advance(uint8(v.entry)) - buf[off+bufoff*stream+1] = 
uint8(v.entry >> 8) - val2 = br[stream2].peekBitsFast(d.actualTableLog) + v = single[val&tlMask] v2 = single[val2&tlMask] + br[stream].advance(uint8(v.entry)) br[stream2].advance(uint8(v2.entry)) + buf[off+bufoff*stream+1] = uint8(v.entry >> 8) buf[off+bufoff*stream2+1] = uint8(v2.entry >> 8) } @@ -780,23 +778,21 @@ func (d *Decoder) Decompress4X(dst, src []byte) ([]byte, error) { br[stream2].fillFast() val := br[stream].peekBitsFast(d.actualTableLog) - v := single[val&tlMask] - br[stream].advance(uint8(v.entry)) - buf[off+bufoff*stream] = uint8(v.entry >> 8) - val2 := br[stream2].peekBitsFast(d.actualTableLog) + v := single[val&tlMask] v2 := single[val2&tlMask] + br[stream].advance(uint8(v.entry)) br[stream2].advance(uint8(v2.entry)) + buf[off+bufoff*stream] = uint8(v.entry >> 8) buf[off+bufoff*stream2] = uint8(v2.entry >> 8) val = br[stream].peekBitsFast(d.actualTableLog) - v = single[val&tlMask] - br[stream].advance(uint8(v.entry)) - buf[off+bufoff*stream+1] = uint8(v.entry >> 8) - val2 = br[stream2].peekBitsFast(d.actualTableLog) + v = single[val&tlMask] v2 = single[val2&tlMask] + br[stream].advance(uint8(v.entry)) br[stream2].advance(uint8(v2.entry)) + buf[off+bufoff*stream+1] = uint8(v.entry >> 8) buf[off+bufoff*stream2+1] = uint8(v2.entry >> 8) } @@ -914,7 +910,7 @@ func (d *Decoder) decompress4X8bit(dst, src []byte) ([]byte, error) { out := dst dstEvery := (dstSize + 3) / 4 - shift := (8 - d.actualTableLog) & 7 + shift := (56 + (8 - d.actualTableLog)) & 63 const tlSize = 1 << 8 single := d.dt.single[:tlSize] @@ -935,79 +931,91 @@ func (d *Decoder) decompress4X8bit(dst, src []byte) ([]byte, error) { // Interleave 2 decodes. 
const stream = 0 const stream2 = 1 - br[stream].fillFast() - br[stream2].fillFast() - - v := single[br[stream].peekByteFast()>>shift].entry + br1 := &br[stream] + br2 := &br[stream2] + br1.fillFast() + br2.fillFast() + + v := single[uint8(br1.value>>shift)].entry + v2 := single[uint8(br2.value>>shift)].entry + br1.bitsRead += uint8(v) + br1.value <<= v & 63 + br2.bitsRead += uint8(v2) + br2.value <<= v2 & 63 buf[off+bufoff*stream] = uint8(v >> 8) - br[stream].advance(uint8(v)) - - v2 := single[br[stream2].peekByteFast()>>shift].entry buf[off+bufoff*stream2] = uint8(v2 >> 8) - br[stream2].advance(uint8(v2)) - v = single[br[stream].peekByteFast()>>shift].entry + v = single[uint8(br1.value>>shift)].entry + v2 = single[uint8(br2.value>>shift)].entry + br1.bitsRead += uint8(v) + br1.value <<= v & 63 + br2.bitsRead += uint8(v2) + br2.value <<= v2 & 63 buf[off+bufoff*stream+1] = uint8(v >> 8) - br[stream].advance(uint8(v)) - - v2 = single[br[stream2].peekByteFast()>>shift].entry buf[off+bufoff*stream2+1] = uint8(v2 >> 8) - br[stream2].advance(uint8(v2)) - v = single[br[stream].peekByteFast()>>shift].entry + v = single[uint8(br1.value>>shift)].entry + v2 = single[uint8(br2.value>>shift)].entry + br1.bitsRead += uint8(v) + br1.value <<= v & 63 + br2.bitsRead += uint8(v2) + br2.value <<= v2 & 63 buf[off+bufoff*stream+2] = uint8(v >> 8) - br[stream].advance(uint8(v)) - - v2 = single[br[stream2].peekByteFast()>>shift].entry buf[off+bufoff*stream2+2] = uint8(v2 >> 8) - br[stream2].advance(uint8(v2)) - v = single[br[stream].peekByteFast()>>shift].entry - buf[off+bufoff*stream+3] = uint8(v >> 8) - br[stream].advance(uint8(v)) - - v2 = single[br[stream2].peekByteFast()>>shift].entry + v = single[uint8(br1.value>>shift)].entry + v2 = single[uint8(br2.value>>shift)].entry + br1.bitsRead += uint8(v) + br1.value <<= v & 63 + br2.bitsRead += uint8(v2) + br2.value <<= v2 & 63 buf[off+bufoff*stream2+3] = uint8(v2 >> 8) - br[stream2].advance(uint8(v2)) + buf[off+bufoff*stream+3] = uint8(v 
>> 8) } { const stream = 2 const stream2 = 3 - br[stream].fillFast() - br[stream2].fillFast() - - v := single[br[stream].peekByteFast()>>shift].entry + br1 := &br[stream] + br2 := &br[stream2] + br1.fillFast() + br2.fillFast() + + v := single[uint8(br1.value>>shift)].entry + v2 := single[uint8(br2.value>>shift)].entry + br1.bitsRead += uint8(v) + br1.value <<= v & 63 + br2.bitsRead += uint8(v2) + br2.value <<= v2 & 63 buf[off+bufoff*stream] = uint8(v >> 8) - br[stream].advance(uint8(v)) - - v2 := single[br[stream2].peekByteFast()>>shift].entry buf[off+bufoff*stream2] = uint8(v2 >> 8) - br[stream2].advance(uint8(v2)) - v = single[br[stream].peekByteFast()>>shift].entry + v = single[uint8(br1.value>>shift)].entry + v2 = single[uint8(br2.value>>shift)].entry + br1.bitsRead += uint8(v) + br1.value <<= v & 63 + br2.bitsRead += uint8(v2) + br2.value <<= v2 & 63 buf[off+bufoff*stream+1] = uint8(v >> 8) - br[stream].advance(uint8(v)) - - v2 = single[br[stream2].peekByteFast()>>shift].entry buf[off+bufoff*stream2+1] = uint8(v2 >> 8) - br[stream2].advance(uint8(v2)) - v = single[br[stream].peekByteFast()>>shift].entry + v = single[uint8(br1.value>>shift)].entry + v2 = single[uint8(br2.value>>shift)].entry + br1.bitsRead += uint8(v) + br1.value <<= v & 63 + br2.bitsRead += uint8(v2) + br2.value <<= v2 & 63 buf[off+bufoff*stream+2] = uint8(v >> 8) - br[stream].advance(uint8(v)) - - v2 = single[br[stream2].peekByteFast()>>shift].entry buf[off+bufoff*stream2+2] = uint8(v2 >> 8) - br[stream2].advance(uint8(v2)) - - v = single[br[stream].peekByteFast()>>shift].entry - buf[off+bufoff*stream+3] = uint8(v >> 8) - br[stream].advance(uint8(v)) - v2 = single[br[stream2].peekByteFast()>>shift].entry + v = single[uint8(br1.value>>shift)].entry + v2 = single[uint8(br2.value>>shift)].entry + br1.bitsRead += uint8(v) + br1.value <<= v & 63 + br2.bitsRead += uint8(v2) + br2.value <<= v2 & 63 buf[off+bufoff*stream2+3] = uint8(v2 >> 8) - br[stream2].advance(uint8(v2)) + buf[off+bufoff*stream+3] 
= uint8(v >> 8) } off += 4 @@ -1073,7 +1081,7 @@ func (d *Decoder) decompress4X8bit(dst, src []byte) ([]byte, error) { } // Read value and increment offset. - v := single[br.peekByteFast()>>shift].entry + v := single[uint8(br.value>>shift)].entry nBits := uint8(v) br.advance(nBits) bitsLeft -= int(nBits) @@ -1121,7 +1129,7 @@ func (d *Decoder) decompress4X8bitExactly(dst, src []byte) ([]byte, error) { out := dst dstEvery := (dstSize + 3) / 4 - const shift = 0 + const shift = 56 const tlSize = 1 << 8 const tlMask = tlSize - 1 single := d.dt.single[:tlSize] @@ -1145,37 +1153,41 @@ func (d *Decoder) decompress4X8bitExactly(dst, src []byte) ([]byte, error) { br[stream].fillFast() br[stream2].fillFast() - v := single[br[stream].peekByteFast()>>shift].entry + v := single[uint8(br[stream].value>>shift)].entry + v2 := single[uint8(br[stream2].value>>shift)].entry + br[stream].bitsRead += uint8(v) + br[stream].value <<= v & 63 + br[stream2].bitsRead += uint8(v2) + br[stream2].value <<= v2 & 63 buf[off+bufoff*stream] = uint8(v >> 8) - br[stream].advance(uint8(v)) - - v2 := single[br[stream2].peekByteFast()>>shift].entry buf[off+bufoff*stream2] = uint8(v2 >> 8) - br[stream2].advance(uint8(v2)) - v = single[br[stream].peekByteFast()>>shift].entry + v = single[uint8(br[stream].value>>shift)].entry + v2 = single[uint8(br[stream2].value>>shift)].entry + br[stream].bitsRead += uint8(v) + br[stream].value <<= v & 63 + br[stream2].bitsRead += uint8(v2) + br[stream2].value <<= v2 & 63 buf[off+bufoff*stream+1] = uint8(v >> 8) - br[stream].advance(uint8(v)) - - v2 = single[br[stream2].peekByteFast()>>shift].entry buf[off+bufoff*stream2+1] = uint8(v2 >> 8) - br[stream2].advance(uint8(v2)) - v = single[br[stream].peekByteFast()>>shift].entry + v = single[uint8(br[stream].value>>shift)].entry + v2 = single[uint8(br[stream2].value>>shift)].entry + br[stream].bitsRead += uint8(v) + br[stream].value <<= v & 63 + br[stream2].bitsRead += uint8(v2) + br[stream2].value <<= v2 & 63 
buf[off+bufoff*stream+2] = uint8(v >> 8) - br[stream].advance(uint8(v)) - - v2 = single[br[stream2].peekByteFast()>>shift].entry buf[off+bufoff*stream2+2] = uint8(v2 >> 8) - br[stream2].advance(uint8(v2)) - v = single[br[stream].peekByteFast()>>shift].entry + v = single[uint8(br[stream].value>>shift)].entry + v2 = single[uint8(br[stream2].value>>shift)].entry + br[stream].bitsRead += uint8(v) + br[stream].value <<= v & 63 + br[stream2].bitsRead += uint8(v2) + br[stream2].value <<= v2 & 63 buf[off+bufoff*stream+3] = uint8(v >> 8) - br[stream].advance(uint8(v)) - - v2 = single[br[stream2].peekByteFast()>>shift].entry buf[off+bufoff*stream2+3] = uint8(v2 >> 8) - br[stream2].advance(uint8(v2)) } { @@ -1184,37 +1196,41 @@ func (d *Decoder) decompress4X8bitExactly(dst, src []byte) ([]byte, error) { br[stream].fillFast() br[stream2].fillFast() - v := single[br[stream].peekByteFast()>>shift].entry + v := single[uint8(br[stream].value>>shift)].entry + v2 := single[uint8(br[stream2].value>>shift)].entry + br[stream].bitsRead += uint8(v) + br[stream].value <<= v & 63 + br[stream2].bitsRead += uint8(v2) + br[stream2].value <<= v2 & 63 buf[off+bufoff*stream] = uint8(v >> 8) - br[stream].advance(uint8(v)) - - v2 := single[br[stream2].peekByteFast()>>shift].entry buf[off+bufoff*stream2] = uint8(v2 >> 8) - br[stream2].advance(uint8(v2)) - v = single[br[stream].peekByteFast()>>shift].entry + v = single[uint8(br[stream].value>>shift)].entry + v2 = single[uint8(br[stream2].value>>shift)].entry + br[stream].bitsRead += uint8(v) + br[stream].value <<= v & 63 + br[stream2].bitsRead += uint8(v2) + br[stream2].value <<= v2 & 63 buf[off+bufoff*stream+1] = uint8(v >> 8) - br[stream].advance(uint8(v)) - - v2 = single[br[stream2].peekByteFast()>>shift].entry buf[off+bufoff*stream2+1] = uint8(v2 >> 8) - br[stream2].advance(uint8(v2)) - v = single[br[stream].peekByteFast()>>shift].entry + v = single[uint8(br[stream].value>>shift)].entry + v2 = single[uint8(br[stream2].value>>shift)].entry + 
br[stream].bitsRead += uint8(v) + br[stream].value <<= v & 63 + br[stream2].bitsRead += uint8(v2) + br[stream2].value <<= v2 & 63 buf[off+bufoff*stream+2] = uint8(v >> 8) - br[stream].advance(uint8(v)) - - v2 = single[br[stream2].peekByteFast()>>shift].entry buf[off+bufoff*stream2+2] = uint8(v2 >> 8) - br[stream2].advance(uint8(v2)) - v = single[br[stream].peekByteFast()>>shift].entry + v = single[uint8(br[stream].value>>shift)].entry + v2 = single[uint8(br[stream2].value>>shift)].entry + br[stream].bitsRead += uint8(v) + br[stream].value <<= v & 63 + br[stream2].bitsRead += uint8(v2) + br[stream2].value <<= v2 & 63 buf[off+bufoff*stream+3] = uint8(v >> 8) - br[stream].advance(uint8(v)) - - v2 = single[br[stream2].peekByteFast()>>shift].entry buf[off+bufoff*stream2+3] = uint8(v2 >> 8) - br[stream2].advance(uint8(v2)) } off += 4 @@ -1280,7 +1296,7 @@ func (d *Decoder) decompress4X8bitExactly(dst, src []byte) ([]byte, error) { } // Read value and increment offset. - v := single[br.peekByteFast()>>shift].entry + v := single[br.peekByteFast()].entry nBits := uint8(v) br.advance(nBits) bitsLeft -= int(nBits) diff --git a/vendor/github.com/klauspost/compress/zstd/bitreader.go b/vendor/github.com/klauspost/compress/zstd/bitreader.go index 85445853715..753d17df634 100644 --- a/vendor/github.com/klauspost/compress/zstd/bitreader.go +++ b/vendor/github.com/klauspost/compress/zstd/bitreader.go @@ -50,16 +50,23 @@ func (b *bitReader) getBits(n uint8) int { if n == 0 /*|| b.bitsRead >= 64 */ { return 0 } - return b.getBitsFast(n) + return int(b.get32BitsFast(n)) } -// getBitsFast requires that at least one bit is requested every time. +// get32BitsFast requires that at least one bit is requested every time. // There are no checks if the buffer is filled. 
-func (b *bitReader) getBitsFast(n uint8) int { +func (b *bitReader) get32BitsFast(n uint8) uint32 { const regMask = 64 - 1 v := uint32((b.value << (b.bitsRead & regMask)) >> ((regMask + 1 - n) & regMask)) b.bitsRead += n - return int(v) + return v +} + +func (b *bitReader) get16BitsFast(n uint8) uint16 { + const regMask = 64 - 1 + v := uint16((b.value << (b.bitsRead & regMask)) >> ((regMask + 1 - n) & regMask)) + b.bitsRead += n + return v } // fillFast() will make sure at least 32 bits are available. diff --git a/vendor/github.com/klauspost/compress/zstd/bitwriter.go b/vendor/github.com/klauspost/compress/zstd/bitwriter.go index 303ae90f944..b3661828509 100644 --- a/vendor/github.com/klauspost/compress/zstd/bitwriter.go +++ b/vendor/github.com/klauspost/compress/zstd/bitwriter.go @@ -38,7 +38,7 @@ func (b *bitWriter) addBits16NC(value uint16, bits uint8) { b.nBits += bits } -// addBits32NC will add up to 32 bits. +// addBits32NC will add up to 31 bits. // It will not check if there is space for them, // so the caller must ensure that it has flushed recently. func (b *bitWriter) addBits32NC(value uint32, bits uint8) { @@ -46,6 +46,26 @@ func (b *bitWriter) addBits32NC(value uint32, bits uint8) { b.nBits += bits } +// addBits64NC will add up to 64 bits. +// There must be space for 32 bits. +func (b *bitWriter) addBits64NC(value uint64, bits uint8) { + if bits <= 31 { + b.addBits32Clean(uint32(value), bits) + return + } + b.addBits32Clean(uint32(value), 32) + b.flush32() + b.addBits32Clean(uint32(value>>32), bits-32) +} + +// addBits32Clean will add up to 32 bits. +// It will not check if there is space for them. +// The input must not contain more bits than specified. +func (b *bitWriter) addBits32Clean(value uint32, bits uint8) { + b.bitContainer |= uint64(value) << (b.nBits & 63) + b.nBits += bits +} + // addBits16Clean will add up to 16 bits. value may not contain more set bits than indicated. 
// It will not check if there is space for them, so the caller must ensure that it has flushed recently. func (b *bitWriter) addBits16Clean(value uint16, bits uint8) { diff --git a/vendor/github.com/klauspost/compress/zstd/blockdec.go b/vendor/github.com/klauspost/compress/zstd/blockdec.go index 8a98c4562e0..dc587b2c949 100644 --- a/vendor/github.com/klauspost/compress/zstd/blockdec.go +++ b/vendor/github.com/klauspost/compress/zstd/blockdec.go @@ -76,12 +76,11 @@ type blockDec struct { // Window size of the block. WindowSize uint64 - history chan *history - input chan struct{} - result chan decodeOutput - sequenceBuf []seq - err error - decWG sync.WaitGroup + history chan *history + input chan struct{} + result chan decodeOutput + err error + decWG sync.WaitGroup // Frame to use for singlethreaded decoding. // Should not be used by the decoder itself since parent may be another frame. @@ -512,18 +511,7 @@ func (b *blockDec) decodeCompressed(hist *history) error { nSeqs = 0x7f00 + int(in[1]) + (int(in[2]) << 8) in = in[3:] } - // Allocate sequences - if cap(b.sequenceBuf) < nSeqs { - if b.lowMem { - b.sequenceBuf = make([]seq, nSeqs) - } else { - // Allocate max - b.sequenceBuf = make([]seq, nSeqs, maxSequences) - } - } else { - // Reuse buffer - b.sequenceBuf = b.sequenceBuf[:nSeqs] - } + var seqs = &sequenceDecs{} if nSeqs > 0 { if len(in) < 1 { diff --git a/vendor/github.com/klauspost/compress/zstd/blockenc.go b/vendor/github.com/klauspost/compress/zstd/blockenc.go index 3df185ee465..12e8f6f0b61 100644 --- a/vendor/github.com/klauspost/compress/zstd/blockenc.go +++ b/vendor/github.com/klauspost/compress/zstd/blockenc.go @@ -51,7 +51,7 @@ func (b *blockEnc) init() { if cap(b.literals) < maxCompressedBlockSize { b.literals = make([]byte, 0, maxCompressedBlockSize) } - const defSeqs = 200 + const defSeqs = 2000 if cap(b.sequences) < defSeqs { b.sequences = make([]seq, 0, defSeqs) } @@ -426,7 +426,7 @@ func fuzzFseEncoder(data []byte) int { return 0 } enc := 
fseEncoder{} - hist := enc.Histogram()[:256] + hist := enc.Histogram() maxSym := uint8(0) for i, v := range data { v = v & 63 @@ -722,52 +722,53 @@ func (b *blockEnc) encode(org []byte, raw, rawAllLits bool) error { println("Encoded seq", seq, s, "codes:", s.llCode, s.mlCode, s.ofCode, "states:", ll.state, ml.state, of.state, "bits:", llB, mlB, ofB) } seq-- - if llEnc.maxBits+mlEnc.maxBits+ofEnc.maxBits <= 32 { - // No need to flush (common) - for seq >= 0 { - s = b.sequences[seq] - wr.flush32() - llB, ofB, mlB := llTT[s.llCode], ofTT[s.ofCode], mlTT[s.mlCode] - // tabelog max is 8 for all. - of.encode(ofB) - ml.encode(mlB) - ll.encode(llB) - wr.flush32() - - // We checked that all can stay within 32 bits - wr.addBits32NC(s.litLen, llB.outBits) - wr.addBits32NC(s.matchLen, mlB.outBits) - wr.addBits32NC(s.offset, ofB.outBits) - - if debugSequences { - println("Encoded seq", seq, s) - } - - seq-- - } - } else { - for seq >= 0 { - s = b.sequences[seq] - wr.flush32() - llB, ofB, mlB := llTT[s.llCode], ofTT[s.ofCode], mlTT[s.mlCode] - // tabelog max is below 8 for each. - of.encode(ofB) - ml.encode(mlB) - ll.encode(llB) - wr.flush32() - - // ml+ll = max 32 bits total - wr.addBits32NC(s.litLen, llB.outBits) - wr.addBits32NC(s.matchLen, mlB.outBits) - wr.flush32() - wr.addBits32NC(s.offset, ofB.outBits) - - if debugSequences { - println("Encoded seq", seq, s) - } - - seq-- - } + // Store sequences in reverse... + for seq >= 0 { + s = b.sequences[seq] + + ofB := ofTT[s.ofCode] + wr.flush32() // tablelog max is below 8 for each, so it will fill max 24 bits. + //of.encode(ofB) + nbBitsOut := (uint32(of.state) + ofB.deltaNbBits) >> 16 + dstState := int32(of.state>>(nbBitsOut&15)) + int32(ofB.deltaFindState) + wr.addBits16NC(of.state, uint8(nbBitsOut)) + of.state = of.stateTable[dstState] + + // Accumulate extra bits. 
+ outBits := ofB.outBits & 31 + extraBits := uint64(s.offset & bitMask32[outBits]) + extraBitsN := outBits + + mlB := mlTT[s.mlCode] + //ml.encode(mlB) + nbBitsOut = (uint32(ml.state) + mlB.deltaNbBits) >> 16 + dstState = int32(ml.state>>(nbBitsOut&15)) + int32(mlB.deltaFindState) + wr.addBits16NC(ml.state, uint8(nbBitsOut)) + ml.state = ml.stateTable[dstState] + + outBits = mlB.outBits & 31 + extraBits = extraBits<<outBits | uint64(s.matchLen&bitMask32[outBits]) + extraBitsN += outBits + + llB := llTT[s.llCode] + //ll.encode(llB) + nbBitsOut = (uint32(ll.state) + llB.deltaNbBits) >> 16 + dstState = int32(ll.state>>(nbBitsOut&15)) + int32(llB.deltaFindState) + wr.addBits16NC(ll.state, uint8(nbBitsOut)) + ll.state = ll.stateTable[dstState] + + outBits = llB.outBits & 31 + extraBits = extraBits<<outBits | uint64(s.litLen&bitMask32[outBits]) + extraBitsN += outBits + + wr.flush32() + wr.addBits64NC(extraBits, extraBitsN) + + if debugSequences { + println("Encoded seq", seq, s) + } + + seq-- + } } @@ ... @@ func (b *blockEnc) genCodes() { if len(b.sequences) > math.MaxUint16 { panic("can only encode up to 64K sequences") } // No bounds checks after here: - llH := b.coders.llEnc.Histogram()[:256] - ofH := b.coders.ofEnc.Histogram()[:256] - mlH := b.coders.mlEnc.Histogram()[:256] + llH := b.coders.llEnc.Histogram() + ofH := b.coders.ofEnc.Histogram() + mlH := b.coders.mlEnc.Histogram() for i := range llH { llH[i] = 0 } @@ -820,7 +820,8 @@ func (b *blockEnc) genCodes() { } var llMax, ofMax, mlMax uint8 - for i, seq := range b.sequences { + for i := range b.sequences { + seq := &b.sequences[i] v := llCode(seq.litLen) seq.llCode = v llH[v]++ @@ -844,7 +845,6 @@ func (b *blockEnc) genCodes() { panic(fmt.Errorf("mlMax > maxMatchLengthSymbol (%d), matchlen: %d", mlMax, seq.matchLen)) } } - b.sequences[i] = seq } maxCount := func(a []uint32) int { var max uint32 diff --git a/vendor/github.com/klauspost/compress/zstd/decodeheader.go b/vendor/github.com/klauspost/compress/zstd/decodeheader.go index 69736e8d4bb..5022e71c836 100644 --- a/vendor/github.com/klauspost/compress/zstd/decodeheader.go +++ b/vendor/github.com/klauspost/compress/zstd/decodeheader.go @@ -5,6 +5,7 @@ package zstd import ( "bytes" + "encoding/binary" "errors" "io" ) @@ -15,18 +16,50 @@ const HeaderMaxSize = 14 + 3 // Header contains information about the first frame and block within that.
type Header struct { - // Window Size the window of data to keep while decoding. - // Will only be set if HasFCS is false. - WindowSize uint64 + // SingleSegment specifies whether the data is to be decompressed into a + // single contiguous memory segment. + // It implies that WindowSize is invalid and that FrameContentSize is valid. + SingleSegment bool - // Frame content size. - // Expected size of the entire frame. - FrameContentSize uint64 + // WindowSize is the window of data to keep while decoding. + // Will only be set if SingleSegment is false. + WindowSize uint64 // Dictionary ID. // If 0, no dictionary. DictionaryID uint32 + // HasFCS specifies whether FrameContentSize has a valid value. + HasFCS bool + + // FrameContentSize is the expected uncompressed size of the entire frame. + FrameContentSize uint64 + + // Skippable will be true if the frame is meant to be skipped. + // This implies that FirstBlock.OK is false. + Skippable bool + + // SkippableID is the user-specific ID for the skippable frame. + // Valid values are between 0 to 15, inclusive. + SkippableID int + + // SkippableSize is the length of the user data to skip following + // the header. + SkippableSize uint32 + + // HeaderSize is the raw size of the frame header. + // + // For normal frames, it includes the size of the magic number and + // the size of the header (per section 3.1.1.1). + // It does not include the size for any data blocks (section 3.1.1.2) nor + // the size for the trailing content checksum. + // + // For skippable frames, this counts the size of the magic number + // along with the size of the size field of the payload. + // It does not include the size of the skippable payload itself. + // The total frame size is the HeaderSize plus the SkippableSize. + HeaderSize int + // First block information. FirstBlock struct { // OK will be set if first block could be decoded. 
@@ -51,17 +84,9 @@ type Header struct { CompressedSize int } - // Skippable will be true if the frame is meant to be skipped. - // No other information will be populated. - Skippable bool - // If set there is a checksum present for the block content. + // The checksum field at the end is always 4 bytes long. HasCheckSum bool - - // If this is true FrameContentSize will have a valid value - HasFCS bool - - SingleSegment bool } // Decode the header from the beginning of the stream. @@ -71,39 +96,46 @@ type Header struct { // If there isn't enough input, io.ErrUnexpectedEOF is returned. // The FirstBlock.OK will indicate if enough information was available to decode the first block header. func (h *Header) Decode(in []byte) error { + *h = Header{} if len(in) < 4 { return io.ErrUnexpectedEOF } + h.HeaderSize += 4 b, in := in[:4], in[4:] if !bytes.Equal(b, frameMagic) { if !bytes.Equal(b[1:4], skippableFrameMagic) || b[0]&0xf0 != 0x50 { return ErrMagicMismatch } - *h = Header{Skippable: true} + if len(in) < 4 { + return io.ErrUnexpectedEOF + } + h.HeaderSize += 4 + h.Skippable = true + h.SkippableID = int(b[0] & 0xf) + h.SkippableSize = binary.LittleEndian.Uint32(in) return nil } + + // Read Window_Descriptor + // https://github.com/facebook/zstd/blob/dev/doc/zstd_compression_format.md#window_descriptor if len(in) < 1 { return io.ErrUnexpectedEOF } - - // Clear output - *h = Header{} fhd, in := in[0], in[1:] + h.HeaderSize++ h.SingleSegment = fhd&(1<<5) != 0 h.HasCheckSum = fhd&(1<<2) != 0 - if fhd&(1<<3) != 0 { return errors.New("reserved bit set on frame header") } - // Read Window_Descriptor - // https://github.com/facebook/zstd/blob/dev/doc/zstd_compression_format.md#window_descriptor if !h.SingleSegment { if len(in) < 1 { return io.ErrUnexpectedEOF } var wd byte wd, in = in[0], in[1:] + h.HeaderSize++ windowLog := 10 + (wd >> 3) windowBase := uint64(1) << windowLog windowAdd := (windowBase / 8) * uint64(wd&0x7) @@ -120,9 +152,7 @@ func (h *Header) 
Decode(in []byte) error { return io.ErrUnexpectedEOF } b, in = in[:size], in[size:] - if b == nil { - return io.ErrUnexpectedEOF - } + h.HeaderSize += int(size) switch size { case 1: h.DictionaryID = uint32(b[0]) @@ -152,9 +182,7 @@ func (h *Header) Decode(in []byte) error { return io.ErrUnexpectedEOF } b, in = in[:fcsSize], in[fcsSize:] - if b == nil { - return io.ErrUnexpectedEOF - } + h.HeaderSize += int(fcsSize) switch fcsSize { case 1: h.FrameContentSize = uint64(b[0]) diff --git a/vendor/github.com/klauspost/compress/zstd/enc_base.go b/vendor/github.com/klauspost/compress/zstd/enc_base.go index 295cd602a42..15ae8ee8077 100644 --- a/vendor/github.com/klauspost/compress/zstd/enc_base.go +++ b/vendor/github.com/klauspost/compress/zstd/enc_base.go @@ -108,11 +108,6 @@ func (e *fastBase) UseBlock(enc *blockEnc) { e.blk = enc } -func (e *fastBase) matchlenNoHist(s, t int32, src []byte) int32 { - // Extend the match to be as long as possible. - return int32(matchLen(src[s:], src[t:])) -} - func (e *fastBase) matchlen(s, t int32, src []byte) int32 { if debugAsserts { if s < 0 { @@ -131,9 +126,24 @@ func (e *fastBase) matchlen(s, t int32, src []byte) int32 { panic(fmt.Sprintf("len(src)-s (%d) > maxCompressedBlockSize (%d)", len(src)-int(s), maxCompressedBlockSize)) } } + a := src[s:] + b := src[t:] + b = b[:len(a)] + end := int32((len(a) >> 3) << 3) + for i := int32(0); i < end; i += 8 { + if diff := load6432(a, i) ^ load6432(b, i); diff != 0 { + return i + int32(bits.TrailingZeros64(diff)>>3) + } + } - // Extend the match to be as long as possible. - return int32(matchLen(src[s:], src[t:])) + a = a[end:] + b = b[end:] + for i := range a { + if a[i] != b[i] { + return int32(i) + end + } + } + return int32(len(a)) + end } // Reset the encoding table. 
diff --git a/vendor/github.com/klauspost/compress/zstd/enc_fast.go b/vendor/github.com/klauspost/compress/zstd/enc_fast.go index f2502629bc5..5f08a283023 100644 --- a/vendor/github.com/klauspost/compress/zstd/enc_fast.go +++ b/vendor/github.com/klauspost/compress/zstd/enc_fast.go @@ -6,8 +6,6 @@ package zstd import ( "fmt" - "math" - "math/bits" ) const ( @@ -136,20 +134,7 @@ encodeLoop: // Consider history as well. var seq seq var length int32 - // length = 4 + e.matchlen(s+6, repIndex+4, src) - { - a := src[s+6:] - b := src[repIndex+4:] - endI := len(a) & (math.MaxInt32 - 7) - length = int32(endI) + 4 - for i := 0; i < endI; i += 8 { - if diff := load64(a, i) ^ load64(b, i); diff != 0 { - length = int32(i+bits.TrailingZeros64(diff)>>3) + 4 - break - } - } - } - + length = 4 + e.matchlen(s+6, repIndex+4, src) seq.matchLen = uint32(length - zstdMinMatch) // We might be able to match backwards. @@ -236,20 +221,7 @@ encodeLoop: } // Extend the 4-byte match as long as possible. - //l := e.matchlen(s+4, t+4, src) + 4 - var l int32 - { - a := src[s+4:] - b := src[t+4:] - endI := len(a) & (math.MaxInt32 - 7) - l = int32(endI) + 4 - for i := 0; i < endI; i += 8 { - if diff := load64(a, i) ^ load64(b, i); diff != 0 { - l = int32(i+bits.TrailingZeros64(diff)>>3) + 4 - break - } - } - } + l := e.matchlen(s+4, t+4, src) + 4 // Extend backwards tMin := s - e.maxMatchOff @@ -286,20 +258,7 @@ encodeLoop: if o2 := s - offset2; canRepeat && load3232(src, o2) == uint32(cv) { // We have at least 4 byte match. // No need to check backwards. We come straight from a match - //l := 4 + e.matchlen(s+4, o2+4, src) - var l int32 - { - a := src[s+4:] - b := src[o2+4:] - endI := len(a) & (math.MaxInt32 - 7) - l = int32(endI) + 4 - for i := 0; i < endI; i += 8 { - if diff := load64(a, i) ^ load64(b, i); diff != 0 { - l = int32(i+bits.TrailingZeros64(diff)>>3) + 4 - break - } - } - } + l := 4 + e.matchlen(s+4, o2+4, src) // Store this, since we have it. 
nextHash := hashLen(cv, hashLog, tableFastHashLen) @@ -418,21 +377,7 @@ encodeLoop: if len(blk.sequences) > 2 && load3232(src, repIndex) == uint32(cv>>16) { // Consider history as well. var seq seq - // length := 4 + e.matchlen(s+6, repIndex+4, src) - // length := 4 + int32(matchLen(src[s+6:], src[repIndex+4:])) - var length int32 - { - a := src[s+6:] - b := src[repIndex+4:] - endI := len(a) & (math.MaxInt32 - 7) - length = int32(endI) + 4 - for i := 0; i < endI; i += 8 { - if diff := load64(a, i) ^ load64(b, i); diff != 0 { - length = int32(i+bits.TrailingZeros64(diff)>>3) + 4 - break - } - } - } + length := 4 + e.matchlen(s+6, repIndex+4, src) seq.matchLen = uint32(length - zstdMinMatch) @@ -522,21 +467,7 @@ encodeLoop: panic(fmt.Sprintf("t (%d) < 0 ", t)) } // Extend the 4-byte match as long as possible. - //l := e.matchlenNoHist(s+4, t+4, src) + 4 - // l := int32(matchLen(src[s+4:], src[t+4:])) + 4 - var l int32 - { - a := src[s+4:] - b := src[t+4:] - endI := len(a) & (math.MaxInt32 - 7) - l = int32(endI) + 4 - for i := 0; i < endI; i += 8 { - if diff := load64(a, i) ^ load64(b, i); diff != 0 { - l = int32(i+bits.TrailingZeros64(diff)>>3) + 4 - break - } - } - } + l := e.matchlen(s+4, t+4, src) + 4 // Extend backwards tMin := s - e.maxMatchOff @@ -573,21 +504,7 @@ encodeLoop: if o2 := s - offset2; len(blk.sequences) > 2 && load3232(src, o2) == uint32(cv) { // We have at least 4 byte match. // No need to check backwards. We come straight from a match - //l := 4 + e.matchlenNoHist(s+4, o2+4, src) - // l := 4 + int32(matchLen(src[s+4:], src[o2+4:])) - var l int32 - { - a := src[s+4:] - b := src[o2+4:] - endI := len(a) & (math.MaxInt32 - 7) - l = int32(endI) + 4 - for i := 0; i < endI; i += 8 { - if diff := load64(a, i) ^ load64(b, i); diff != 0 { - l = int32(i+bits.TrailingZeros64(diff)>>3) + 4 - break - } - } - } + l := 4 + e.matchlen(s+4, o2+4, src) // Store this, since we have it. 
nextHash := hashLen(cv, hashLog, tableFastHashLen) @@ -731,19 +648,7 @@ encodeLoop: // Consider history as well. var seq seq var length int32 - // length = 4 + e.matchlen(s+6, repIndex+4, src) - { - a := src[s+6:] - b := src[repIndex+4:] - endI := len(a) & (math.MaxInt32 - 7) - length = int32(endI) + 4 - for i := 0; i < endI; i += 8 { - if diff := load64(a, i) ^ load64(b, i); diff != 0 { - length = int32(i+bits.TrailingZeros64(diff)>>3) + 4 - break - } - } - } + length = 4 + e.matchlen(s+6, repIndex+4, src) seq.matchLen = uint32(length - zstdMinMatch) @@ -831,20 +736,7 @@ encodeLoop: } // Extend the 4-byte match as long as possible. - //l := e.matchlen(s+4, t+4, src) + 4 - var l int32 - { - a := src[s+4:] - b := src[t+4:] - endI := len(a) & (math.MaxInt32 - 7) - l = int32(endI) + 4 - for i := 0; i < endI; i += 8 { - if diff := load64(a, i) ^ load64(b, i); diff != 0 { - l = int32(i+bits.TrailingZeros64(diff)>>3) + 4 - break - } - } - } + l := e.matchlen(s+4, t+4, src) + 4 // Extend backwards tMin := s - e.maxMatchOff @@ -881,20 +773,7 @@ encodeLoop: if o2 := s - offset2; canRepeat && load3232(src, o2) == uint32(cv) { // We have at least 4 byte match. // No need to check backwards. We come straight from a match - //l := 4 + e.matchlen(s+4, o2+4, src) - var l int32 - { - a := src[s+4:] - b := src[o2+4:] - endI := len(a) & (math.MaxInt32 - 7) - l = int32(endI) + 4 - for i := 0; i < endI; i += 8 { - if diff := load64(a, i) ^ load64(b, i); diff != 0 { - l = int32(i+bits.TrailingZeros64(diff)>>3) + 4 - break - } - } - } + l := 4 + e.matchlen(s+4, o2+4, src) // Store this, since we have it. 
nextHash := hashLen(cv, hashLog, tableFastHashLen) diff --git a/vendor/github.com/klauspost/compress/zstd/encoder_options.go b/vendor/github.com/klauspost/compress/zstd/encoder_options.go index 7d29e1d689e..5f2e1d020ee 100644 --- a/vendor/github.com/klauspost/compress/zstd/encoder_options.go +++ b/vendor/github.com/klauspost/compress/zstd/encoder_options.go @@ -24,6 +24,7 @@ type encoderOptions struct { allLitEntropy bool customWindow bool customALEntropy bool + customBlockSize bool lowMem bool dict *dict } @@ -33,7 +34,7 @@ func (o *encoderOptions) setDefault() { concurrent: runtime.GOMAXPROCS(0), crc: true, single: nil, - blockSize: 1 << 16, + blockSize: maxCompressedBlockSize, windowSize: 8 << 20, level: SpeedDefault, allLitEntropy: true, @@ -106,6 +107,7 @@ func WithWindowSize(n int) EOption { o.customWindow = true if o.blockSize > o.windowSize { o.blockSize = o.windowSize + o.customBlockSize = true } return nil } @@ -188,10 +190,9 @@ func EncoderLevelFromZstd(level int) EncoderLevel { return SpeedDefault case level >= 6 && level < 10: return SpeedBetterCompression - case level >= 10: + default: return SpeedBestCompression } - return SpeedDefault } // String provides a string representation of the compression level. @@ -222,6 +223,9 @@ func WithEncoderLevel(l EncoderLevel) EOption { switch o.level { case SpeedFastest: o.windowSize = 4 << 20 + if !o.customBlockSize { + o.blockSize = 1 << 16 + } case SpeedDefault: o.windowSize = 8 << 20 case SpeedBetterCompression: diff --git a/vendor/github.com/klauspost/compress/zstd/fse_decoder.go b/vendor/github.com/klauspost/compress/zstd/fse_decoder.go index e6d3d49b39c..bb3d4fd6c31 100644 --- a/vendor/github.com/klauspost/compress/zstd/fse_decoder.go +++ b/vendor/github.com/klauspost/compress/zstd/fse_decoder.go @@ -379,7 +379,7 @@ func (s decSymbol) final() (int, uint8) { // This can only be used if no symbols are 0 bits. // At least tablelog bits must be available in the bit reader. 
func (s *fseState) nextFast(br *bitReader) (uint32, uint8) { - lowBits := uint16(br.getBitsFast(s.state.nbBits())) + lowBits := br.get16BitsFast(s.state.nbBits()) s.state = s.dt[s.state.newState()+lowBits] return s.state.baseline(), s.state.addBits() } diff --git a/vendor/github.com/klauspost/compress/zstd/fse_encoder.go b/vendor/github.com/klauspost/compress/zstd/fse_encoder.go index b4757ee3f03..5442061b18d 100644 --- a/vendor/github.com/klauspost/compress/zstd/fse_encoder.go +++ b/vendor/github.com/klauspost/compress/zstd/fse_encoder.go @@ -62,9 +62,8 @@ func (s symbolTransform) String() string { // To indicate that you have populated the histogram call HistogramFinished // with the value of the highest populated symbol, as well as the number of entries // in the most populated entry. These are accepted at face value. -// The returned slice will always be length 256. -func (s *fseEncoder) Histogram() []uint32 { - return s.count[:] +func (s *fseEncoder) Histogram() *[256]uint32 { + return &s.count } // HistogramFinished can be called to indicate that the histogram has been populated. diff --git a/vendor/github.com/klauspost/compress/zstd/internal/xxhash/xxhash_amd64.s b/vendor/github.com/klauspost/compress/zstd/internal/xxhash/xxhash_amd64.s index be8db5bf796..cea17856197 100644 --- a/vendor/github.com/klauspost/compress/zstd/internal/xxhash/xxhash_amd64.s +++ b/vendor/github.com/klauspost/compress/zstd/internal/xxhash/xxhash_amd64.s @@ -1,6 +1,7 @@ // +build !appengine // +build gc // +build !purego +// +build !noasm #include "textflag.h" diff --git a/vendor/github.com/klauspost/compress/zstd/internal/xxhash/xxhash_arm64.s b/vendor/github.com/klauspost/compress/zstd/internal/xxhash/xxhash_arm64.s new file mode 100644 index 00000000000..4d64a17d69c --- /dev/null +++ b/vendor/github.com/klauspost/compress/zstd/internal/xxhash/xxhash_arm64.s @@ -0,0 +1,186 @@ +// +build gc,!purego,!noasm + +#include "textflag.h" + +// Register allocation. 
+#define digest R1 +#define h R2 // Return value. +#define p R3 // Input pointer. +#define len R4 +#define nblocks R5 // len / 32. +#define prime1 R7 +#define prime2 R8 +#define prime3 R9 +#define prime4 R10 +#define prime5 R11 +#define v1 R12 +#define v2 R13 +#define v3 R14 +#define v4 R15 +#define x1 R20 +#define x2 R21 +#define x3 R22 +#define x4 R23 + +#define round(acc, x) \ + MADD prime2, acc, x, acc \ + ROR $64-31, acc \ + MUL prime1, acc \ + +// x = round(0, x). +#define round0(x) \ + MUL prime2, x \ + ROR $64-31, x \ + MUL prime1, x \ + +#define mergeRound(x) \ + round0(x) \ + EOR x, h \ + MADD h, prime4, prime1, h \ + +// Update v[1-4] with 32-byte blocks. Assumes len >= 32. +#define blocksLoop() \ + LSR $5, len, nblocks \ + PCALIGN $16 \ + loop: \ + LDP.P 32(p), (x1, x2) \ + round(v1, x1) \ + LDP -16(p), (x3, x4) \ + round(v2, x2) \ + SUB $1, nblocks \ + round(v3, x3) \ + round(v4, x4) \ + CBNZ nblocks, loop \ + +// The primes are repeated here to ensure that they're stored +// in a contiguous array, so we can load them with LDP. 
+DATA primes<> +0(SB)/8, $11400714785074694791 +DATA primes<> +8(SB)/8, $14029467366897019727 +DATA primes<>+16(SB)/8, $1609587929392839161 +DATA primes<>+24(SB)/8, $9650029242287828579 +DATA primes<>+32(SB)/8, $2870177450012600261 +GLOBL primes<>(SB), NOPTR+RODATA, $40 + +// func Sum64(b []byte) uint64 +TEXT ·Sum64(SB), NOFRAME+NOSPLIT, $0-32 + LDP b_base+0(FP), (p, len) + + LDP primes<> +0(SB), (prime1, prime2) + LDP primes<>+16(SB), (prime3, prime4) + MOVD primes<>+32(SB), prime5 + + CMP $32, len + CSEL LO, prime5, ZR, h // if len < 32 { h = prime5 } else { h = 0 } + BLO afterLoop + + ADD prime1, prime2, v1 + MOVD prime2, v2 + MOVD $0, v3 + NEG prime1, v4 + + blocksLoop() + + ROR $64-1, v1, x1 + ROR $64-7, v2, x2 + ADD x1, x2 + ROR $64-12, v3, x3 + ROR $64-18, v4, x4 + ADD x3, x4 + ADD x2, x4, h + + mergeRound(v1) + mergeRound(v2) + mergeRound(v3) + mergeRound(v4) + +afterLoop: + ADD len, h + + TBZ $4, len, try8 + LDP.P 16(p), (x1, x2) + + round0(x1) + ROR $64-27, h + EOR x1 @> 64-27, h, h + MADD h, prime4, prime1, h + + round0(x2) + ROR $64-27, h + EOR x2 @> 64-27, h + MADD h, prime4, prime1, h + +try8: + TBZ $3, len, try4 + MOVD.P 8(p), x1 + + round0(x1) + ROR $64-27, h + EOR x1 @> 64-27, h + MADD h, prime4, prime1, h + +try4: + TBZ $2, len, try2 + MOVWU.P 4(p), x2 + + MUL prime1, x2 + ROR $64-23, h + EOR x2 @> 64-23, h + MADD h, prime3, prime2, h + +try2: + TBZ $1, len, try1 + MOVHU.P 2(p), x3 + AND $255, x3, x1 + LSR $8, x3, x2 + + MUL prime5, x1 + ROR $64-11, h + EOR x1 @> 64-11, h + MUL prime1, h + + MUL prime5, x2 + ROR $64-11, h + EOR x2 @> 64-11, h + MUL prime1, h + +try1: + TBZ $0, len, end + MOVBU (p), x4 + + MUL prime5, x4 + ROR $64-11, h + EOR x4 @> 64-11, h + MUL prime1, h + +end: + EOR h >> 33, h + MUL prime2, h + EOR h >> 29, h + MUL prime3, h + EOR h >> 32, h + + MOVD h, ret+24(FP) + RET + +// func writeBlocks(d *Digest, b []byte) int +// +// Assumes len(b) >= 32. 
+TEXT ·writeBlocks(SB), NOFRAME+NOSPLIT, $0-40 + LDP primes<>(SB), (prime1, prime2) + + // Load state. Assume v[1-4] are stored contiguously. + MOVD d+0(FP), digest + LDP 0(digest), (v1, v2) + LDP 16(digest), (v3, v4) + + LDP b_base+8(FP), (p, len) + + blocksLoop() + + // Store updated state. + STP (v1, v2), 0(digest) + STP (v3, v4), 16(digest) + + BIC $31, len + MOVD len, ret+32(FP) + RET diff --git a/vendor/github.com/klauspost/compress/zstd/internal/xxhash/xxhash_amd64.go b/vendor/github.com/klauspost/compress/zstd/internal/xxhash/xxhash_asm.go similarity index 51% rename from vendor/github.com/klauspost/compress/zstd/internal/xxhash/xxhash_amd64.go rename to vendor/github.com/klauspost/compress/zstd/internal/xxhash/xxhash_asm.go index 0ae847f75b0..1a1fac9c261 100644 --- a/vendor/github.com/klauspost/compress/zstd/internal/xxhash/xxhash_amd64.go +++ b/vendor/github.com/klauspost/compress/zstd/internal/xxhash/xxhash_asm.go @@ -1,5 +1,9 @@ -//go:build !appengine && gc && !purego -// +build !appengine,gc,!purego +//go:build (amd64 || arm64) && !appengine && gc && !purego && !noasm +// +build amd64 arm64 +// +build !appengine +// +build gc +// +build !purego +// +build !noasm package xxhash diff --git a/vendor/github.com/klauspost/compress/zstd/internal/xxhash/xxhash_other.go b/vendor/github.com/klauspost/compress/zstd/internal/xxhash/xxhash_other.go index 1f52f296e71..209cb4a999c 100644 --- a/vendor/github.com/klauspost/compress/zstd/internal/xxhash/xxhash_other.go +++ b/vendor/github.com/klauspost/compress/zstd/internal/xxhash/xxhash_other.go @@ -1,5 +1,5 @@ -//go:build !amd64 || appengine || !gc || purego -// +build !amd64 appengine !gc purego +//go:build (!amd64 && !arm64) || appengine || !gc || purego || noasm +// +build !amd64,!arm64 appengine !gc purego noasm package xxhash diff --git a/vendor/github.com/klauspost/compress/zstd/seqdec.go b/vendor/github.com/klauspost/compress/zstd/seqdec.go index 1dd39e63b7e..bc731e4cb69 100644 --- 
a/vendor/github.com/klauspost/compress/zstd/seqdec.go +++ b/vendor/github.com/klauspost/compress/zstd/seqdec.go @@ -278,7 +278,7 @@ func (s *sequenceDecs) decode(seqs int, br *bitReader, hist []byte) error { mlState = mlTable[mlState.newState()&maxTableMask] ofState = ofTable[ofState.newState()&maxTableMask] } else { - bits := br.getBitsFast(nBits) + bits := br.get32BitsFast(nBits) lowBits := uint16(bits >> ((ofState.nbBits() + mlState.nbBits()) & 31)) llState = llTable[(llState.newState()+lowBits)&maxTableMask] @@ -326,7 +326,7 @@ func (s *sequenceDecs) updateAlt(br *bitReader) { s.offsets.state.state = s.offsets.state.dt[c.newState()] return } - bits := br.getBitsFast(nBits) + bits := br.get32BitsFast(nBits) lowBits := uint16(bits >> ((c.nbBits() + b.nbBits()) & 31)) s.litLengths.state.state = s.litLengths.state.dt[a.newState()+lowBits] diff --git a/vendor/google.golang.org/grpc/attributes/attributes.go b/vendor/google.golang.org/grpc/attributes/attributes.go index 6ff2792ee4f..ae13ddac14e 100644 --- a/vendor/google.golang.org/grpc/attributes/attributes.go +++ b/vendor/google.golang.org/grpc/attributes/attributes.go @@ -69,7 +69,9 @@ func (a *Attributes) Value(key interface{}) interface{} { // bool' is implemented for a value in the attributes, it is called to // determine if the value matches the one stored in the other attributes. If // Equal is not implemented, standard equality is used to determine if the two -// values are equal. +// values are equal. Note that some types (e.g. maps) aren't comparable by +// default, so they must be wrapped in a struct, or in an alias type, with Equal +// defined. 
func (a *Attributes) Equal(o *Attributes) bool { if a == nil && o == nil { return true diff --git a/vendor/google.golang.org/grpc/credentials/insecure/insecure.go b/vendor/google.golang.org/grpc/credentials/insecure/insecure.go index 22a8f996a68..4fbed12565f 100644 --- a/vendor/google.golang.org/grpc/credentials/insecure/insecure.go +++ b/vendor/google.golang.org/grpc/credentials/insecure/insecure.go @@ -18,11 +18,6 @@ // Package insecure provides an implementation of the // credentials.TransportCredentials interface which disables transport security. -// -// Experimental -// -// Notice: This package is EXPERIMENTAL and may be changed or removed in a -// later release. package insecure import ( diff --git a/vendor/google.golang.org/grpc/dialoptions.go b/vendor/google.golang.org/grpc/dialoptions.go index 063f1e903c0..c4bf09f9e94 100644 --- a/vendor/google.golang.org/grpc/dialoptions.go +++ b/vendor/google.golang.org/grpc/dialoptions.go @@ -272,7 +272,7 @@ func withBackoff(bs internalbackoff.Strategy) DialOption { }) } -// WithBlock returns a DialOption which makes caller of Dial blocks until the +// WithBlock returns a DialOption which makes callers of Dial block until the // underlying connection is up. Without this, Dial returns immediately and // connecting the server happens in background. func WithBlock() DialOption { @@ -304,7 +304,7 @@ func WithReturnConnectionError() DialOption { // WithCredentialsBundle or WithPerRPCCredentials) which require transport // security is incompatible and will cause grpc.Dial() to fail. // -// Deprecated: use insecure.NewCredentials() instead. +// Deprecated: use WithTransportCredentials and insecure.NewCredentials() instead. // Will be supported throughout 1.x. 
func WithInsecure() DialOption { return newFuncDialOption(func(o *dialOptions) { diff --git a/vendor/google.golang.org/grpc/grpclog/loggerv2.go b/vendor/google.golang.org/grpc/grpclog/loggerv2.go index 34098bb8eb5..7c1f6640903 100644 --- a/vendor/google.golang.org/grpc/grpclog/loggerv2.go +++ b/vendor/google.golang.org/grpc/grpclog/loggerv2.go @@ -248,12 +248,12 @@ func (g *loggerT) V(l int) bool { // later release. type DepthLoggerV2 interface { LoggerV2 - // InfoDepth logs to INFO log at the specified depth. Arguments are handled in the manner of fmt.Print. + // InfoDepth logs to INFO log at the specified depth. Arguments are handled in the manner of fmt.Println. InfoDepth(depth int, args ...interface{}) - // WarningDepth logs to WARNING log at the specified depth. Arguments are handled in the manner of fmt.Print. + // WarningDepth logs to WARNING log at the specified depth. Arguments are handled in the manner of fmt.Println. WarningDepth(depth int, args ...interface{}) - // ErrorDetph logs to ERROR log at the specified depth. Arguments are handled in the manner of fmt.Print. + // ErrorDepth logs to ERROR log at the specified depth. Arguments are handled in the manner of fmt.Println. ErrorDepth(depth int, args ...interface{}) - // FatalDepth logs to FATAL log at the specified depth. Arguments are handled in the manner of fmt.Print. + // FatalDepth logs to FATAL log at the specified depth. Arguments are handled in the manner of fmt.Println. 
FatalDepth(depth int, args ...interface{}) } diff --git a/vendor/google.golang.org/grpc/internal/envconfig/xds.go b/vendor/google.golang.org/grpc/internal/envconfig/xds.go index 93522d716d1..9bad03cec64 100644 --- a/vendor/google.golang.org/grpc/internal/envconfig/xds.go +++ b/vendor/google.golang.org/grpc/internal/envconfig/xds.go @@ -42,6 +42,7 @@ const ( aggregateAndDNSSupportEnv = "GRPC_XDS_EXPERIMENTAL_ENABLE_AGGREGATE_AND_LOGICAL_DNS_CLUSTER" rbacSupportEnv = "GRPC_XDS_EXPERIMENTAL_RBAC" federationEnv = "GRPC_EXPERIMENTAL_XDS_FEDERATION" + rlsInXDSEnv = "GRPC_EXPERIMENTAL_XDS_RLS_LB" c2pResolverTestOnlyTrafficDirectorURIEnv = "GRPC_TEST_ONLY_GOOGLE_C2P_RESOLVER_TRAFFIC_DIRECTOR_URI" ) @@ -85,6 +86,12 @@ var ( // XDSFederation indicates whether federation support is enabled. XDSFederation = strings.EqualFold(os.Getenv(federationEnv), "true") + // XDSRLS indicates whether processing of Cluster Specifier plugins and + // support for the RLS CLuster Specifier is enabled, which can be enabled by + // setting the environment variable "GRPC_EXPERIMENTAL_XDS_RLS_LB" to + // "true". + XDSRLS = strings.EqualFold(os.Getenv(rlsInXDSEnv), "true") + // C2PResolverTestOnlyTrafficDirectorURI is the TD URI for testing. C2PResolverTestOnlyTrafficDirectorURI = os.Getenv(c2pResolverTestOnlyTrafficDirectorURIEnv) ) diff --git a/vendor/google.golang.org/grpc/internal/grpclog/grpclog.go b/vendor/google.golang.org/grpc/internal/grpclog/grpclog.go index e6f975cbf6a..30a3b4258fc 100644 --- a/vendor/google.golang.org/grpc/internal/grpclog/grpclog.go +++ b/vendor/google.golang.org/grpc/internal/grpclog/grpclog.go @@ -115,12 +115,12 @@ type LoggerV2 interface { // Notice: This type is EXPERIMENTAL and may be changed or removed in a // later release. type DepthLoggerV2 interface { - // InfoDepth logs to INFO log at the specified depth. Arguments are handled in the manner of fmt.Print. + // InfoDepth logs to INFO log at the specified depth. 
Arguments are handled in the manner of fmt.Println. InfoDepth(depth int, args ...interface{}) - // WarningDepth logs to WARNING log at the specified depth. Arguments are handled in the manner of fmt.Print. + // WarningDepth logs to WARNING log at the specified depth. Arguments are handled in the manner of fmt.Println. WarningDepth(depth int, args ...interface{}) - // ErrorDetph logs to ERROR log at the specified depth. Arguments are handled in the manner of fmt.Print. + // ErrorDepth logs to ERROR log at the specified depth. Arguments are handled in the manner of fmt.Println. ErrorDepth(depth int, args ...interface{}) - // FatalDepth logs to FATAL log at the specified depth. Arguments are handled in the manner of fmt.Print. + // FatalDepth logs to FATAL log at the specified depth. Arguments are handled in the manner of fmt.Println. FatalDepth(depth int, args ...interface{}) } diff --git a/vendor/google.golang.org/grpc/internal/grpcutil/regex.go b/vendor/google.golang.org/grpc/internal/grpcutil/regex.go index 2810a8ba2fd..7a092b2b804 100644 --- a/vendor/google.golang.org/grpc/internal/grpcutil/regex.go +++ b/vendor/google.golang.org/grpc/internal/grpcutil/regex.go @@ -20,9 +20,12 @@ package grpcutil import "regexp" -// FullMatchWithRegex returns whether the full string matches the regex provided. -func FullMatchWithRegex(re *regexp.Regexp, string string) bool { +// FullMatchWithRegex returns whether the full text matches the regex provided. 
+func FullMatchWithRegex(re *regexp.Regexp, text string) bool { + if len(text) == 0 { + return re.MatchString(text) + } re.Longest() - rem := re.FindString(string) - return len(rem) == len(string) + rem := re.FindString(text) + return len(rem) == len(text) } diff --git a/vendor/google.golang.org/grpc/regenerate.sh b/vendor/google.golang.org/grpc/regenerate.sh index a0a71aae968..58c802f8aec 100644 --- a/vendor/google.golang.org/grpc/regenerate.sh +++ b/vendor/google.golang.org/grpc/regenerate.sh @@ -76,7 +76,21 @@ SOURCES=( # These options of the form 'Mfoo.proto=bar' instruct the codegen to use an # import path of 'bar' in the generated code when 'foo.proto' is imported in # one of the sources. -OPTS=Mgrpc/service_config/service_config.proto=/internal/proto/grpc_service_config,Mgrpc/core/stats.proto=google.golang.org/grpc/interop/grpc_testing/core +# +# Note that the protos listed here are all for testing purposes. All protos to +# be used externally should have a go_package option (and they don't need to be +# listed here). 
+OPTS=Mgrpc/service_config/service_config.proto=/internal/proto/grpc_service_config,\ +Mgrpc/core/stats.proto=google.golang.org/grpc/interop/grpc_testing/core,\ +Mgrpc/testing/benchmark_service.proto=google.golang.org/grpc/interop/grpc_testing,\ +Mgrpc/testing/stats.proto=google.golang.org/grpc/interop/grpc_testing,\ +Mgrpc/testing/report_qps_scenario_service.proto=google.golang.org/grpc/interop/grpc_testing,\ +Mgrpc/testing/messages.proto=google.golang.org/grpc/interop/grpc_testing,\ +Mgrpc/testing/worker_service.proto=google.golang.org/grpc/interop/grpc_testing,\ +Mgrpc/testing/control.proto=google.golang.org/grpc/interop/grpc_testing,\ +Mgrpc/testing/test.proto=google.golang.org/grpc/interop/grpc_testing,\ +Mgrpc/testing/payloads.proto=google.golang.org/grpc/interop/grpc_testing,\ +Mgrpc/testing/empty.proto=google.golang.org/grpc/interop/grpc_testing for src in ${SOURCES[@]}; do echo "protoc ${src}" @@ -85,7 +99,6 @@ for src in ${SOURCES[@]}; do -I${WORKDIR}/grpc-proto \ -I${WORKDIR}/googleapis \ -I${WORKDIR}/protobuf/src \ - -I${WORKDIR}/istio \ ${src} done @@ -96,7 +109,6 @@ for src in ${LEGACY_SOURCES[@]}; do -I${WORKDIR}/grpc-proto \ -I${WORKDIR}/googleapis \ -I${WORKDIR}/protobuf/src \ - -I${WORKDIR}/istio \ ${src} done diff --git a/vendor/google.golang.org/grpc/version.go b/vendor/google.golang.org/grpc/version.go index 8ef0958797f..9d3fd73da94 100644 --- a/vendor/google.golang.org/grpc/version.go +++ b/vendor/google.golang.org/grpc/version.go @@ -19,4 +19,4 @@ package grpc // Version is the current grpc version. 
-const Version = "1.43.0" +const Version = "1.44.1-dev" diff --git a/vendor/modules.txt b/vendor/modules.txt index b75390cf1be..1d7f162e47a 100644 --- a/vendor/modules.txt +++ b/vendor/modules.txt @@ -146,7 +146,7 @@ github.com/cloudevents/sdk-go/v2/event/datacodec/xml github.com/cloudevents/sdk-go/v2/protocol github.com/cloudevents/sdk-go/v2/protocol/http github.com/cloudevents/sdk-go/v2/types -# github.com/containerd/containerd v1.5.8 +# github.com/containerd/containerd v1.5.9 ## explicit github.com/containerd/containerd/errdefs github.com/containerd/containerd/log @@ -221,7 +221,7 @@ github.com/google/go-cmp/cmp/internal/diff github.com/google/go-cmp/cmp/internal/flags github.com/google/go-cmp/cmp/internal/function github.com/google/go-cmp/cmp/internal/value -# github.com/google/go-containerregistry v0.8.1-0.20220110151055-a61fd0a8e2bb +# github.com/google/go-containerregistry v0.8.1-0.20220211173031-41f8d92709b7 ## explicit github.com/google/go-containerregistry/internal/and github.com/google/go-containerregistry/internal/estargz @@ -296,7 +296,7 @@ github.com/josharian/intern github.com/json-iterator/go # github.com/kelseyhightower/envconfig v1.4.0 github.com/kelseyhightower/envconfig -# github.com/klauspost/compress v1.13.6 +# github.com/klauspost/compress v1.14.2 github.com/klauspost/compress github.com/klauspost/compress/fse github.com/klauspost/compress/huff0 @@ -322,7 +322,7 @@ github.com/modern-go/concurrent github.com/modern-go/reflect2 # github.com/opencontainers/go-digest v1.0.0 github.com/opencontainers/go-digest -# github.com/opencontainers/image-spec v1.0.3-0.20211202222133-eacdcc10569b +# github.com/opencontainers/image-spec v1.0.3-0.20220114050600-8b9d41f48198 ## explicit github.com/opencontainers/image-spec/specs-go github.com/opencontainers/image-spec/specs-go/v1 @@ -443,8 +443,7 @@ golang.org/x/crypto/pkcs12/internal/rc2 golang.org/x/mod/internal/lazyregexp golang.org/x/mod/module golang.org/x/mod/semver -# golang.org/x/net 
v0.0.0-20220114011407-0dd24b26b47d -## explicit +# golang.org/x/net v0.0.0-20220127074510-2fabfed7e28f golang.org/x/net/context golang.org/x/net/context/ctxhttp golang.org/x/net/http/httpguts @@ -473,7 +472,6 @@ golang.org/x/sys/plan9 golang.org/x/sys/unix golang.org/x/sys/windows # golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 -## explicit golang.org/x/term # golang.org/x/text v0.3.7 golang.org/x/text/secure/bidirule @@ -515,11 +513,11 @@ google.golang.org/appengine/internal/modules google.golang.org/appengine/internal/remote_api google.golang.org/appengine/internal/urlfetch google.golang.org/appengine/urlfetch -# google.golang.org/genproto v0.0.0-20220111164026-67b88f271998 +# google.golang.org/genproto v0.0.0-20220126215142-9970aeb2e350 google.golang.org/genproto/googleapis/api/httpbody google.golang.org/genproto/googleapis/rpc/status google.golang.org/genproto/protobuf/field_mask -# google.golang.org/grpc v1.43.0 +# google.golang.org/grpc v1.44.0 ## explicit google.golang.org/grpc google.golang.org/grpc/attributes