From 1cf8c1db5a292f8b3dc7650b3adc3ba6aa5bc9b9 Mon Sep 17 00:00:00 2001 From: Sam DeLuca Date: Tue, 7 May 2019 15:46:30 -0500 Subject: [PATCH] Rc to master (#25) * wip * wip * initial working version of gcs artifact storage * addressing pr feedback * updating codegen * wip * fixing issue with workflow saving * check to see if stat result is nil * adding a jenkinsfile * a small change which will hopefully speed up jenkins builds a lot * cleanup of docker push logic * cleanup of docker push logic * cleanup of docker push logic * cleanup of docker push logic * cleanup of docker push logic * changing the import path * preserving original link in a readme * use semantic version tagging (#9) * [CSE-11] adding config file loader (#10) * adding configmap loader * PR #10 should have been a minor version not a patch (#11) * adding autodeploy to jenkinsfile (#12) * fixing autodeployments (#13) * [CSE-13] extended error handling for workflows (#16) * wip * mixed case imports cause all sorts of problems, switch to lowercase * fixing build issu * fixing error deserialization * fixing error deserialization * unmatched string logic * make workflows fail on error trigger * properly evaluate workflow failures * dev version bump * serialize errors and warnings into wf crd * ugh go types * rewriting error handling to support file sources * temporarily commenting out a test * fixing warning handler * fixing error handling fixing executor fixing executor Fixing executor fixing executor fixing executor fixing executor asdf fixing executor fixing operator operator fixing executor cleanup * cleaning up types * updating codegen * fixing version * updating codegen * add podname and stage name to error result * version 2.5.0->2.4.0 * ErrorCondition->ExceptionCondition * codegen * [CS-14] merging UI update into rc (#18) * update node * UI tweaks * fixing a comment * versionbump * fixing build errors * [CSE-57] Upgrade argo (#24) * Updated ARTIFACT_REPO.md (#1049) * Updated examples/README.md (#1051) * Support for K8s API based Executor (#1010) * Submodules are dirty after checkout -- need to update (#1052) * Parameter and Argument names should support snake case (#1048) * Add namespace explicitly to pod metadata (#1059) * Update dependencies to K8s v1.12 and client-go 9.0 * Adding SAP Hybris in Who uses Argo (#1064) * Add Cratejoy to list of users (#1063) * Raise not implemented error when artifact saving is unsupported (#1062) * Adding native GCS support for artifact storage and retrieval * Support nested steps workflow parallelism (#1046) * Auto-complete workflow names (#1061) * Auto-complete workflow names * Use cobra revision at fe5e611709b0c57fa4a89136deaa8e1d4004d053 * Fix string format arguments in workflow utilities. (#1070) * fix #1078 Azure AKS authentication issues (#1079) * Issue #740 - System level workflow parallelism limits & priorities (#1065) * Issue #740 - System level workflow parallelism limits & priorities * Apply reviewer notes * Add new article and minor edits. (#1083) * Update docs to outline bare minimum set of privileges for a workflow * Use relative links on README file (#1087) * Fix typo in demo.md (#1089) Fix a small typo in demo.md that I encounted when reading through the getting started guide. * Drop reference to removed `argo install` command. (#1074) * Initialize child node before marking phase. 
Fixes panic on invalid `When` (#1075) * #1081 added retry logic to s3 load and save function (#1082) * adding logo to be used by the OS Site (#1099) * Update ROADMAP.md * Update docs with examples using the K8s REST API * Issue #1114 - Set FORCE_NAMESPACE_ISOLATION env variable in namespace install manifests (#1116) * Fix examples docs of parameters. (#1110) * Remove docker_lib mount volume which is not needed anymore (#1115) * Remove docker_lib mount volume which is not needed anymore * Remove unused hostPathDir * add support for ppc64le and s390x (#1102) * Install mime-support in argoexec to set proper mime types for S3 artifacts (resolves #1119) * Adding Quantibio in Who uses Argo (#1111) * Adding Quantibio in Who uses Argo * fix spelling mistake * Fix output artifact and parameter conflict (#1125) `SaveArtifacts` deletes the files that `SaveParameters` might still need, so we're calling `SaveParameters` first. Fixes https://github.com/argoproj/argo/issues/1124 * Update generated swagger to fix verify-codegen (#1131) * Allow owner reference to be set in submit util (#1120) * Issue #1104 - Remove container wait timeout from 'argo logs --follow' (#1142) * Issue #1132 - Fix panic in ttl controller (#1143) * Issue #1040 - Kill daemoned step if workflow consist of single daemoned step (#1144) * Fix global artifact overwriting in nested workflow (#1086) * Fix issue where steps with exhausted retires would not complete (#1148) * add support for other archs (#1137) * Reflect minio chart changes in documentation (#1147) * Issue #1136 - Fix metadata for DAG with loops (#1149) * Issue #1136 - Fix metadata for DAG with loops * Add slack badge to README (#1164) * Fix failing TestAddGlobalArtifactToScope unit test * Fix tests compilation error (#1157) * Replace exponential retry with poll (#1166) * add support for hostNetwork & dnsPolicy config (#1161) * Support HDFS Artifact (#1159) Support HDFS Artifact (#1159) * Update codegen for network config (#1168) * Add GitHub to users in README.md (#1151) * Add Preferred Networks to users in README.md (#1172) * Add missing patch in namespace kustomization.yaml (#1170) * Validate ArchiveLocation artifacts (#1167) * Update README and preview notice in CLA. * Update README. (#1173) (#1176) * Argo users: Equinor (#1175) * Do not mount unnecessary docker socket (#1178) * Issue #1113 - Wait for daemon pods completion to handle annotations (#1177) * Issue #1113 - Wait for daemon pods completion to handle annotations * Add output artifacts to influxdb-ci example * Increased S3 artifact retry time and added log (#1138) * Issue #1123 - Fix 'kubectl get' failure if resource namespace is different from workflow namespace (#1171) * Refactor Makefile/Dockerfile to remove volume binding in favor of build stages (#1189) * Add Docker Hub build hooks * Add documentation how to use parameter-file's (#1191) * Issue #988 - Submit should not print logs to stdout unless output is 'wide' (#1192) * Fix missing docker binary in argoexec image. Improve reuse of image layers * Fischerjulian adds ruby to rest docs (#1196) * Adds link to ruby kubernetes library. * Links to a ruby example on how to start a workflow * Updated OWNERS (#1198) * Update community/README (#1197) * Issue #1128 - Use polling instead of fs notify to get annotation changes (#1194) * Minor spelling, formatting, and style updates. 
(#1193) * Dockerfile: argoexec base image correction (fixes #1209) (#1213) * Set executor image pull policy for resource template (#1174) * Add schedulerName to workflow and template spec (#1184) * Issue #1190 - Fix incorrect retry node handling (#1208) * fix dag retries (#1221) * Executor can access the k8s apiserver with an out-of-cluster config file (#1134) Executor can access the k8s apiserver with an out-of-cluster config file * Update README with typo fixes (#1220) * Update README.md (#1236) * Remove extra quotes around output parameter value (#1232) Ensure we do not insert extra single quotes when using valueFrom: jsonPath to set the value of an output parameter for resource templates. Signed-off-by: Ilias Katsakioris * Update README.md (#1224) * Include stderr when retrieving docker logs (#1225) * Add Gardener to "Who uses Argo" (#1228) * Add feature to continue workflow on failed/error steps/tasks (#1205) * Fix the Prometheus address references (#1237) * Fixed Issue#1223 Kubernetes Resource action: patch is not supported (#1245) * Fixed Issue#1223 Kubernetes Resource action: patch is not supported This PR fixes Issue#1223 reported by @shanesiebken. Argo Kubernetes resource workflows failed on the patch action. The --patch (-p) option is required for the kubectl patch action, so this PR passes the manifest YAML as the patch argument to kubectl. This fix supports the patch action in Argo Kubernetes resource workflows, but only with the JSON merge strategy (a sketch appears further below). * updated formatting * typo, executo -> executor (#1243) * Issue#1165 fake outputs don't notify and task completes successfully (#1247) * Issue#1165 fake outputs don't notify and task completes successfully This PR addresses Issue#1165 reported by @alexfrieden. Issue/Bug: Argo finishes the task successfully even if the artifact/file does not exist. Fix: validate that the created gzip contains the artifact or file; if the file/artifact doesn't exist, the current step/stage/task is failed with a log message. Sample log: ''' INFO[0029] Updating node artifact-passing-lkvj8[0].generate-artifact (artifact-passing-lkvj8-1949982165) status Running -> Error INFO[0029] Updating node artifact-passing-lkvj8[0].generate-artifact (artifact-passing-lkvj8-1949982165) message: failed to save outputs: File or Artifact does not exist. /tmp/hello_world.txt INFO[0029] Step group node artifact-passing-lkvj8[0] (artifact-passing-lkvj8-1067333159) deemed failed: child 'artifact-passing-lkvj8-1949982165' failed namespace=default workflow=artifact-passing-lkvj8 INFO[0029] node artifact-passing-lkvj8[0] (artifact-passing-lkvj8-1067333159) phase Running -> Failed namespace=default workflow=artifact-passing-lkvj8 ''' * fixed gometalinter errcheck issue * Git cloning via SSH was not verifying host public key (#1261) * Update versions (#1218) * Proxy Priority and PriorityClassName to pods (#1179) * Error running 1000s of tasks: "etcdserver: request is too large" #1186 (#1264) * Error running 1000s of tasks: "etcdserver: request is too large" #1186 This PR addresses feature request #1186. Issue: the node status element keeps growing for big workflows, and the workflow fails once its total size reaches 1 MB (the max size limit in etcd). Solution: compress the node status once its size reaches 1 MB, which allows 60% to 80% more steps to execute in compressed mode. Latest: the Argo CLI and Argo UI are able to decode and print node status from the compressed node.
Limitation: kubectl will not decode the compressedNode element * added Operator.go * revert the testing yaml * Fixed the lint issue * fixed * fixed lint * Fixed Testcase * incorporated the review comments * Reverted the change * incorporated review comments * fixing gometalinter checks * incorporated review comments * Update pod-limits.yaml * updated few comments * updated error message format * reverted unwanted files * Reduce redundancy pod label action (#1271) * Add the `mergeStrategy` option to resource patching (#1269) * This adds the ability to pass a mergeStrategy to a patch resource. This is valuable because the default merge strategy for Kubernetes is 'strategic', which does not work with Custom Resources. * This also updates the resource example to demonstrate how it is used (a similar sketch appears below) * Fix bug with DockerExecutor's CopyFile (#1275) The check to see if the source path was in the tgz archive was wrong when the source path was a folder; the arguments to strings.Contains were inverted. * Add workflow labels and annotations global vars (#1280) * Argo CI is currently inactive (#1285) * Issue#896 Workflow steps with non-existent output artifact path will succeed (#1277) * Issue#896 Workflow steps with non-existent output artifact path will succeed Issue: https://github.com/argoproj/argo/issues/897 Solution: added a new element "optional" in Artifact. The default is false. This flag makes the artifact optional, and the existence check is skipped if the input/output artifact has optional=true. Output artifact (optional=true): the artifact existence check is ignored while saving the artifact to its destination and the workflow continues. Input artifact (optional=true): the artifact existence check is ignored while loading the artifact from its source and the workflow continues (see the sketch below). * added end of line * removed unwanted whitespace * Deleted test code * go formatted * added formatting directives * updated Codegen * Fixed format on merge conflict * format fix * updated comments * improved error case * Fix for Resource creation where template has same parameter templating (#1283) * Fix for Resource creation where template has same parameter templating This PR enables support for custom template variable references. Solution: workflow variable reference resolution checks the workflow variable prefix. * added test * fixed gofmt issue * fixed format * fixed gofmt on common.go * fixed testcase * fixed gofmt * Added unit testcase and documented * fixed gofmt format * updated comments * Admiralty: add link to blog post, add user (#1295) * Add dns config support (#1301) * Speed up podReconciliation using parallel goroutine (#1286) * Speed up podReconciliation using parallel goroutine * Fix make lint issue * put checkandcompress back * Add community meeting notes link (#1304) * Add Karius to users in README.md (#1305) * Added support for artifact path references (#1300) * Added support for artifact path references Adds new `{{inputs.artifacts..path}}` and `{{outputs.artifacts..path}}` placeholders.
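The artifact path placeholder entry above lends itself to a short illustration. This is a minimal sketch only, assuming the placeholder is keyed by the artifact's name; the template name, artifact name, URL, and image are illustrative and not taken from this patch:

```yaml
# Hypothetical template using the artifact path placeholders from #1300.
# The input artifact is placed at /tmp/input.txt; the command refers to
# that location via {{inputs.artifacts.data.path}} instead of repeating it.
- name: print-data
  inputs:
    artifacts:
    - name: data
      path: /tmp/input.txt
      http:
        url: "https://example.com/input.txt"   # illustrative source
  container:
    image: alpine:3.7
    command: [sh, -c]
    args: ["cat {{inputs.artifacts.data.path}}"]
```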
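For the `optional` artifact flag described in the Issue#896 entry, a minimal sketch under the assumption that the flag is set per artifact on a template's outputs (the step name, image, and paths are illustrative):

```yaml
# Hypothetical step whose output artifact may legitimately be absent.
# With optional: true, saving the missing /tmp/maybe-output.txt no longer
# fails the step; with the default (optional: false) it would.
- name: maybe-produce
  container:
    image: alpine:3.7
    command: [sh, -c]
    args: ["echo 'this run produced no artifact'"]   # the file is never written
  outputs:
    artifacts:
    - name: maybe-output
      path: /tmp/maybe-output.txt
      optional: true
```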
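The patch-action and `mergeStrategy` entries above can be combined into one hedged sketch of a resource template; the CronTab kind and the field values are illustrative stand-ins for any Custom Resource:

```yaml
# Hypothetical resource template that patches an existing custom resource.
# mergeStrategy: merge requests a JSON merge patch, because the default
# 'strategic' merge strategy does not work with Custom Resources.
- name: patch-crontab
  resource:
    action: patch
    mergeStrategy: merge
    manifest: |
      apiVersion: stable.example.com/v1
      kind: CronTab
      metadata:
        name: my-crontab
      spec:
        cronSpec: "*/5 * * * *"
```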
* Add support for init containers (#1183) * Secrets should be passed to pods using volumes instead of API calls (#1302) * Secrets should be passed to pods using downward API instead of API calls * Fixed Gogfmt format * fixed file close Gofmt * updated review comments * fixed gofmt * updated review comments * CheckandEstimate implementation to optimize podReconciliation (#1308) * CheckandEstimate implementation * fixed variable rename * fixed gofmt * fixed feedbacks * Update operator.go * Update operator.go * Add alibaba cloud to officially using argo list (#1313) * Refactor checkandEstimate to optimize podReconciliation (#1311) * Refactor checkandEstimate to optimize podReconciliation * Move compress function to persistUpdates * Fix formatting issues in examples documentation (#1310) * Fix nil pointer dereference with secret volumes (#1314) * Archive location should conditionally be added to template only when needed * Fix SIGSEGV in watch/CheckAndDecompress. Consolidate duplicate code (resolves #1315) * Implement support for PNS (Process Namespace Sharing) executor (#1214) * Implements PNS (Process Namespace Sharing) executor * Adds limited support for Kubelet/K8s API artifact collection by mirroring volume mounts to wait sidecar * Adds validation to detect when output artifacts are not supported by the executor * Adds ability to customize executor from workflow-controller-configmap (e.g. add environment variables, append command line args such as loglevel) * Fixes an issue where daemon steps were not getting terminated properly * Reorganize manifests to kustomize 2 and update version to v2.3.0-rc1 * Update v2.3.0 CHANGELOG.md * Export the methods of `KubernetesClientInterface` (#1294) All calls to these methods previously generated a panic at runtime because the calls resolved to the default, panic-always implementation, not to the overrides provided by `k8sAPIClient` and `kubeletClient`. Embedding an exported interface with unexported methods into a struct is the only way to implement that interface in another package. When doing this, the compiler generates default, panic-always implementations for all methods from the interface. Implementors can override exported methods, but it's not possible to override an unexported method from the interface. All invocations that go through the interface will come to the default implementation, even if the struct tries to provide an override. * Update README.md (#1321) * Issue1316 Pod creation with secret volumemount (#1318) * CheckandEstimate implementation * fixed variable rename * fixed gofmt * fixed feedbacks * Fixed the duplicate mountpath issue * Support parameter substitution in the volumes attribute (#1238) * `argo list` was not displaying non-zero priorities correctly * Fix regression where argoexec wait would not return when podname was too long * wait will conditionally become privileged if main/sidecar privileged (resolves #1323) * Update version to v2.3.0-rc2. Update changelog * Add documentation on releasing * use a secret selector for getting credentials * fixing build issues * linter issues * fixing jenkinsfile(?) * jenkins * jenkins * jenkins * jenkins * jenkins? 
* jenkins :( * jenkins :( * jenkins * jenkins * jenkins * jenkins * gopkg * use GetSecretFromVolMount instead of GetSecrets * actually build argoexec * Fix #1340 parameter substitution bug Signed-off-by: Ilias Katsakioris * fixing gcs upload method * disable autodeploy --- .argo-ci/ci.yaml | 22 +- .dockerignore | 8 +- ARTIFACT_REPO.md | 4 +- Branding Assets.md | 2 +- CHANGELOG.md | 89 ++ CONTRIBUTING.md | 25 +- Dockerfile | 99 ++ Dockerfile-argoexec | 16 - Dockerfile-builder | 32 - Dockerfile-ci-builder | 12 - Dockerfile-cli | 4 - Dockerfile-workflow-controller | 5 - Dockerfile.argoexec-dev | 5 + Dockerfile.workflow-controller-dev | 6 + Gopkg.lock | 801 +++++------ Gopkg.toml | 15 + Jenkinsfile | 95 ++ Makefile | 153 ++- OWNERS | 3 + README.md | 53 +- ROADMAP.md | 23 +- VERSION | 2 +- api/openapi-spec/swagger.json | 541 ++++++-- cmd/argo/commands/common.go | 12 +- cmd/argo/commands/delete.go | 2 +- cmd/argo/commands/get.go | 32 +- cmd/argo/commands/lint.go | 4 +- cmd/argo/commands/list.go | 20 +- cmd/argo/commands/logs.go | 161 ++- cmd/argo/commands/resubmit.go | 2 +- cmd/argo/commands/resume.go | 2 +- cmd/argo/commands/retry.go | 2 +- cmd/argo/commands/root.go | 2 +- cmd/argo/commands/submit.go | 10 +- cmd/argo/commands/suspend.go | 2 +- cmd/argo/commands/terminate.go | 2 +- cmd/argo/commands/wait.go | 2 +- cmd/argo/commands/watch.go | 5 +- cmd/argo/main.go | 2 +- cmd/argoexec/commands/init.go | 25 +- cmd/argoexec/commands/resource.go | 58 +- cmd/argoexec/commands/root.go | 119 +- cmd/argoexec/commands/wait.go | 55 +- cmd/argoexec/main.go | 4 +- cmd/workflow-controller/main.go | 27 +- community/Argo Individual CLA.pdf | Bin 65876 -> 60326 bytes community/README.md | 17 +- demo.md | 43 +- docs/README.md | 9 + docs/example-golang/main.go | 77 ++ docs/releasing.md | 38 + docs/rest-api.md | 40 + docs/variables.md | 5 + docs/workflow-controller-configmap.yaml | 56 +- errors/errors.go | 2 +- errors/errors_test.go | 2 +- examples/README.md | 361 +++-- examples/artifact-disable-archive.yaml | 51 + examples/artifact-passing.yaml | 2 +- examples/artifact-path-placeholders.yaml | 40 + examples/ci-output-artifact.yaml | 7 +- examples/continue-on-fail.yaml | 36 + examples/dag-continue-on-fail.yaml | 44 + examples/dns-config.yaml | 22 + examples/extended-errors.yaml | 40 + examples/global-outputs.yaml | 2 +- examples/hdfs-artifact.yaml | 81 ++ examples/influxdb-ci.yaml | 4 + examples/init-container.yaml | 22 + examples/input-artifact-git.yaml | 4 +- examples/output-parameter.yaml | 2 +- examples/parameter-aggregation-dag.yaml | 1 + examples/parameter-aggregation.yaml | 1 + examples/sidecar-dind.yaml | 2 +- gometalinter.json | 3 +- hack/gen-openapi-spec/main.go | 6 +- hack/ssh_known_hosts | 8 + hack/update-codegen.sh | 2 +- hack/update-manifests.sh | 25 +- hack/update-openapigen.sh | 4 +- hack/update-ssh-known-hosts.sh | 24 + hooks/README.md | 16 + hooks/build | 7 + hooks/push | 7 + .../argo-ui-deployment.yaml} | 4 +- .../argo-ui-sa.yaml} | 0 .../argo-ui-service.yaml} | 0 manifests/base/argo-ui/kustomization.yaml | 7 + manifests/base/crds/kustomization.yaml | 5 + .../workflow-crd.yaml} | 0 manifests/base/kustomization.yaml | 15 + .../workflow-controller/kustomization.yaml | 7 + .../workflow-controller-configmap.yaml} | 0 .../workflow-controller-deployment.yaml} | 6 +- .../workflow-controller-sa.yaml} | 0 .../argo-ui-rbac/argo-ui-clusterrole.yaml} | 0 .../argo-ui-clusterrolebinding.yaml} | 0 .../argo-ui-rbac/kustomization.yaml | 6 + manifests/cluster-install/kustomization.yaml | 20 +- 
.../kustomization.yaml | 7 + .../workflow-aggregate-roles.yaml} | 0 .../workflow-controller-clusterrole.yaml} | 0 ...rkflow-controller-clusterrolebinding.yaml} | 0 manifests/install.yaml | 20 +- manifests/namespace-install.yaml | 16 +- .../argo-ui-role.yaml} | 0 .../argo-ui-rolebinding.yaml} | 0 .../argo-ui-rbac/kustomization.yaml | 6 + .../namespace-install/kustomization.yaml | 23 +- .../overlays/argo-ui-deployment.yaml | 12 + .../workflow-controller-configmap.yaml} | 0 .../kustomization.yaml | 6 + .../workflow-controller-role.yaml} | 0 .../workflow-controller-rolebinding.yaml} | 0 os-project-logo.svg | 140 ++ .../workflow/v1alpha1/openapi_generated.go | 1173 ++++++++++++----- pkg/apis/workflow/v1alpha1/register.go | 2 +- pkg/apis/workflow/v1alpha1/types.go | 256 +++- .../v1alpha1/zz_generated.deepcopy.go | 277 +++- pkg/client/clientset/versioned/clientset.go | 4 +- .../versioned/fake/clientset_generated.go | 6 +- .../clientset/versioned/fake/register.go | 2 +- .../clientset/versioned/scheme/register.go | 2 +- .../workflow/v1alpha1/fake/fake_workflow.go | 2 +- .../v1alpha1/fake/fake_workflow_client.go | 2 +- .../typed/workflow/v1alpha1/workflow.go | 4 +- .../workflow/v1alpha1/workflow_client.go | 4 +- .../informers/externalversions/factory.go | 6 +- .../informers/externalversions/generic.go | 2 +- .../internalinterfaces/factory_interfaces.go | 2 +- .../externalversions/workflow/interface.go | 4 +- .../workflow/v1alpha1/interface.go | 2 +- .../workflow/v1alpha1/workflow.go | 8 +- .../listers/workflow/v1alpha1/workflow.go | 2 +- .../expectedfailures/disallow-unknown.json | 25 - test/e2e/expectedfailures/failed-retries.yaml | 30 + .../input-artifact-not-optional.yaml | 22 + .../output-artifact-not-optional.yaml | 24 + .../pns/pns-output-artifacts.yaml | 39 + .../pns/pns-quick-exit-output-art.yaml | 30 + .../functional/artifact-disable-archive.yaml | 50 +- test/e2e/functional/continue-on-fail.yaml | 1 + .../functional/custom_template_variable.yaml | 32 + test/e2e/functional/dag-argument-passing.yaml | 4 +- test/e2e/functional/git-clone-test.yaml | 2 +- test/e2e/functional/global-outputs-dag.yaml | 2 +- .../functional/global-outputs-variable.yaml | 2 +- test/e2e/functional/init-container.yaml | 1 + .../functional/input-artifact-optional.yaml | 22 + ...g-outputs.yaml => nested-dag-outputs.yaml} | 1 + .../functional/output-artifact-optional.yaml | 24 + .../output-input-artifact-optional.yaml | 40 + .../output-param-different-uid.yaml | 27 + test/e2e/functional/pns-output-params.yaml | 71 + test/e2e/functional/retry-with-artifacts.yaml | 2 +- test/e2e/lintfail/disallow-unknown.yaml | 15 + .../invalid-spec.yaml | 0 .../malformed-spec.yaml} | 0 test/e2e/ui/ui-dag-with-params.yaml | 57 +- test/e2e/ui/ui-nested-steps.yaml | 12 +- test/e2e/wait_test.go | 2 +- test/e2e/workflow_test.go | 2 +- test/test.go | 2 +- ui/README.md | 2 +- util/archive/archive.go | 131 ++ util/archive/archive_test.go | 60 + util/cmd/cmd.go | 2 +- util/file/fileutil.go | 87 ++ util/file/fileutil_test.go | 121 ++ util/retry/retry.go | 2 +- workflow/artifacts/artifactory/artifactory.go | 4 +- .../artifacts/artifactory/artifactory_test.go | 4 +- workflow/artifacts/artifacts.go | 2 +- workflow/artifacts/gcs/gcs.go | 130 ++ workflow/artifacts/git/git.go | 29 +- workflow/artifacts/hdfs/hdfs.go | 217 +++ workflow/artifacts/hdfs/util.go | 53 + workflow/artifacts/http/http.go | 6 +- workflow/artifacts/raw/raw.go | 4 +- workflow/artifacts/raw/raw_test.go | 4 +- workflow/artifacts/s3/s3.go | 31 +- workflow/common/common.go | 47 +- 
workflow/common/util.go | 180 ++- workflow/controller/config.go | 106 +- workflow/controller/controller.go | 42 +- workflow/controller/controller_test.go | 4 +- workflow/controller/dag.go | 67 +- workflow/controller/dag_test.go | 4 +- workflow/controller/exec_control.go | 28 +- workflow/controller/operator.go | 319 ++++- workflow/controller/operator_test.go | 149 ++- workflow/controller/scope.go | 4 +- workflow/controller/steps.go | 34 +- workflow/controller/steps_test.go | 18 + workflow/controller/suspend.go | 2 +- .../testdata/steps-failed-retries.yaml | 153 +++ workflow/controller/workflowpod.go | 606 ++++++--- workflow/controller/workflowpod_test.go | 340 ++++- workflow/executor/common/common.go | 20 +- workflow/executor/docker/docker.go | 74 +- workflow/executor/executor.go | 687 +++++++--- workflow/executor/executor_test.go | 4 +- workflow/executor/k8sapi/client.go | 26 +- workflow/executor/k8sapi/k8sapi.go | 20 +- workflow/executor/kubelet/client.go | 102 +- workflow/executor/kubelet/kubelet.go | 18 +- .../mocks/ContainerRuntimeExecutor.go | 44 +- workflow/executor/pns/pns.go | 385 ++++++ workflow/executor/resource.go | 87 +- workflow/metrics/collector.go | 20 +- workflow/metrics/server.go | 5 +- workflow/ttlcontroller/ttlcontroller.go | 15 +- workflow/ttlcontroller/ttlcontroller_test.go | 6 +- workflow/util/util.go | 64 +- workflow/util/util_test.go | 2 +- workflow/validate/lint.go | 11 +- workflow/validate/validate.go | 134 +- workflow/validate/validate_test.go | 238 +++- 218 files changed, 8846 insertions(+), 2387 deletions(-) create mode 100644 Dockerfile delete mode 100644 Dockerfile-argoexec delete mode 100644 Dockerfile-builder delete mode 100644 Dockerfile-ci-builder delete mode 100644 Dockerfile-cli delete mode 100644 Dockerfile-workflow-controller create mode 100644 Dockerfile.argoexec-dev create mode 100644 Dockerfile.workflow-controller-dev create mode 100644 Jenkinsfile create mode 100644 docs/README.md create mode 100644 docs/example-golang/main.go create mode 100644 docs/releasing.md create mode 100644 docs/rest-api.md create mode 100644 examples/artifact-disable-archive.yaml create mode 100644 examples/artifact-path-placeholders.yaml create mode 100644 examples/continue-on-fail.yaml create mode 100644 examples/dag-continue-on-fail.yaml create mode 100644 examples/dns-config.yaml create mode 100644 examples/extended-errors.yaml create mode 100644 examples/hdfs-artifact.yaml create mode 100644 examples/init-container.yaml create mode 100644 hack/ssh_known_hosts create mode 100755 hack/update-ssh-known-hosts.sh create mode 100644 hooks/README.md create mode 100755 hooks/build create mode 100755 hooks/push rename manifests/base/{03d_argo-ui-deployment.yaml => argo-ui/argo-ui-deployment.yaml} (89%) rename manifests/base/{03a_argo-ui-sa.yaml => argo-ui/argo-ui-sa.yaml} (100%) rename manifests/base/{03e_argo-ui-service.yaml => argo-ui/argo-ui-service.yaml} (100%) create mode 100644 manifests/base/argo-ui/kustomization.yaml create mode 100644 manifests/base/crds/kustomization.yaml rename manifests/base/{01a_workflow-crd.yaml => crds/workflow-crd.yaml} (100%) create mode 100644 manifests/base/kustomization.yaml create mode 100644 manifests/base/workflow-controller/kustomization.yaml rename manifests/base/{02d_workflow-controller-configmap.yaml => workflow-controller/workflow-controller-configmap.yaml} (100%) rename manifests/base/{02e_workflow-controller-deployment.yaml => workflow-controller/workflow-controller-deployment.yaml} (79%) rename 
manifests/base/{02a_workflow-controller-sa.yaml => workflow-controller/workflow-controller-sa.yaml} (100%) rename manifests/{base/03b_argo-ui-clusterrole.yaml => cluster-install/argo-ui-rbac/argo-ui-clusterrole.yaml} (100%) rename manifests/{base/03c_argo-ui-clusterrolebinding.yaml => cluster-install/argo-ui-rbac/argo-ui-clusterrolebinding.yaml} (100%) create mode 100644 manifests/cluster-install/argo-ui-rbac/kustomization.yaml create mode 100644 manifests/cluster-install/workflow-controller-rbac/kustomization.yaml rename manifests/{base/01b_workflow-aggregate-roles.yaml => cluster-install/workflow-controller-rbac/workflow-aggregate-roles.yaml} (100%) rename manifests/{base/02b_workflow-controller-clusterrole.yaml => cluster-install/workflow-controller-rbac/workflow-controller-clusterrole.yaml} (100%) rename manifests/{base/02c_workflow-controller-clusterrolebinding.yaml => cluster-install/workflow-controller-rbac/workflow-controller-clusterrolebinding.yaml} (100%) rename manifests/namespace-install/{03b_argo-ui-role.yaml => argo-ui-rbac/argo-ui-role.yaml} (100%) rename manifests/namespace-install/{03c_argo-ui-rolebinding.yaml => argo-ui-rbac/argo-ui-rolebinding.yaml} (100%) create mode 100644 manifests/namespace-install/argo-ui-rbac/kustomization.yaml create mode 100644 manifests/namespace-install/overlays/argo-ui-deployment.yaml rename manifests/namespace-install/{02d_workflow-controller-configmap.yaml => overlays/workflow-controller-configmap.yaml} (100%) create mode 100644 manifests/namespace-install/workflow-controller-rbac/kustomization.yaml rename manifests/namespace-install/{02b_workflow-controller-role.yaml => workflow-controller-rbac/workflow-controller-role.yaml} (100%) rename manifests/namespace-install/{02c_workflow-controller-rolebinding.yaml => workflow-controller-rbac/workflow-controller-rolebinding.yaml} (100%) create mode 100644 os-project-logo.svg delete mode 100644 test/e2e/expectedfailures/disallow-unknown.json create mode 100644 test/e2e/expectedfailures/failed-retries.yaml create mode 100644 test/e2e/expectedfailures/input-artifact-not-optional.yaml create mode 100644 test/e2e/expectedfailures/output-artifact-not-optional.yaml create mode 100644 test/e2e/expectedfailures/pns/pns-output-artifacts.yaml create mode 100644 test/e2e/expectedfailures/pns/pns-quick-exit-output-art.yaml mode change 100644 => 120000 test/e2e/functional/artifact-disable-archive.yaml create mode 120000 test/e2e/functional/continue-on-fail.yaml create mode 100644 test/e2e/functional/custom_template_variable.yaml create mode 120000 test/e2e/functional/init-container.yaml create mode 100644 test/e2e/functional/input-artifact-optional.yaml rename test/e2e/functional/{dag-outputs.yaml => nested-dag-outputs.yaml} (99%) create mode 100644 test/e2e/functional/output-artifact-optional.yaml create mode 100644 test/e2e/functional/output-input-artifact-optional.yaml create mode 100644 test/e2e/functional/output-param-different-uid.yaml create mode 100644 test/e2e/functional/pns-output-params.yaml create mode 100644 test/e2e/lintfail/disallow-unknown.yaml rename test/e2e/{expectedfailures => lintfail}/invalid-spec.yaml (100%) rename test/e2e/{expectedfailures/maformed-spec.yaml => lintfail/malformed-spec.yaml} (100%) create mode 100644 util/archive/archive.go create mode 100644 util/archive/archive_test.go create mode 100644 util/file/fileutil.go create mode 100644 util/file/fileutil_test.go create mode 100644 workflow/artifacts/gcs/gcs.go create mode 100644 workflow/artifacts/hdfs/hdfs.go create mode 
100644 workflow/artifacts/hdfs/util.go create mode 100644 workflow/controller/steps_test.go create mode 100644 workflow/controller/testdata/steps-failed-retries.yaml create mode 100644 workflow/executor/pns/pns.go diff --git a/.argo-ci/ci.yaml b/.argo-ci/ci.yaml index b92020b4cd02..84443799a8b3 100644 --- a/.argo-ci/ci.yaml +++ b/.argo-ci/ci.yaml @@ -9,7 +9,7 @@ spec: - name: revision value: master - name: repo - value: https://github.com/argoproj/argo.git + value: https://github.com/cyrusbiotechnology/argo.git templates: - name: argo-ci @@ -22,16 +22,13 @@ spec: value: "{{item}}" withItems: - make controller-image executor-image - - make cli-linux - - make cli-darwin + - make release-clis - name: test template: ci-builder arguments: parameters: - name: cmd - value: "{{item}}" - withItems: - - dep ensure && make lint test verify-codegen + value: dep ensure && make lint test verify-codegen - name: ci-builder inputs: @@ -39,7 +36,7 @@ spec: - name: cmd artifacts: - name: code - path: /go/src/github.com/argoproj/argo + path: /go/src/github.com/cyrusbiotechnology/argo git: repo: "{{workflow.parameters.repo}}" revision: "{{workflow.parameters.revision}}" @@ -47,7 +44,7 @@ spec: image: argoproj/argo-ci-builder:latest command: [sh, -c] args: ["{{inputs.parameters.cmd}}"] - workingDir: /go/src/github.com/argoproj/argo + workingDir: /go/src/github.com/cyrusbiotechnology/argo - name: ci-dind inputs: @@ -55,7 +52,7 @@ spec: - name: cmd artifacts: - name: code - path: /go/src/github.com/argoproj/argo + path: /go/src/github.com/cyrusbiotechnology/argo git: repo: "{{workflow.parameters.repo}}" revision: "{{workflow.parameters.revision}}" @@ -63,14 +60,15 @@ spec: image: argoproj/argo-ci-builder:latest command: [sh, -c] args: ["until docker ps; do sleep 3; done && {{inputs.parameters.cmd}}"] - workingDir: /go/src/github.com/argoproj/argo + workingDir: /go/src/github.com/cyrusbiotechnology/argo env: - name: DOCKER_HOST value: 127.0.0.1 + - name: DOCKER_BUILDKIT + value: "1" sidecars: - name: dind - image: docker:17.10-dind + image: docker:18.09-dind securityContext: privileged: true mirrorVolumeMounts: true - diff --git a/.dockerignore b/.dockerignore index 848b59e797ff..f515f4519087 100644 --- a/.dockerignore +++ b/.dockerignore @@ -1,4 +1,4 @@ -* -!dist -dist/pkg -!Gopkg.* \ No newline at end of file +# Prevent vendor directory from being copied to ensure we are not not pulling unexpected cruft from +# a user's workspace, and are only building off of what is locked by dep. +vendor +dist \ No newline at end of file diff --git a/ARTIFACT_REPO.md b/ARTIFACT_REPO.md index df23309ba568..c126fa5ace86 100644 --- a/ARTIFACT_REPO.md +++ b/ARTIFACT_REPO.md @@ -14,12 +14,12 @@ $ helm install stable/minio --name argo-artifacts --set service.type=LoadBalance Login to the Minio UI using a web browser (port 9000) after obtaining the external IP using `kubectl`. 
``` -$ kubectl get service argo-artifacts-minio +$ kubectl get service argo-artifacts ``` On Minikube: ``` -$ minikube service --url argo-artifacts-minio +$ minikube service --url argo-artifacts ``` NOTE: When minio is installed via Helm, it uses the following hard-wired default credentials, diff --git a/Branding Assets.md b/Branding Assets.md index ce86e89e7ee6..2fbe4d73dc95 100644 --- a/Branding Assets.md +++ b/Branding Assets.md @@ -1,3 +1,3 @@ # Argo Branding Assets ## Logo -![Argo Logo](https://github.com/argoproj/argo/blob/master/argo-logo600.png "Argo Logo") +![Argo Logo](https://github.com/cyrusbiotechnology/argo/blob/master/argo-logo600.png "Argo Logo") diff --git a/CHANGELOG.md b/CHANGELOG.md index db4a421e05e1..bd85bfee8910 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,5 +1,94 @@ # Changelog +## 2.3.0-rc2 (2019-04-21) + +### Changes since 2.3.0-rc1 ++ Support parameter substitution in the volumes attribute (#1238) +- Fix regression where argoexec wait would not return when podname was too long +- wait will conditionally become privileged if main/sidecar privileged (issue #1323) +- `argo list` was not displaying non-zero priorities correctly +- Pod creation with secret volumemount (#1318) +- Export the methods of `KubernetesClientInterface` (#1294) + + +## 2.3.0-rc1 (2019-04-10) + +### Notes about upgrading from v2.2 + +* Secrets are passed to the wait sidecar using volumeMounts instead of performing K8s API calls. + This is much more secure since it limits the privileges of the workflow pod + to no longer require namespace level secret access. However, as a consequence, workflow pods which + reference a secret that does not exist will now indefinitely stay in a Pending state, as opposed + to the previous behavior of failing during runtime. + + +### Deprecation Notice +The workflow-controller-configmap introduces a new config field, `executor`, which is a container +spec and provides controls over the executor sidecar container (i.e. `init`/`wait`). The fields +`executorImage`, `executorResources`, and `executorImagePullPolicy` are deprecated and will be +removed in a future release.
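The deprecation notice above is easier to read next to a concrete configmap excerpt. This is a sketch only, assuming the `executor` field accepts ordinary container-spec fields such as `image`, `resources`, `env`, and `args`; the specific values shown are illustrative, not part of this patch:

```yaml
# Hypothetical workflow-controller-configmap using the new `executor`
# container spec in place of the deprecated executorImage,
# executorResources, and executorImagePullPolicy fields.
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
data:
  config: |
    executor:
      image: argoproj/argoexec:v2.3.0-rc2   # illustrative tag
      imagePullPolicy: IfNotPresent
      resources:
        requests:
          cpu: 100m
          memory: 64Mi
      env:                                  # extra env vars for the init/wait sidecars
      - name: HTTPS_PROXY
        value: "http://proxy.example.com:8080"
      args:                                 # e.g. append a log level
      - --loglevel
      - debug
```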
+ +### New Features: ++ Support for PNS (Process Namespace Sharing) executor (#1214) ++ Support for K8s API based Executor (#1010) (@dtaniwaki) ++ Adds limited support for Kubelet/K8s API artifact collection by mirroring volume mounts to wait sidecar ++ Support HDFS Artifact (#1159) (@dtaniwaki) ++ System level workflow parallelism limits & priorities (#1065) ++ Support larger workflows through node status compression (#1264) ++ Support nested steps workflow parallelism (#1046) (@WeiTang114) ++ Add feature to continue workflow on failed/error steps/tasks (#1205) (@schrodit) ++ Parameter and Argument names should support snake case (#1048) (@bbc88ks) ++ Add support for ppc64le and s390x (#1102) (@chenzhiwei) ++ Install mime-support in argoexec to set proper mime types for S3 artifacts ++ Allow owner reference to be set in submit util (#1120) (@nareshku) ++ add support for hostNetwork & dnsPolicy config (#1161) (@Dreamheart) ++ Add schedulerName to workflow and template spec (#1184) (@houz42) ++ Executor can access the k8s apiserver with a out-of-cluster config file (@houz42) ++ Proxy Priority and PriorityClassName to pods (#1179) (@dtaniwaki) ++ Add the `mergeStrategy` option to resource patching (#1269) (@ian-howell) ++ Add workflow labels and annotations global vars (#1280) (@discordianfish) ++ Support for optional input/output artifacts (#1277) ++ Add dns config support (#1301) (@xianlubird) ++ Added support for artifact path references (#1300) (@Ark-kun) ++ Add support for init containers (#1183) (@dtaniwaki) ++ Secrets should be passed to pods using volumes instead of API calls (#1302) ++ Azure AKS authentication issues #1079 (@gerardaus) + +### New Features: +* Update dependencies to K8s v1.12 and client-go 9.0 +* Add namespace explicitly to pod metadata (#1059) (@dvavili) +* Raise not implemented error when artifact saving is unsupported (#1062) (@dtaniwaki) +* Retry logic to s3 load and save function (#1082) (@kshamajain99) +* Remove docker_lib mount volume which is not needed anymore (#1115) (@ywskycn) +* Documentation improvements and fixes (@protochron, @jmcarp, @locona, @kivio, @fischerjulian, @annawinkler, @jdfalko, @groodt, @migggy, @nstott, @adrienjt) +* Validate ArchiveLocation artifacts (#1167) (@dtaniwaki) +* Git cloning via SSH was not verifying host public key (#1261) +* Speed up podReconciliation using parallel goroutine (#1286) (@xianlubird) + + +- Initialize child node before marking phase. 
Fixes panic on invalid `When` (#1075) (@jmcarp) +- Submodules are dirty after checkout -- need to update (#1052) (@andreimc) +- Fix output artifact and parameter conflict (#1125) (@Ark-kun) +- Remove container wait timeout from 'argo logs --follow' (#1142) +- Fix panic in ttl controller (#1143) +- Kill daemoned step if workflow consist of single daemoned step (#1144) +- Fix global artifact overwriting in nested workflow (#1086) (@WeiTang114) +- Fix issue where steps with exhausted retires would not complete (#1148) +- Fix metadata for DAG with loops (#1149) +- Replace exponential retry with poll (#1166) (@kzadorozhny) +- Dockerfile: argoexec base image correction (#1213) (@elikatsis) +- Set executor image pull policy for resource template (#1174) (@dtaniwaki) +- fix dag retries (#1221) (@houz42) +- Remove extra quotes around output parameter value (#1232) (@elikatsis) +- Include stderr when retrieving docker logs (#1225) (@shahin) +- Fix the Prometheus address references (#1237) (@spacez320) +- Kubernetes Resource action: patch is not supported (#1245) +- Fake outputs don't notify and task completes successfully (#1247) +- Reduce redundancy pod label action (#1271) (@xianlubird) +- Fix bug with DockerExecutor's CopyFile (#1275) +- Fix for Resource creation where template has same parameter templating (#1283) +- Fixes an issue where daemon steps were not getting terminated properly + ## 2.2.1 (2018-10-18) ### Changelog since v2.2.0 diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 58c833e48377..033ea2360a24 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -15,30 +15,37 @@ Go to https://github.com/argoproj/ ## How to suggest a new feature -Go to https://groups.google.com/forum/#!forum/argoproj -* Create a new topic to discuss your feature. +Go to https://github.com/argoproj/ +* Open an issue and discuss it. ## How to setup your dev environment ### Requirements -* Golang 1.10 +* Golang 1.11 * Docker * dep v0.5 * Mac Install: `brew install dep` -* gometalinter v2.0.5 +* gometalinter v2.0.12 ### Quickstart ``` -$ go get github.com/argoproj/argo -$ cd $(go env GOPATH)/src/github.com/argoproj/argo +$ go get github.com/cyrusbiotechnology/argo +$ cd $(go env GOPATH)/src/github.com/cyrusbiotechnology/argo $ dep ensure -vendor-only $ make ``` ### Build workflow-controller and executor images -The following will build the workflow-controller and executor images tagged with the `latest` tag, then push to a personal dockerhub repository: +The following will build the release versions of workflow-controller and executor images tagged +with the `latest` tag, then push to a personal dockerhub repository, `mydockerrepo`: +``` +$ make controller-image executor-image IMAGE_TAG=latest IMAGE_NAMESPACE=mydockerrepo DOCKER_PUSH=true +``` +Building release versions of the images will be slow during development, since the build happens +inside a docker build context, which cannot re-use the golang build cache between builds. To build +images quicker (for development purposes), images can be built by adding DEV_IMAGE=true. 
``` -$ make controller-image executor-image IMAGE_TAG=latest IMAGE_NAMESPACE=jessesuen DOCKER_PUSH=true +$ make controller-image executor-image IMAGE_TAG=latest IMAGE_NAMESPACE=mydockerrepo DOCKER_PUSH=true DEV_IMAGE=true ``` ### Build argo cli @@ -49,6 +56,6 @@ $ ./dist/argo version ### Deploying controller with alternative controller/executor images ``` -$ helm install argo/argo --set images.namespace=jessesuen --set +$ helm install argo/argo --set images.namespace=mydockerrepo --set images.controller workflow-controller:latest ``` diff --git a/Dockerfile b/Dockerfile new file mode 100644 index 000000000000..a0464961ef65 --- /dev/null +++ b/Dockerfile @@ -0,0 +1,99 @@ +#################################################################################################### +# Builder image +# Initial stage which pulls prepares build dependencies and CLI tooling we need for our final image +# Also used as the image in CI jobs so needs all dependencies +#################################################################################################### +FROM golang:1.11.5 as builder + +RUN apt-get update && apt-get install -y \ + git \ + make \ + wget \ + gcc \ + zip && \ + apt-get clean && \ + rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* + +WORKDIR /tmp + +# Install docker +ENV DOCKER_CHANNEL stable +ENV DOCKER_VERSION 18.09.1 +RUN wget -O docker.tgz "https://download.docker.com/linux/static/${DOCKER_CHANNEL}/x86_64/docker-${DOCKER_VERSION}.tgz" && \ + tar --extract --file docker.tgz --strip-components 1 --directory /usr/local/bin/ && \ + rm docker.tgz + +# Install dep +ENV DEP_VERSION=0.5.0 +RUN wget https://github.com/golang/dep/releases/download/v${DEP_VERSION}/dep-linux-amd64 -O /usr/local/bin/dep && \ + chmod +x /usr/local/bin/dep + +# Install gometalinter +ENV GOMETALINTER_VERSION=2.0.12 +RUN curl -sLo- https://github.com/alecthomas/gometalinter/releases/download/v${GOMETALINTER_VERSION}/gometalinter-${GOMETALINTER_VERSION}-linux-amd64.tar.gz | \ + tar -xzC "$GOPATH/bin" --exclude COPYING --exclude README.md --strip-components 1 -f- && \ + ln -s $GOPATH/bin/gometalinter $GOPATH/bin/gometalinter.v2 + + +#################################################################################################### +# argoexec-base +# Used as the base for both the release and development version of argoexec +#################################################################################################### +FROM debian:9.6-slim as argoexec-base +# NOTE: keep the version synced with https://storage.googleapis.com/kubernetes-release/release/stable.txt +ENV KUBECTL_VERSION=1.13.4 +RUN apt-get update && \ + apt-get install -y curl jq procps git tar mime-support && \ + rm -rf /var/lib/apt/lists/* && \ + curl -L -o /usr/local/bin/kubectl -LO https://storage.googleapis.com/kubernetes-release/release/v${KUBECTL_VERSION}/bin/linux/amd64/kubectl && \ + chmod +x /usr/local/bin/kubectl +COPY hack/ssh_known_hosts /etc/ssh/ssh_known_hosts +COPY --from=builder /usr/local/bin/docker /usr/local/bin/ + + +#################################################################################################### +# Argo Build stage which performs the actual build of Argo binaries +#################################################################################################### +FROM builder as builder-base + +# A dummy directory is created under $GOPATH/src/dummy so we are able to use dep +# to install all the packages of our dep lock file +COPY Gopkg.toml ${GOPATH}/src/dummy/Gopkg.toml +COPY Gopkg.lock 
${GOPATH}/src/dummy/Gopkg.lock + +RUN cd ${GOPATH}/src/dummy && \ + dep ensure -vendor-only && \ + mv vendor/* ${GOPATH}/src/ && \ + rmdir vendor + +WORKDIR /go/src/github.com/cyrusbiotechnology/argo +COPY . . + +FROM builder-base as argo-build +# Perform the build + +ARG MAKE_TARGET="controller executor cli-linux-amd64" +RUN make $MAKE_TARGET + + +#################################################################################################### +# argoexec +#################################################################################################### +FROM argoexec-base as argoexec +COPY --from=argo-build /go/src/github.com/cyrusbiotechnology/argo/dist/argoexec /usr/local/bin/ + + +#################################################################################################### +# workflow-controller +#################################################################################################### +FROM scratch as workflow-controller +COPY --from=argo-build /go/src/github.com/cyrusbiotechnology/argo/dist/workflow-controller /bin/ +ENTRYPOINT [ "workflow-controller" ] + + +#################################################################################################### +# argocli +#################################################################################################### +FROM scratch as argocli +COPY --from=argo-build /go/src/github.com/cyrusbiotechnology/argo/dist/argo-linux-amd64 /bin/argo +ENTRYPOINT [ "argo" ] diff --git a/Dockerfile-argoexec b/Dockerfile-argoexec deleted file mode 100644 index 2461140bbc25..000000000000 --- a/Dockerfile-argoexec +++ /dev/null @@ -1,16 +0,0 @@ -FROM debian:9.5-slim - -RUN apt-get update && \ - apt-get install -y curl jq procps git tar && \ - rm -rf /var/lib/apt/lists/* && \ - curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && \ - chmod +x ./kubectl && \ - mv ./kubectl /bin/ - -ENV DOCKER_VERSION=18.06.0 -RUN curl -O https://download.docker.com/linux/static/stable/x86_64/docker-${DOCKER_VERSION}-ce.tgz && \ - tar -xzf docker-${DOCKER_VERSION}-ce.tgz && \ - mv docker/docker /usr/local/bin/docker && \ - rm -rf ./docker - -COPY dist/argoexec /bin/ diff --git a/Dockerfile-builder b/Dockerfile-builder deleted file mode 100644 index 8cb721fcd932..000000000000 --- a/Dockerfile-builder +++ /dev/null @@ -1,32 +0,0 @@ -FROM debian:9.5-slim - -RUN apt-get update && apt-get install -y \ - git \ - make \ - curl \ - wget && \ - apt-get clean && \ - rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* - -# Install go -ENV GO_VERSION 1.10.3 -ENV GO_ARCH amd64 -ENV GOPATH /root/go -ENV PATH ${GOPATH}/bin:/usr/local/go/bin:${PATH} -RUN wget https://storage.googleapis.com/golang/go${GO_VERSION}.linux-${GO_ARCH}.tar.gz && \ - tar -C /usr/local/ -xf /go${GO_VERSION}.linux-${GO_ARCH}.tar.gz && \ - rm /go${GO_VERSION}.linux-${GO_ARCH}.tar.gz && \ - wget https://github.com/golang/dep/releases/download/v0.5.0/dep-linux-amd64 -O /usr/local/bin/dep && \ - chmod +x /usr/local/bin/dep && \ - mkdir -p ${GOPATH}/bin && \ - curl -sLo- https://github.com/alecthomas/gometalinter/releases/download/v2.0.5/gometalinter-2.0.5-linux-amd64.tar.gz | \ - tar -xzC "$GOPATH/bin" --exclude COPYING --exclude README.md --strip-components 1 -f- - -# A dummy directory is created under $GOPATH/src/dummy so we are able to use dep -# to install all the packages of our dep lock file -COPY Gopkg.toml ${GOPATH}/src/dummy/Gopkg.toml -COPY Gopkg.lock 
${GOPATH}/src/dummy/Gopkg.lock -RUN cd ${GOPATH}/src/dummy && \ - dep ensure -vendor-only && \ - mv vendor/* ${GOPATH}/src/ && \ - rmdir vendor diff --git a/Dockerfile-ci-builder b/Dockerfile-ci-builder deleted file mode 100644 index 943176928518..000000000000 --- a/Dockerfile-ci-builder +++ /dev/null @@ -1,12 +0,0 @@ -FROM golang:1.10.3 - -WORKDIR /tmp - -RUN curl -O https://download.docker.com/linux/static/stable/x86_64/docker-18.06.0-ce.tgz && \ - tar -xzf docker-18.06.0-ce.tgz && \ - mv docker/docker /usr/local/bin/docker && \ - rm -rf ./docker && \ - wget https://github.com/golang/dep/releases/download/v0.5.0/dep-linux-amd64 -O /usr/local/bin/dep && \ - chmod +x /usr/local/bin/dep && \ - curl -sLo- https://github.com/alecthomas/gometalinter/releases/download/v2.0.5/gometalinter-2.0.5-linux-amd64.tar.gz | \ - tar -xzC "$GOPATH/bin" --exclude COPYING --exclude README.md --strip-components 1 -f- diff --git a/Dockerfile-cli b/Dockerfile-cli deleted file mode 100644 index 39f6c45b9523..000000000000 --- a/Dockerfile-cli +++ /dev/null @@ -1,4 +0,0 @@ -FROM alpine:3.7 - -COPY dist/argo-linux-amd64 /bin/argo -ENTRYPOINT [ "/bin/argo" ] diff --git a/Dockerfile-workflow-controller b/Dockerfile-workflow-controller deleted file mode 100644 index b7694f7d0dc6..000000000000 --- a/Dockerfile-workflow-controller +++ /dev/null @@ -1,5 +0,0 @@ -FROM debian:9.4 - -COPY dist/workflow-controller /bin/ - -ENTRYPOINT [ "/bin/workflow-controller" ] diff --git a/Dockerfile.argoexec-dev b/Dockerfile.argoexec-dev new file mode 100644 index 000000000000..e1437f7be80b --- /dev/null +++ b/Dockerfile.argoexec-dev @@ -0,0 +1,5 @@ +#################################################################################################### +# argoexec-dev +#################################################################################################### +FROM argoexec-base +COPY argoexec /usr/local/bin/ diff --git a/Dockerfile.workflow-controller-dev b/Dockerfile.workflow-controller-dev new file mode 100644 index 000000000000..f2132614c852 --- /dev/null +++ b/Dockerfile.workflow-controller-dev @@ -0,0 +1,6 @@ +#################################################################################################### +# workflow-controller-dev +#################################################################################################### +FROM scratch +COPY workflow-controller /bin/ +ENTRYPOINT [ "workflow-controller" ] diff --git a/Gopkg.lock b/Gopkg.lock index eb7e6c92a286..e276f7df7c25 100644 --- a/Gopkg.lock +++ b/Gopkg.lock @@ -2,54 +2,59 @@ [[projects]] - digest = "1:8b95956b70e181b19025c7ba3578fdfd8efbec4ce916490700488afb9218972c" name = "cloud.google.com/go" - packages = ["compute/metadata"] - pruneopts = "" + packages = [ + "compute/metadata", + "iam", + "internal", + "internal/optional", + "internal/trace", + "internal/version", + "storage" + ] revision = "64a2037ec6be8a4b0c1d1f706ed35b428b989239" version = "v0.26.0" [[projects]] - digest = "1:d62e9a41f2e45c103f6c15ffabb3466b3548db41b8cc135a4669794033ee761f" name = "github.com/Azure/go-autorest" packages = [ "autorest", "autorest/adal", "autorest/azure", - "autorest/date", + "autorest/date" ] - pruneopts = "" revision = "1ff28809256a84bb6966640ff3d0371af82ccba4" [[projects]] - digest = "1:b9660f5e3522b899d32b1f9bb98056203d6f76f673e1843eaa00869330103ba5" + name = "github.com/BurntSushi/toml" + packages = ["."] + revision = "3012a1dbe2e4bd1391d42b32f0577cb7bbc7f005" + version = "v0.3.1" + +[[projects]] name = "github.com/Knetic/govaluate" packages = ["."] 
- pruneopts = "" revision = "9aa49832a739dcd78a5542ff189fb82c3e423116" [[projects]] - digest = "1:8e47871087b94913898333f37af26732faaab30cdb41571136cf7aec9921dae7" name = "github.com/PuerkitoBio/purell" packages = ["."] - pruneopts = "" - revision = "0bcb03f4b4d0a9428594752bd2a3b9aa0a9d4bd4" - version = "v1.1.0" + revision = "44968752391892e1b0d0b821ee79e9a85fa13049" + version = "v1.1.1" [[projects]] branch = "master" - digest = "1:331a419049c2be691e5ba1d24342fc77c7e767a80c666a18fd8a9f7b82419c1c" name = "github.com/PuerkitoBio/urlesc" packages = ["."] - pruneopts = "" revision = "de5bf2ad457846296e2031421a34e2568e304e35" [[projects]] branch = "master" - digest = "1:c3b7ed058146643b16d3a9827550fba317dbff9f55249dfafac7eb6c3652ad23" name = "github.com/argoproj/pkg" packages = [ + "cli", "errors", + "exec", "file", "humanize", "json", @@ -58,67 +63,63 @@ "s3", "stats", "strftime", - "time", + "time" ] - pruneopts = "" - revision = "a581a48d63014312c4f2762787f669e46bdb1fd9" + revision = "7e3ef65c8d44303738c7e815bd9b1b297b39f5c8" [[projects]] - branch = "master" - digest = "1:c0bec5f9b98d0bc872ff5e834fac186b807b656683bd29cb82fb207a1513fabb" name = "github.com/beorn7/perks" packages = ["quantile"] - pruneopts = "" - revision = "3a771d992973f24aa725d07868b467d1ddfceafb" + revision = "4b2b341e8d7715fae06375aa633dbb6e91b3fb46" + version = "v1.0.0" + +[[projects]] + name = "github.com/colinmarc/hdfs" + packages = [ + ".", + "protocol/hadoop_common", + "protocol/hadoop_hdfs", + "rpc" + ] + revision = "48eb8d6c34a97ffc73b406356f0f2e1c569b42a5" [[projects]] - digest = "1:56c130d885a4aacae1dd9c7b71cfe39912c7ebc1ff7d2b46083c8812996dc43b" name = "github.com/davecgh/go-spew" packages = ["spew"] - pruneopts = "" - revision = "346938d642f2ec3594ed81d874461961cd0faa76" - version = "v1.1.0" + revision = "8991bc29aa16c548c550c7ff78260e27b9ab7c73" + version = "v1.1.1" [[projects]] - digest = "1:6098222470fe0172157ce9bbef5d2200df4edde17ee649c5d6e48330e4afa4c6" name = "github.com/dgrijalva/jwt-go" packages = ["."] - pruneopts = "" revision = "06ea1031745cb8b3dab3f6a236daf2b0aa468b7e" version = "v3.2.0" [[projects]] branch = "master" - digest = "1:d6c13a378213e3de60445e49084b8a0a9ce582776dfc77927775dbeb3ff72a35" name = "github.com/docker/spdystream" packages = [ ".", - "spdy", + "spdy" ] - pruneopts = "" revision = "6480d4af844c189cf5dd913db24ddd339d3a4f85" [[projects]] - branch = "master" - digest = "1:f1a75a8e00244e5ea77ff274baa9559eb877437b240ee7b278f3fc560d9f08bf" name = "github.com/dustin/go-humanize" packages = ["."] - pruneopts = "" revision = "9f541cc9db5d55bce703bd99987c9d5cb8eea45e" + version = "v1.0.0" [[projects]] - digest = "1:8a34d7a37b8f07239487752e14a5faafcbbc718fc385ad429a2c4ac6f27a207f" name = "github.com/emicklei/go-restful" packages = [ ".", - "log", + "log" ] - pruneopts = "" - revision = "3eb9738c1697594ea6e71a7156a9bb32ed216cf0" - version = "v2.8.0" + revision = "b9bbc5664f49b6deec52393bd68f39830687a347" + version = "v2.9.3" [[projects]] - digest = "1:ba7c75e38d81b9cf3e8601c081567be3b71bccca8c11aee5de98871360aa4d7b" name = "github.com/emirpasic/gods" packages = [ "containers", @@ -126,228 +127,203 @@ "lists/arraylist", "trees", "trees/binaryheap", - "utils", + "utils" ] - pruneopts = "" - revision = "f6c17b524822278a87e3b3bd809fec33b51f5b46" - version = "v1.9.0" + revision = "1615341f118ae12f353cc8a983f35b584342c9b3" + version = "v1.12.0" [[projects]] - digest = "1:dcefbadf4534c5ecac8573698fba6e6e601157bfa8f96aafe29df31ae582ef2a" name = "github.com/evanphx/json-patch" packages = ["."] - 
pruneopts = "" - revision = "afac545df32f2287a079e2dfb7ba2745a643747e" - version = "v3.0.0" - -[[projects]] - digest = "1:eb53021a8aa3f599d29c7102e65026242bdedce998a54837dc67f14b6a97c5fd" - name = "github.com/fsnotify/fsnotify" - packages = ["."] - pruneopts = "" - revision = "c2828203cd70a50dcccfb2761f8b1f8ceef9a8e9" - version = "v1.4.7" + revision = "72bf35d0ff611848c1dc9df0f976c81192392fa5" + version = "v4.1.0" [[projects]] branch = "master" - digest = "1:ac2bf6881c6a96d07773dee3b9b2b369bc209c988505bd6cb283a8d549cb8699" name = "github.com/ghodss/yaml" packages = ["."] - pruneopts = "" - revision = "c7ce16629ff4cd059ed96ed06419dd3856fd3577" + revision = "25d852aebe32c875e9c044af3eef9c7dc6bc777f" [[projects]] - digest = "1:858b7fe7b0f4bc7ef9953926828f2816ea52d01a88d72d1c45bc8c108f23c356" - name = "github.com/go-ini/ini" - packages = ["."] - pruneopts = "" - revision = "358ee7663966325963d4e8b2e1fbd570c5195153" - version = "v1.38.1" - -[[projects]] - digest = "1:e116a4866bffeec941056a1fcfd37e520fad1ee60e4e3579719f19a43c392e10" name = "github.com/go-openapi/jsonpointer" packages = ["."] - pruneopts = "" - revision = "3a0015ad55fa9873f41605d3e8f28cd279c32ab2" - version = "0.15.0" + revision = "ef5f0afec364d3b9396b7b77b43dbe26bf1f8004" + version = "v0.19.0" [[projects]] - digest = "1:3830527ef0f4f9b268d9286661c0f52f9115f8aefd9f45ee7352516f93489ac9" name = "github.com/go-openapi/jsonreference" packages = ["."] - pruneopts = "" - revision = "3fb327e6747da3043567ee86abd02bb6376b6be2" - version = "0.15.0" + revision = "8483a886a90412cd6858df4ea3483dce9c8e35a3" + version = "v0.19.0" [[projects]] - digest = "1:6caee195f5da296689270037c5a25c0bc3cc6e54ae5a356e395aa8946356dbc9" name = "github.com/go-openapi/spec" packages = ["."] - pruneopts = "" - revision = "bce47c9386f9ecd6b86f450478a80103c3fe1402" - version = "0.15.0" + revision = "53d776530bf78a11b03a7b52dd8a083086b045e5" + version = "v0.19.0" [[projects]] - digest = "1:22da48dbccb0539f511efbbbdeba68081866892234e57a9d7c7f9848168ae30c" name = "github.com/go-openapi/swag" packages = ["."] - pruneopts = "" - revision = "2b0bd4f193d011c203529df626a65d63cb8a79e8" - version = "0.15.0" + revision = "b3e2804c8535ee0d1b89320afd98474d5b8e9e3b" + version = "v0.19.0" [[projects]] - digest = "1:6e73003ecd35f4487a5e88270d3ca0a81bc80dc88053ac7e4dcfec5fba30d918" name = "github.com/gogo/protobuf" packages = [ "proto", - "sortkeys", + "sortkeys" ] - pruneopts = "" - revision = "636bf0302bc95575d69441b25a2603156ffdddf1" - version = "v1.1.1" + revision = "ba06b47c162d49f2af050fb4c75bcbc86a159d5c" + version = "v1.2.1" [[projects]] branch = "master" - digest = "1:107b233e45174dbab5b1324201d092ea9448e58243ab9f039e4c0f332e121e3a" name = "github.com/golang/glog" packages = ["."] - pruneopts = "" revision = "23def4e6c14b4da8ac2ed8007337bc5eb5007998" [[projects]] - digest = "1:f958a1c137db276e52f0b50efee41a1a389dcdded59a69711f3e872757dab34b" name = "github.com/golang/protobuf" packages = [ "proto", + "protoc-gen-go", + "protoc-gen-go/descriptor", + "protoc-gen-go/generator", + "protoc-gen-go/generator/internal/remap", + "protoc-gen-go/grpc", + "protoc-gen-go/plugin", "ptypes", "ptypes/any", "ptypes/duration", - "ptypes/timestamp", + "ptypes/timestamp" ] - pruneopts = "" - revision = "b4deda0973fb4c70b50d226b1af49f3da59f5265" - version = "v1.1.0" + revision = "b5d812f8a3706043e23a9cd5babf2e5423744d30" + version = "v1.3.1" [[projects]] - branch = "master" - digest = "1:1e5b1e14524ed08301977b7b8e10c719ed853cbf3f24ecb66fae783a46f207a6" name = "github.com/google/btree" packages 
= ["."] - pruneopts = "" revision = "4030bb1f1f0c35b30ca7009e9ebd06849dd45306" + version = "v1.0.0" [[projects]] - branch = "master" - digest = "1:754f77e9c839b24778a4b64422236d38515301d2baeb63113aa3edc42e6af692" name = "github.com/google/gofuzz" packages = ["."] - pruneopts = "" - revision = "24818f796faf91cd76ec7bddd72458fbced7a6c1" + revision = "f140a6486e521aad38f5917de355cbf147cc0496" + version = "v1.0.0" + +[[projects]] + name = "github.com/googleapis/gax-go" + packages = [ + ".", + "v2" + ] + revision = "beaecbbdd8af86aa3acf14180d53828ce69400b2" + version = "v2.0.4" [[projects]] - digest = "1:16b2837c8b3cf045fa2cdc82af0cf78b19582701394484ae76b2c3bc3c99ad73" name = "github.com/googleapis/gnostic" packages = [ "OpenAPIv2", "compiler", - "extensions", + "extensions" ] - pruneopts = "" revision = "7c663266750e7d82587642f65e60bc4083f1f84e" version = "v0.2.0" [[projects]] - digest = "1:64d212c703a2b94054be0ce470303286b177ad260b2f89a307e3d1bb6c073ef6" name = "github.com/gorilla/websocket" packages = ["."] - pruneopts = "" - revision = "ea4d1f681babbce9545c9c5f3d5194a789c89f5b" - version = "v1.2.0" + revision = "66b9c49e59c6c48f0ffce28c2d8b8a5678502c6d" + version = "v1.4.0" [[projects]] branch = "master" - digest = "1:009a1928b8c096338b68b5822d838a72b4d8520715c1463614476359f3282ec8" name = "github.com/gregjones/httpcache" packages = [ ".", - "diskcache", + "diskcache" ] - pruneopts = "" - revision = "9cad4c3443a7200dd6400aef47183728de563a38" + revision = "3befbb6ad0cc97d4c25d851e9528915809e1a22f" + +[[projects]] + name = "github.com/hashicorp/go-uuid" + packages = ["."] + revision = "4f571afc59f3043a65f8fe6bf46d887b10a01d43" + version = "v1.0.1" [[projects]] - branch = "master" - digest = "1:9c776d7d9c54b7ed89f119e449983c3f24c0023e75001d6092442412ebca6b94" name = "github.com/hashicorp/golang-lru" packages = [ ".", - "simplelru", + "simplelru" ] - pruneopts = "" - revision = "0fb14efe8c47ae851c0034ed7a448854d3d34cf3" + revision = "7087cb70de9f7a8bc0a10c375cb0d2280a8edf9c" + version = "v0.5.1" [[projects]] - digest = "1:7ab38c15bd21e056e3115c8b526d201eaf74e0308da9370997c6b3c187115d36" name = "github.com/imdario/mergo" packages = ["."] - pruneopts = "" - revision = "9f23e2d6bd2a77f959b2bf6acdbefd708a83a4a4" - version = "v0.3.6" + revision = "7c29201646fa3de8506f701213473dd407f19646" + version = "v0.3.7" [[projects]] - digest = "1:870d441fe217b8e689d7949fef6e43efbc787e50f200cb1e70dbca9204a1d6be" name = "github.com/inconshreveable/mousetrap" packages = ["."] - pruneopts = "" revision = "76626ae9c91c4f2a10f34cad8ce83ea42c93bb75" version = "v1.0" [[projects]] branch = "master" - digest = "1:95abc4eba158a39873bd4fabdee576d0ae13826b550f8b710881d80ae4093a0f" name = "github.com/jbenet/go-context" packages = ["io"] - pruneopts = "" revision = "d14ea06fba99483203c19d92cfcd13ebe73135f4" [[projects]] - digest = "1:31c6f3c4f1e15fcc24fcfc9f5f24603ff3963c56d6fa162116493b4025fb6acc" + branch = "master" + name = "github.com/jcmturner/gofork" + packages = [ + "encoding/asn1", + "x/crypto/pbkdf2" + ] + revision = "dc7c13fece037a4a36e2b3c69db4991498d30692" + +[[projects]] name = "github.com/json-iterator/go" packages = ["."] - pruneopts = "" - revision = "f2b4162afba35581b6d4a50d3b8f34e33c144682" + revision = "0ff49de124c6f76f8494e194af75bde0f1a49a29" + version = "v1.1.6" [[projects]] - digest = "1:7fe04787f53bb61c1ba9c659b1a90ee3da16b4d6a1c41566bcb5077efbd30f97" name = "github.com/kevinburke/ssh_config" packages = ["."] - pruneopts = "" - revision = "9fc7bb800b555d63157c65a904c86a2cc7b4e795" - version = "0.4" + 
revision = "81db2a75821ed34e682567d48be488a1c3121088" + version = "0.5" + +[[projects]] + name = "github.com/konsorten/go-windows-terminal-sequences" + packages = ["."] + revision = "f55edac94c9bbba5d6182a4be46d86a2c9b5b50e" + version = "v1.0.2" [[projects]] branch = "master" - digest = "1:e977ed7b0619844e394c4e725d008ade0840f1882c500a66e797b98bde70cf87" name = "github.com/mailru/easyjson" packages = [ "buffer", "jlexer", - "jwriter", + "jwriter" ] - pruneopts = "" - revision = "03f2033d19d5860aef995fe360ac7d395cd8ce65" + revision = "1ea4449da9834f4d333f1cc461c374aea217d249" [[projects]] - digest = "1:63722a4b1e1717be7b98fc686e0b30d5e7f734b9e93d7dee86293b6deab7ea28" name = "github.com/matttproud/golang_protobuf_extensions" packages = ["pbutil"] - pruneopts = "" revision = "c12348ce28de40eed0136aa2b644d0ee0650e56c" version = "v1.0.1" [[projects]] - digest = "1:619ff8becfc8080f2cc4532ea21437e804038e0431c88e171c381fde96eb06ae" name = "github.com/minio/minio-go" packages = [ ".", @@ -355,228 +331,208 @@ "pkg/encrypt", "pkg/s3signer", "pkg/s3utils", - "pkg/set", + "pkg/set" ] - pruneopts = "" - revision = "70799fe8dae6ecfb6c7d7e9e048fce27f23a1992" - version = "v6.0.5" + revision = "a8704b60278f98501c10f694a9c4df8bdd1fac56" + version = "v6.0.23" [[projects]] - branch = "master" - digest = "1:83854f6b1d2ce047b69657e3a87ba7602f4c5505e8bdfd02ab857db8e983bde1" name = "github.com/mitchellh/go-homedir" packages = ["."] - pruneopts = "" - revision = "58046073cbffe2f25d425fe1331102f55cf719de" + revision = "af06845cf3004701891bf4fdb884bfe4920b3727" + version = "v1.1.0" + +[[projects]] + branch = "master" + name = "github.com/mitchellh/go-ps" + packages = ["."] + revision = "4fdf99ab29366514c69ccccddab5dc58b8d84062" [[projects]] - digest = "1:0c0ff2a89c1bb0d01887e1dac043ad7efbf3ec77482ef058ac423d13497e16fd" name = "github.com/modern-go/concurrent" packages = ["."] - pruneopts = "" revision = "bacd9c7ef1dd9b15be4a9909b8ac7a4e313eec94" version = "1.0.3" [[projects]] - digest = "1:e32bdbdb7c377a07a9a46378290059822efdce5c8d96fe71940d87cb4f918855" name = "github.com/modern-go/reflect2" packages = ["."] - pruneopts = "" revision = "4b7aa43c6742a2c18fdef89dd197aaae7dac7ccd" version = "1.0.1" [[projects]] - digest = "1:049b5bee78dfdc9628ee0e557219c41f683e5b06c5a5f20eaba0105ccc586689" name = "github.com/pelletier/go-buffruneio" packages = ["."] - pruneopts = "" revision = "c37440a7cf42ac63b919c752ca73a85067e05992" version = "v0.2.0" [[projects]] branch = "master" - digest = "1:c24598ffeadd2762552269271b3b1510df2d83ee6696c1e543a0ff653af494bc" name = "github.com/petar/GoLLRB" packages = ["llrb"] - pruneopts = "" revision = "53be0d36a84c2a886ca057d34b6aa4468df9ccb4" [[projects]] - digest = "1:b46305723171710475f2dd37547edd57b67b9de9f2a6267cafdd98331fd6897f" name = "github.com/peterbourgon/diskv" packages = ["."] - pruneopts = "" - revision = "5f041e8faa004a95c88a202771f4cc3e991971e6" - version = "v2.0.1" + revision = "0be1b92a6df0e4f5cb0a5d15fb7f643d0ad93ce6" + version = "v3.0.0" [[projects]] - digest = "1:7365acd48986e205ccb8652cc746f09c8b7876030d53710ea6ef7d0bd0dcd7ca" name = "github.com/pkg/errors" packages = ["."] - pruneopts = "" - revision = "645ef00459ed84a119197bfb8d8205042c6df63d" - version = "v0.8.0" + revision = "ba968bfe8b2f7e042a574c888954fccecfa385b4" + version = "v0.8.1" [[projects]] - digest = "1:256484dbbcd271f9ecebc6795b2df8cad4c458dd0f5fd82a8c2fa0c29f233411" name = "github.com/pmezard/go-difflib" packages = ["difflib"] - pruneopts = "" revision = "792786c7400a136282c1664665ae0a8db921c6c2" version = 
"v1.0.0" [[projects]] - digest = "1:4142d94383572e74b42352273652c62afec5b23f325222ed09198f46009022d1" name = "github.com/prometheus/client_golang" packages = [ "prometheus", - "prometheus/promhttp", + "prometheus/promhttp" ] - pruneopts = "" revision = "c5b7fccd204277076155f10851dad72b76a49317" version = "v0.8.0" [[projects]] branch = "master" - digest = "1:185cf55b1f44a1bf243558901c3f06efa5c64ba62cfdcbb1bf7bbe8c3fb68561" name = "github.com/prometheus/client_model" packages = ["go"] - pruneopts = "" - revision = "5c3871d89910bfb32f5fcab2aa4b9ec68e65a99f" + revision = "fd36f4220a901265f90734c3183c5f0c91daa0b8" [[projects]] - branch = "master" - digest = "1:f477ef7b65d94fb17574fc6548cef0c99a69c1634ea3b6da248b63a61ebe0498" name = "github.com/prometheus/common" packages = [ "expfmt", "internal/bitbucket.org/ww/goautoneg", - "model", + "model" ] - pruneopts = "" - revision = "c7de2306084e37d54b8be01f3541a8464345e9a5" + revision = "a82f4c12f983cc2649298185f296632953e50d3e" + version = "v0.3.0" [[projects]] branch = "master" - digest = "1:e04aaa0e8f8da0ed3d6c0700bd77eda52a47f38510063209d72d62f0ef807d5e" name = "github.com/prometheus/procfs" - packages = [ - ".", - "internal/util", - "nfs", - "xfs", - ] - pruneopts = "" - revision = "05ee40e3a273f7245e8777337fc7b46e533a9a92" + packages = ["."] + revision = "87a4384529e0652f5035fb5cc8095faf73ea9b0b" [[projects]] - digest = "1:3962f553b77bf6c03fc07cd687a22dd3b00fe11aa14d31194f5505f5bb65cdc8" name = "github.com/sergi/go-diff" packages = ["diffmatchpatch"] - pruneopts = "" revision = "1744e2970ca51c86172c8190fadad617561ed6e7" version = "v1.0.0" [[projects]] - digest = "1:3fcbf733a8d810a21265a7f2fe08a3353db2407da052b233f8b204b5afc03d9b" name = "github.com/sirupsen/logrus" packages = ["."] - pruneopts = "" - revision = "3e01752db0189b9157070a0e1668a620f9a85da2" - version = "v1.0.6" + revision = "8bdbc7bcc01dcbb8ec23dc8a28e332258d25251f" + version = "v1.4.1" [[projects]] - digest = "1:9ba49264cef4386aded205f9cb5b1f2d30f983d7dc37a21c780d9db3edfac9a7" name = "github.com/spf13/cobra" packages = ["."] - pruneopts = "" revision = "fe5e611709b0c57fa4a89136deaa8e1d4004d053" [[projects]] - digest = "1:8e243c568f36b09031ec18dff5f7d2769dcf5ca4d624ea511c8e3197dc3d352d" name = "github.com/spf13/pflag" packages = ["."] - pruneopts = "" - revision = "583c0c0531f06d5278b7d917446061adc344b5cd" - version = "v1.0.1" + revision = "298182f68c66c05229eb03ac171abe6e309ee79a" + version = "v1.0.3" [[projects]] - digest = "1:b1861b9a1aa0801b0b62945ed7477c1ab61a4bd03b55dfbc27f6d4f378110c8c" name = "github.com/src-d/gcfg" packages = [ ".", "scanner", "token", - "types", + "types" ] - pruneopts = "" - revision = "f187355171c936ac84a82793659ebb4936bc1c23" - version = "v1.3.0" + revision = "1ac3a1ac202429a54835fe8408a92880156b489d" + version = "v1.4.0" [[projects]] - digest = "1:711eebe744c0151a9d09af2315f0bb729b2ec7637ef4c410fa90a18ef74b65b6" name = "github.com/stretchr/objx" packages = ["."] - pruneopts = "" revision = "477a77ecc69700c7cdeb1fa9e129548e1c1c393c" version = "v0.1.1" [[projects]] - digest = "1:c587772fb8ad29ad4db67575dad25ba17a51f072ff18a22b4f0257a4d9c24f75" name = "github.com/stretchr/testify" packages = [ "assert", "mock", "require", - "suite", + "suite" ] - pruneopts = "" - revision = "f35b8ab0b5a2cef36673838d662e249dd9c94686" - version = "v1.2.2" + revision = "ffdc059bfe9ce6a4e144ba849dbedead332c6053" + version = "v1.3.0" [[projects]] - digest = "1:3ddca2bd5496c6922a2a9e636530e178a43c2a534ea6634211acdc7d10222794" name = "github.com/tidwall/gjson" packages = ["."] - 
pruneopts = "" - revision = "1e3f6aeaa5bad08d777ea7807b279a07885dd8b2" - version = "v1.1.3" + revision = "eee0b6226f0d1db2675a176fdfaa8419bcad4ca8" + version = "v1.2.1" [[projects]] - branch = "master" - digest = "1:4db4f92bb9cb04cfc4fccb36aba2598b02a988008c4cc0692b241214ad8ac96e" name = "github.com/tidwall/match" packages = ["."] - pruneopts = "" - revision = "1731857f09b1f38450e2c12409748407822dc6be" + revision = "33827db735fff6510490d69a8622612558a557ed" + version = "v1.0.1" [[projects]] branch = "master" - digest = "1:857a9ecd5cb13379ecc8f798f6e6b6b574c98b9355657d91e068275f1120aaf7" + name = "github.com/tidwall/pretty" + packages = ["."] + revision = "1166b9ac2b65e46a43d8618d30d1554f4652d49b" + +[[projects]] name = "github.com/valyala/bytebufferpool" packages = ["."] - pruneopts = "" revision = "e746df99fe4a3986f4d4f79e13c1e0117ce9c2f7" + version = "v1.0.0" [[projects]] - branch = "master" - digest = "1:bf6f8915c0338e875383cb7fdebd58a4d360a232f461d9a029d7ccb12f90c5d7" name = "github.com/valyala/fasttemplate" packages = ["."] - pruneopts = "" - revision = "dcecefd839c4193db0d35b88ec65b4c12d360ab0" + revision = "8b5e4e491ab636663841c42ea3c5a9adebabaf36" + version = "v1.0.1" [[projects]] - digest = "1:afc0b8068986a01e2d8f449917829753a54f6bd4d1265c2b4ad9cba75560020f" name = "github.com/xanzy/ssh-agent" packages = ["."] - pruneopts = "" - revision = "640f0ab560aeb89d523bb6ac322b1244d5c3796c" - version = "v0.2.0" + revision = "6a3e2ff9e7c564f36873c2e36413f634534f1c44" + version = "v0.2.1" + +[[projects]] + name = "go.opencensus.io" + packages = [ + ".", + "internal", + "internal/tagencoding", + "metric/metricdata", + "metric/metricproducer", + "plugin/ochttp", + "plugin/ochttp/propagation/b3", + "resource", + "stats", + "stats/internal", + "stats/view", + "tag", + "trace", + "trace/internal", + "trace/propagation", + "trace/tracestate" + ] + revision = "df6e2001952312404b06f5f6f03fcb4aec1648e5" + version = "v0.21.0" [[projects]] branch = "master" - digest = "1:53c4b75f22ea7757dea07eae380ea42de547ae6865a5e3b41866754a8a8219c9" name = "golang.org/x/crypto" packages = [ "argon2", @@ -587,24 +543,42 @@ "ed25519/internal/edwards25519", "internal/chacha20", "internal/subtle", + "md4", "openpgp", "openpgp/armor", "openpgp/elgamal", "openpgp/errors", "openpgp/packet", "openpgp/s2k", + "pbkdf2", "poly1305", "ssh", "ssh/agent", "ssh/knownhosts", - "ssh/terminal", + "ssh/terminal" + ] + revision = "c05e17bb3b2dca130fc919668a96b4bec9eb9442" + +[[projects]] + branch = "master" + name = "golang.org/x/exp" + packages = [ + "apidiff", + "cmd/apidiff" + ] + revision = "8c7d1c524af6eaf18eadc4f57955a748e7001194" + +[[projects]] + branch = "master" + name = "golang.org/x/lint" + packages = [ + ".", + "golint" ] - pruneopts = "" - revision = "f027049dab0ad238e394a753dba2d14753473a04" + revision = "959b441ac422379a43da2230f62be024250818b0" [[projects]] branch = "master" - digest = "1:67c2d940f2d5c017ef88e9847709dca9b38d5fe82f1e33fb42ace515219f22f1" name = "golang.org/x/net" packages = [ "context", @@ -613,44 +587,43 @@ "http2", "http2/hpack", "idna", + "internal/timeseries", + "publicsuffix", + "trace" ] - pruneopts = "" - revision = "f9ce57c11b242f0f1599cf25c89d8cb02c45295a" + revision = "4829fb13d2c62012c17688fa7f629f371014946d" [[projects]] branch = "master" - digest = "1:a8172cf4304ef01f0c7dd634c331880247d10f9e28b041821f2321a8e4bb3b7c" name = "golang.org/x/oauth2" packages = [ ".", "google", "internal", "jws", - "jwt", + "jwt" ] - pruneopts = "" - revision = "3d292e4d0cdc3a0113e6d207bb137145ef1de42f" + 
revision = "9f3314589c9a9136388751d9adae6b0ed400978a" [[projects]] branch = "master" - digest = "1:6d9c86494d97c7fc8bbab029c17fc0ce9dc517aaae92a25d790d01b0e8732832" name = "golang.org/x/sys" packages = [ "cpu", "unix", - "windows", + "windows" ] - pruneopts = "" - revision = "904bdc257025c7b3f43c19360ad3ab85783fad78" + revision = "16072639606ea9e22c7d86e4cbd6af6314f4193c" [[projects]] - digest = "1:5acd3512b047305d49e8763eef7ba423901e85d5dd2fd1e71778a0ea8de10bd4" name = "golang.org/x/text" packages = [ "collate", "collate/build", "internal/colltab", "internal/gen", + "internal/language", + "internal/language/compact", "internal/tag", "internal/triegen", "internal/ucd", @@ -661,34 +634,57 @@ "unicode/cldr", "unicode/norm", "unicode/rangetable", - "width", + "width" ] - pruneopts = "" - revision = "f21a4dfb5e38f5895301dc265a8def02365cc3d0" - version = "v0.3.0" + revision = "c942b20a5d85b458c4dce1589326051d85e25d6d" + version = "v0.3.1" [[projects]] branch = "master" - digest = "1:55a681cb66f28755765fa5fa5104cbd8dc85c55c02d206f9f89566451e3fe1aa" name = "golang.org/x/time" packages = ["rate"] - pruneopts = "" - revision = "fbb02b2291d28baffd63558aa44b4b56f178d650" + revision = "9d24e82272b4f38b78bc8cff74fa936d31ccd8ef" [[projects]] branch = "master" - digest = "1:c73b8c7b4bfb2e69de55a3549d6a8089d7757899cc5b62ff1c180bd76e9ee7f6" name = "golang.org/x/tools" packages = [ + "cmd/goimports", "go/ast/astutil", + "go/buildutil", + "go/gcexportdata", + "go/internal/cgo", + "go/internal/gcimporter", + "go/internal/packagesdriver", + "go/loader", + "go/packages", + "go/types/typeutil", "imports", "internal/fastwalk", + "internal/gopathwalk", + "internal/module", + "internal/semver" + ] + revision = "36563e24a2627da92566d43aa1c7a2dd895fc60d" + +[[projects]] + name = "google.golang.org/api" + packages = [ + "gensupport", + "googleapi", + "googleapi/internal/uritemplates", + "googleapi/transport", + "internal", + "iterator", + "option", + "storage/v1", + "transport/http", + "transport/http/internal/propagation" ] - pruneopts = "" - revision = "ca6481ae56504398949d597084558e50ad07117a" + revision = "0cbcb99a9ea0c8023c794b2693cbe1def82ed4d7" + version = "v0.3.2" [[projects]] - digest = "1:c1771ca6060335f9768dff6558108bc5ef6c58506821ad43377ee23ff059e472" name = "google.golang.org/appengine" packages = [ ".", @@ -700,41 +696,148 @@ "internal/modules", "internal/remote_api", "internal/urlfetch", - "urlfetch", + "urlfetch" ] - pruneopts = "" - revision = "b1f26356af11148e710935ed1ac8a7f5702c7612" - version = "v1.1.0" + revision = "54a98f90d1c46b7731eb8fb305d2a321c30ef610" + version = "v1.5.0" + +[[projects]] + branch = "master" + name = "google.golang.org/genproto" + packages = [ + "googleapis/api/annotations", + "googleapis/iam/v1", + "googleapis/rpc/code", + "googleapis/rpc/status" + ] + revision = "e7d98fc518a78c9f8b5ee77be7b0b317475d89e1" + +[[projects]] + name = "google.golang.org/grpc" + packages = [ + ".", + "balancer", + "balancer/base", + "balancer/roundrobin", + "binarylog/grpc_binarylog_v1", + "codes", + "connectivity", + "credentials", + "credentials/internal", + "encoding", + "encoding/proto", + "grpclog", + "internal", + "internal/backoff", + "internal/balancerload", + "internal/binarylog", + "internal/channelz", + "internal/envconfig", + "internal/grpcrand", + "internal/grpcsync", + "internal/syscall", + "internal/transport", + "keepalive", + "metadata", + "naming", + "peer", + "resolver", + "resolver/dns", + "resolver/passthrough", + "stats", + "status", + "tap" + ] + revision = 
"25c4f928eaa6d96443009bd842389fb4fa48664e" + version = "v1.20.1" [[projects]] - digest = "1:75fb3fcfc73a8c723efde7777b40e8e8ff9babf30d8c56160d01beffea8a95a6" name = "gopkg.in/inf.v0" packages = ["."] - pruneopts = "" revision = "d2d2541c53f18d2a059457998ce2876cc8e67cbf" version = "v0.9.1" [[projects]] - digest = "1:6715e0bec216255ab784fe04aa4d5a0a626ae07a3a209080182e469bc142761a" + name = "gopkg.in/ini.v1" + packages = ["."] + revision = "c85607071cf08ca1adaf48319cd1aa322e81d8c1" + version = "v1.42.0" + +[[projects]] + name = "gopkg.in/jcmturner/aescts.v1" + packages = ["."] + revision = "f6abebb3171c4c1b1fea279cb7c7325020a26290" + version = "v1.0.1" + +[[projects]] + name = "gopkg.in/jcmturner/dnsutils.v1" + packages = ["."] + revision = "13eeb8d49ffb74d7a75784c35e4d900607a3943c" + version = "v1.0.1" + +[[projects]] + name = "gopkg.in/jcmturner/gokrb5.v5" + packages = [ + "asn1tools", + "client", + "config", + "credentials", + "crypto", + "crypto/common", + "crypto/etype", + "crypto/rfc3961", + "crypto/rfc3962", + "crypto/rfc4757", + "crypto/rfc8009", + "gssapi", + "iana", + "iana/addrtype", + "iana/adtype", + "iana/asnAppTag", + "iana/chksumtype", + "iana/errorcode", + "iana/etypeID", + "iana/flags", + "iana/keyusage", + "iana/msgtype", + "iana/nametype", + "iana/patype", + "kadmin", + "keytab", + "krberror", + "messages", + "mstypes", + "pac", + "types" + ] + revision = "32ba44ca5b42f17a4a9f33ff4305e70665a1bc0f" + version = "v5.3.0" + +[[projects]] + name = "gopkg.in/jcmturner/rpc.v0" + packages = ["ndr"] + revision = "4480c480c9cd343b54b0acb5b62261cbd33d7adf" + version = "v0.0.2" + +[[projects]] name = "gopkg.in/src-d/go-billy.v4" packages = [ ".", "helper/chroot", "helper/polyfill", "osfs", - "util", + "util" ] - pruneopts = "" - revision = "83cf655d40b15b427014d7875d10850f96edba14" - version = "v4.2.0" + revision = "982626487c60a5252e7d0b695ca23fb0fa2fd670" + version = "v4.3.0" [[projects]] - digest = "1:d014bc54441ee96e8306ea6a767264864d2fd0898962a9dee152e992b2e672da" name = "gopkg.in/src-d/go-git.v4" packages = [ ".", "config", "internal/revision", + "internal/url", "plumbing", "plumbing/cache", "plumbing/filemode", @@ -771,31 +874,53 @@ "utils/merkletrie/filesystem", "utils/merkletrie/index", "utils/merkletrie/internal/frame", - "utils/merkletrie/noder", + "utils/merkletrie/noder" ] - pruneopts = "" - revision = "3bd5e82b2512d85becae9677fa06b5a973fd4cfb" - version = "v4.5.0" + revision = "aa6f288c256ff8baf8a7745546a9752323dc0d89" + version = "v4.11.0" [[projects]] - digest = "1:ceec7e96590fb8168f36df4795fefe17051d4b0c2acc7ec4e260d8138c4dafac" name = "gopkg.in/warnings.v0" packages = ["."] - pruneopts = "" revision = "ec4a0fea49c7b46c2aeb0b51aac55779c607e52b" version = "v0.1.2" [[projects]] - digest = "1:f0620375dd1f6251d9973b5f2596228cc8042e887cd7f827e4220bc1ce8c30e2" name = "gopkg.in/yaml.v2" packages = ["."] - pruneopts = "" - revision = "5420a8b6744d3b0345ab293f6fcba19c978f1183" - version = "v2.2.1" + revision = "51d6538a90f86fe93ac480b35f37b2be17fef232" + version = "v2.2.2" + +[[projects]] + name = "honnef.co/go/tools" + packages = [ + "arg", + "callgraph", + "callgraph/static", + "cmd/staticcheck", + "config", + "deprecated", + "functions", + "internal/sharedcheck", + "lint", + "lint/lintdsl", + "lint/lintutil", + "lint/lintutil/format", + "simple", + "ssa", + "ssa/ssautil", + "ssautil", + "staticcheck", + "staticcheck/vrp", + "stylecheck", + "unused", + "version" + ] + revision = "95959eaf5e3c41c66151dcfd91779616b84077a8" + version = "2019.1.1" [[projects]] branch = 
"release-1.12" - digest = "1:ed04c5203ecbf6358fb6a774b0ecd40ea992d6dcc42adc1d3b7cf9eceb66b6c8" name = "k8s.io/api" packages = [ "admissionregistration/v1alpha1", @@ -828,14 +953,12 @@ "settings/v1alpha1", "storage/v1", "storage/v1alpha1", - "storage/v1beta1", + "storage/v1beta1" ] - pruneopts = "" - revision = "475331a8afff5587f47d0470a93f79c60c573c03" + revision = "6db15a15d2d3874a6c3ddb2140ac9f3bc7058428" [[projects]] branch = "release-1.12" - digest = "1:5899da40e41bcc8c1df101b72954096bba9d85b763bc17efc846062ccc111c7b" name = "k8s.io/apimachinery" packages = [ "pkg/api/errors", @@ -883,14 +1006,12 @@ "pkg/watch", "third_party/forked/golang/json", "third_party/forked/golang/netutil", - "third_party/forked/golang/reflect", + "third_party/forked/golang/reflect" ] - pruneopts = "" - revision = "f71dbbc36e126f5a371b85f6cca96bc8c57db2b6" + revision = "01f179d85dbce0f2e0e4351a92394b38694b7cae" [[projects]] branch = "release-9.0" - digest = "1:77bf3d9f18ec82e08ac6c4c7e2d9d1a2ef8d16b25d3ff72fcefcf9256d751573" name = "k8s.io/client-go" packages = [ "discovery", @@ -984,6 +1105,7 @@ "tools/pager", "tools/reference", "tools/remotecommand", + "tools/watch", "transport", "transport/spdy", "util/buffer", @@ -995,14 +1117,12 @@ "util/integer", "util/jsonpath", "util/retry", - "util/workqueue", + "util/workqueue" ] - pruneopts = "" - revision = "13596e875accbd333e0b5bd5fd9462185acd9958" + revision = "77e032213d34c856222b4d4647c1c175ba8d22b9" [[projects]] branch = "release-1.12" - digest = "1:e6fffdf0dfeb0d189a7c6d735e76e7564685d3b6513f8b19d3651191cb6b084b" name = "k8s.io/code-generator" packages = [ "cmd/client-gen", @@ -1021,14 +1141,12 @@ "cmd/lister-gen", "cmd/lister-gen/args", "cmd/lister-gen/generators", - "pkg/util", + "pkg/util" ] - pruneopts = "" - revision = "3dcf91f64f638563e5106f21f50c31fa361c918d" + revision = "b1289fc74931d4b6b04bd1a259acfc88a2cb0a66" [[projects]] branch = "master" - digest = "1:74eb4556b4379d0d76a3a5ada504ff6c5ef76cd85cbf1347cb649e4c1cc8ca9e" name = "k8s.io/gengo" packages = [ "args", @@ -1037,95 +1155,34 @@ "generator", "namer", "parser", - "types", + "types" ] - pruneopts = "" - revision = "c42f3cdacc394f43077ff17e327d1b351c0304e4" + revision = "e17681d19d3ac4837a019ece36c2a0ec31ffe985" + +[[projects]] + name = "k8s.io/klog" + packages = ["."] + revision = "e531227889390a39d9533dde61f590fe9f4b0035" + version = "v0.3.0" [[projects]] branch = "master" - digest = "1:951bc2047eea6d316a17850244274554f26fd59189360e45f4056b424dadf2c1" name = "k8s.io/kube-openapi" packages = [ "pkg/common", - "pkg/util/proto", + "pkg/util/proto" ] - pruneopts = "" - revision = "e3762e86a74c878ffed47484592986685639c2cd" + revision = "6b3d3b2d5666c5912bab8b7bf26bf50f75a8f887" + +[[projects]] + branch = "master" + name = "k8s.io/utils" + packages = ["pointer"] + revision = "21c4ce38f2a793ec01e925ddc31216500183b773" [solve-meta] analyzer-name = "dep" analyzer-version = 1 - input-imports = [ - "github.com/Knetic/govaluate", - "github.com/argoproj/pkg/errors", - "github.com/argoproj/pkg/file", - "github.com/argoproj/pkg/humanize", - "github.com/argoproj/pkg/json", - "github.com/argoproj/pkg/kube/cli", - "github.com/argoproj/pkg/kube/errors", - "github.com/argoproj/pkg/s3", - "github.com/argoproj/pkg/stats", - "github.com/argoproj/pkg/strftime", - "github.com/argoproj/pkg/time", - "github.com/evanphx/json-patch", - "github.com/fsnotify/fsnotify", - "github.com/ghodss/yaml", - "github.com/go-openapi/spec", - "github.com/gorilla/websocket", - "github.com/pkg/errors", - 
"github.com/prometheus/client_golang/prometheus", - "github.com/prometheus/client_golang/prometheus/promhttp", - "github.com/sirupsen/logrus", - "github.com/spf13/cobra", - "github.com/stretchr/testify/assert", - "github.com/stretchr/testify/mock", - "github.com/stretchr/testify/suite", - "github.com/tidwall/gjson", - "github.com/valyala/fasttemplate", - "golang.org/x/crypto/ssh", - "gopkg.in/src-d/go-git.v4", - "gopkg.in/src-d/go-git.v4/plumbing/transport", - "gopkg.in/src-d/go-git.v4/plumbing/transport/http", - "gopkg.in/src-d/go-git.v4/plumbing/transport/ssh", - "k8s.io/api/core/v1", - "k8s.io/apimachinery/pkg/api/errors", - "k8s.io/apimachinery/pkg/api/resource", - "k8s.io/apimachinery/pkg/apis/meta/v1", - "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured", - "k8s.io/apimachinery/pkg/fields", - "k8s.io/apimachinery/pkg/labels", - "k8s.io/apimachinery/pkg/runtime", - "k8s.io/apimachinery/pkg/runtime/schema", - "k8s.io/apimachinery/pkg/runtime/serializer", - "k8s.io/apimachinery/pkg/selection", - "k8s.io/apimachinery/pkg/types", - "k8s.io/apimachinery/pkg/util/clock", - "k8s.io/apimachinery/pkg/util/runtime", - "k8s.io/apimachinery/pkg/util/validation", - "k8s.io/apimachinery/pkg/util/wait", - "k8s.io/apimachinery/pkg/watch", - "k8s.io/client-go/discovery", - "k8s.io/client-go/discovery/fake", - "k8s.io/client-go/dynamic", - "k8s.io/client-go/informers/internalinterfaces", - "k8s.io/client-go/kubernetes", - "k8s.io/client-go/kubernetes/fake", - "k8s.io/client-go/plugin/pkg/client/auth/azure", - "k8s.io/client-go/plugin/pkg/client/auth/gcp", - "k8s.io/client-go/plugin/pkg/client/auth/oidc", - "k8s.io/client-go/rest", - "k8s.io/client-go/testing", - "k8s.io/client-go/tools/cache", - "k8s.io/client-go/tools/clientcmd", - "k8s.io/client-go/tools/remotecommand", - "k8s.io/client-go/util/flowcontrol", - "k8s.io/client-go/util/workqueue", - "k8s.io/code-generator/cmd/client-gen", - "k8s.io/code-generator/cmd/deepcopy-gen", - "k8s.io/code-generator/cmd/informer-gen", - "k8s.io/code-generator/cmd/lister-gen", - "k8s.io/kube-openapi/pkg/common", - ] + inputs-digest = "f32bcd98041871575601108af8703d15a31ac8b6c27338818fd2cb0033d9b01c" solver-name = "gps-cdcl" solver-version = 1 diff --git a/Gopkg.toml b/Gopkg.toml index facaf7e03133..be42e0bd7974 100644 --- a/Gopkg.toml +++ b/Gopkg.toml @@ -55,3 +55,18 @@ required = [ name = "github.com/Azure/go-autorest" revision = "1ff28809256a84bb6966640ff3d0371af82ccba4" +[[constraint]] + name = "github.com/colinmarc/hdfs" + revision = "48eb8d6c34a97ffc73b406356f0f2e1c569b42a5" + +[[constraint]] + name = "gopkg.in/jcmturner/gokrb5.v5" + version = "5.3.0" + +[[constraint]] + name = "cloud.google.com/go" + version = "0.26.0" + +[[constraint]] + name = "google.golang.org/api" + version = "0.3.2" diff --git a/Jenkinsfile b/Jenkinsfile new file mode 100644 index 000000000000..5314105581dd --- /dev/null +++ b/Jenkinsfile @@ -0,0 +1,95 @@ +#!groovy + +def GIT_BRANCH = '' +def IMAGE_REF = '' +def IMAGE_TAG = '' +def NOTIFIER_IMAGE = 'argo-rest-notifier' +def VERSION = '' +def NAMESPACE = '' + +def runUtilityCommand(buildCommand) { + // Run an arbitrary command inside the docker builder image + sh "docker run -v ${pwd()}/dist:/go/src/github.com/cyrusbiotechnology/argo/dist --rm builder-base:latest ${buildCommand}" +} + +pipeline { + agent any + stages { + stage('Checkout') { + steps { + checkout scm + sh 'git submodule update --init --recursive' + sh 'git rev-parse HEAD > git-sha.txt' + script { + GIT_COMMIT = readFile 'git-sha.txt' + GIT_SHA = git.getCommit() + 
IMAGE_REF=docker2.imageRef() + IMAGE_TAG=IMAGE_REF.split(':').last() + GIT_BRANCH = env.BRANCH_NAME.replace('/', '').replace('_', '').replace('-', '') + + def baseVersionTag = readFile "VERSION" + baseVersionTag = baseVersionTag.trim(); + VERSION = "${baseVersionTag}-cyrus-${GIT_BRANCH}" + + println "Version tag for this build is ${VERSION}" + } + } + } + + stage('build utility container') { + steps { + sh "docker build -t builder-base --target builder-base ." + } + } + + + stage('run tests') { + steps { + runUtilityCommand("go test ./...") + } + } + + + stage('build controller') { + steps { + sh "docker build -t workflow-controller:${VERSION} --target workflow-controller ." + } + } + + stage('build executor') { + steps { + sh "docker build -t argoexec:${VERSION} --target argoexec ." + } + } + + + + + stage('build Linux and MacOS CLIs') { + steps { + runUtilityCommand("make cli CGO_ENABLED=0 LDFLAGS='-extldflags \"-static\"' ARGO_CLI_NAME=argo-linux-amd64") + runUtilityCommand("make cli CGO_ENABLED=0 LDFLAGS='-extldflags \"-static\"' ARGO_CLI_NAME=argo-darwin-amd64") + } + } + + stage('push containers to GCR') { + + steps { + script { docker2.push("workflow-controller:${VERSION}", ["workflow-controller:${VERSION}"]) } + script { docker2.push("argoexec:${VERSION}", ["argoexec:${VERSION}"]) } + + } + + } + + stage('push CLI to artifactory') { + steps { + withCredentials([usernamePassword(credentialsId: 'Artifactory', usernameVariable: 'ARTI_NAME', passwordVariable: 'ARTI_PASS')]) { + runUtilityCommand("curl -u ${ARTI_NAME}:${ARTI_PASS} -T /go/src/github.com/cyrusbiotechnology/argo/dist/argo-darwin-amd64 https://cyrusbio.jfrog.io/cyrusbio/argo-cli/argo-mac-${VERSION}") + runUtilityCommand("curl -u ${ARTI_NAME}:${ARTI_PASS} -T /go/src/github.com/cyrusbiotechnology/argo/dist/argo-linux-amd64 https://cyrusbio.jfrog.io/cyrusbio/argo-cli/argo-linux-${VERSION}") + } + } + } + + } + } diff --git a/Makefile b/Makefile index 73d2fa25338b..db568e218a91 100644 --- a/Makefile +++ b/Makefile @@ -1,4 +1,4 @@ -PACKAGE=github.com/argoproj/argo +PACKAGE=github.com/cyrusbiotechnology/argo CURRENT_DIR=$(shell pwd) DIST_DIR=${CURRENT_DIR}/dist ARGO_CLI_NAME=argo @@ -9,13 +9,13 @@ GIT_COMMIT=$(shell git rev-parse HEAD) GIT_TAG=$(shell if [ -z "`git status --porcelain`" ]; then git describe --exact-match --tags HEAD 2>/dev/null; fi) GIT_TREE_STATE=$(shell if [ -z "`git status --porcelain`" ]; then echo "clean" ; else echo "dirty"; fi) -BUILDER_IMAGE=argo-builder -# NOTE: the volume mount of ${DIST_DIR}/pkg below is optional and serves only -# to speed up subsequent builds by caching ${GOPATH}/pkg between builds. 
-BUILDER_CMD=docker run --rm \ - -v ${CURRENT_DIR}:/root/go/src/${PACKAGE} \ - -v ${DIST_DIR}/pkg:/root/go/pkg \ - -w /root/go/src/${PACKAGE} ${BUILDER_IMAGE} +# docker image publishing options +DOCKER_PUSH=false +IMAGE_TAG=latest +# perform static compilation +STATIC_BUILD=true +# build development images +DEV_IMAGE=false override LDFLAGS += \ -X ${PACKAGE}.version=${VERSION} \ @@ -23,22 +23,16 @@ override LDFLAGS += \ -X ${PACKAGE}.gitCommit=${GIT_COMMIT} \ -X ${PACKAGE}.gitTreeState=${GIT_TREE_STATE} -# docker image publishing options -DOCKER_PUSH=false -IMAGE_TAG=latest +ifeq (${STATIC_BUILD}, true) +override LDFLAGS += -extldflags "-static" +endif ifneq (${GIT_TAG},) IMAGE_TAG=${GIT_TAG} override LDFLAGS += -X ${PACKAGE}.gitTag=${GIT_TAG} endif -ifneq (${IMAGE_NAMESPACE},) -override LDFLAGS += -X ${PACKAGE}/cmd/argo/commands.imageNamespace=${IMAGE_NAMESPACE} -endif -ifneq (${IMAGE_TAG},) -override LDFLAGS += -X ${PACKAGE}/cmd/argo/commands.imageTag=${IMAGE_TAG} -endif -ifeq (${DOCKER_PUSH},true) +ifeq (${DOCKER_PUSH}, true) ifndef IMAGE_NAMESPACE $(error IMAGE_NAMESPACE must be set to push images (e.g. IMAGE_NAMESPACE=argoproj)) endif @@ -50,72 +44,85 @@ endif # Build the project .PHONY: all -all: cli cli-image controller-image executor-image +all: cli controller-image executor-image -.PHONY: builder -builder: - docker build -t ${BUILDER_IMAGE} -f Dockerfile-builder . +.PHONY: builder-image +builder-image: + docker build -t $(IMAGE_PREFIX)argo-ci-builder:$(IMAGE_TAG) --target builder . .PHONY: cli cli: - CGO_ENABLED=0 go build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/${ARGO_CLI_NAME} ./cmd/argo + go build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/${ARGO_CLI_NAME} ./cmd/argo + +.PHONY: cli-linux-amd64 +cli-linux-amd64: + CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argo-linux-amd64 ./cmd/argo + +.PHONY: cli-linux-ppc64le +cli-linux-ppc64le: + CGO_ENABLED=0 GOOS=linux GOARCH=ppc64le go build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argo-linux-ppc64le ./cmd/argo + +.PHONY: cli-linux-s390x +cli-linux-s390x: + CGO_ENABLED=0 GOOS=linux GOARCH=s390x go build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argo-linux-s390x ./cmd/argo .PHONY: cli-linux -cli-linux: builder - ${BUILDER_CMD} make cli \ - CGO_ENABLED=0 \ - IMAGE_TAG=$(IMAGE_TAG) \ - IMAGE_NAMESPACE=$(IMAGE_NAMESPACE) \ - LDFLAGS='-extldflags "-static"' \ - ARGO_CLI_NAME=argo-linux-amd64 +cli-linux: cli-linux-amd64 cli-linux-ppc64le cli-linux-s390x .PHONY: cli-darwin -cli-darwin: builder - ${BUILDER_CMD} make cli \ - GOOS=darwin \ - IMAGE_TAG=$(IMAGE_TAG) \ - IMAGE_NAMESPACE=$(IMAGE_NAMESPACE) \ - ARGO_CLI_NAME=argo-darwin-amd64 +cli-darwin: + CGO_ENABLED=0 GOOS=darwin go build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argo-darwin-amd64 ./cmd/argo .PHONY: cli-windows -cli-windows: builder - ${BUILDER_CMD} make cli \ - GOARCH=amd64 \ - GOOS=windows \ - IMAGE_TAG=$(IMAGE_TAG) \ - IMAGE_NAMESPACE=$(IMAGE_NAMESPACE) \ - LDFLAGS='-extldflags "-static"' \ - ARGO_CLI_NAME=argo-windows-amd64 - -.PHONY: controller -controller: - go build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/workflow-controller ./cmd/workflow-controller +cli-windows: + CGO_ENABLED=0 GOARCH=amd64 GOOS=windows go build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argo-windows-amd64 ./cmd/argo .PHONY: cli-image -cli-image: cli-linux - docker build -t $(IMAGE_PREFIX)argocli:$(IMAGE_TAG) -f Dockerfile-cli . +cli-image: + docker build -t $(IMAGE_PREFIX)argocli:$(IMAGE_TAG) --target argocli . 
@if [ "$(DOCKER_PUSH)" = "true" ] ; then docker push $(IMAGE_PREFIX)argocli:$(IMAGE_TAG) ; fi -.PHONY: controller-linux -controller-linux: builder - ${BUILDER_CMD} make controller +.PHONY: controller +controller: + CGO_ENABLED=0 go build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/workflow-controller ./cmd/workflow-controller .PHONY: controller-image -controller-image: controller-linux - docker build -t $(IMAGE_PREFIX)workflow-controller:$(IMAGE_TAG) -f Dockerfile-workflow-controller . +controller-image: +ifeq ($(DEV_IMAGE), true) + CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -v -i -ldflags '${LDFLAGS}' -o workflow-controller ./cmd/workflow-controller + docker build -t $(IMAGE_PREFIX)workflow-controller:$(IMAGE_TAG) -f Dockerfile.workflow-controller-dev . + rm -f workflow-controller +else + docker build -t $(IMAGE_PREFIX)workflow-controller:$(IMAGE_TAG) --target workflow-controller . +endif @if [ "$(DOCKER_PUSH)" = "true" ] ; then docker push $(IMAGE_PREFIX)workflow-controller:$(IMAGE_TAG) ; fi .PHONY: executor executor: go build -v -i -ldflags '${LDFLAGS}' -o ${DIST_DIR}/argoexec ./cmd/argoexec -.PHONY: executor-linux -executor-linux: builder - ${BUILDER_CMD} make executor - +.PHONY: executor-base-image +executor-base-image: + docker build -t argoexec-base --target argoexec-base . + +# The DEV_IMAGE versions of controller-image and executor-image are speed-optimized development +# builds of workflow-controller and argoexec images respectively. They allow for faster image builds +# by re-using the golang build cache of the desktop environment. Ideally, we would not need extra +# Dockerfiles for these, and the targets would be defined as new targets in the main Dockerfile, but +# intelligent skipping of docker build stages requires DOCKER_BUILDKIT=1 to be enabled, which not all +# docker daemons support (including the daemon currently used by minikube). +# TODO: move these targets to the main Dockerfile once DOCKER_BUILDKIT=1 is more pervasive. +# NOTE: have to output outside of the dist directory since dist is under .dockerignore .PHONY: executor-image -executor-image: executor-linux - docker build -t $(IMAGE_PREFIX)argoexec:$(IMAGE_TAG) -f Dockerfile-argoexec . +ifeq ($(DEV_IMAGE), true) +executor-image: executor-base-image + CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -v -i -ldflags '${LDFLAGS}' -o argoexec ./cmd/argoexec + docker build -t $(IMAGE_PREFIX)argoexec:$(IMAGE_TAG) -f Dockerfile.argoexec-dev . + rm -f argoexec +else +executor-image: + docker build -t $(IMAGE_PREFIX)argoexec:$(IMAGE_TAG) --target argoexec . +endif @if [ "$(DOCKER_PUSH)" = "true" ] ; then docker push $(IMAGE_PREFIX)argoexec:$(IMAGE_TAG) ; fi .PHONY: lint @@ -126,8 +133,8 @@ lint: test: go test ./... 
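A minimal usage sketch for the Makefile targets above, assuming a local Docker daemon, the default IMAGE_PREFIX, and a placeholder IMAGE_NAMESPACE (myrepo) when pushing:

    make controller-image executor-image DEV_IMAGE=true IMAGE_TAG=dev   # fast development images that reuse the host Go build cache
    make cli-linux cli-darwin cli-windows                               # cross-compile the CLI binaries into ./dist
    make executor-image DOCKER_PUSH=true IMAGE_NAMESPACE=myrepo         # IMAGE_NAMESPACE must be set whenever DOCKER_PUSH=true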
-.PHONY: update-codegen -update-codegen: +.PHONY: codegen +codegen: ./hack/update-codegen.sh ./hack/update-openapigen.sh go run ./hack/gen-openapi-spec/main.go ${VERSION} > ${CURRENT_DIR}/api/openapi-spec/swagger.json @@ -140,8 +147,8 @@ verify-codegen: go run ./hack/gen-openapi-spec/main.go ${VERSION} > ${CURRENT_DIR}/dist/swagger.json diff ${CURRENT_DIR}/dist/swagger.json ${CURRENT_DIR}/api/openapi-spec/swagger.json -.PHONY: update-manifests -update-manifests: +.PHONY: manifests +manifests: ./hack/update-manifests.sh .PHONY: clean @@ -152,10 +159,22 @@ clean: precheckin: test lint verify-codegen .PHONY: release-precheck -release-precheck: precheckin +release-precheck: manifests codegen precheckin @if [ "$(GIT_TREE_STATE)" != "clean" ]; then echo 'git tree state is $(GIT_TREE_STATE)' ; exit 1; fi @if [ -z "$(GIT_TAG)" ]; then echo 'commit must be tagged to perform release' ; exit 1; fi @if [ "$(GIT_TAG)" != "v$(VERSION)" ]; then echo 'git tag ($(GIT_TAG)) does not match VERSION (v$(VERSION))'; exit 1; fi +.PHONY: release-clis +release-clis: cli-image + docker build --iidfile /tmp/argo-cli-build --target argo-build --build-arg MAKE_TARGET="cli-darwin cli-windows" . + docker create --name tmp-cli `cat /tmp/argo-cli-build` + mkdir -p ${DIST_DIR} + docker cp tmp-cli:/go/src/github.com/cyrusbiotechnology/argo/dist/argo-darwin-amd64 ${DIST_DIR}/argo-darwin-amd64 + docker cp tmp-cli:/go/src/github.com/cyrusbiotechnology/argo/dist/argo-windows-amd64 ${DIST_DIR}/argo-windows-amd64 + docker rm tmp-cli + docker create --name tmp-cli $(IMAGE_PREFIX)argocli:$(IMAGE_TAG) + docker cp tmp-cli:/bin/argo ${DIST_DIR}/argo-linux-amd64 + docker rm tmp-cli + .PHONY: release -release: release-precheck controller-image cli-darwin cli-linux cli-windows executor-image cli-image +release: release-precheck controller-image executor-image cli-image release-clis diff --git a/OWNERS b/OWNERS index 244615e3a069..585b9d1aa85c 100644 --- a/OWNERS +++ b/OWNERS @@ -5,3 +5,6 @@ approvers: - alexmt - edlee2121 - jessesuen + +reviewers: +- dtaniwaki diff --git a/README.md b/README.md index 06adc1001b90..1f3e8197e9aa 100644 --- a/README.md +++ b/README.md @@ -1,29 +1,47 @@ -# Argo - The Workflow Engine for Kubernetes +[![slack](https://img.shields.io/badge/slack-argoproj-brightgreen.svg?logo=slack)](https://argoproj.github.io/community/join-slack) + +# Argoproj - Get stuff done with Kubernetes ![Argo Image](argo.png) ## News -We are thrilled that BlackRock has developed an eventing framework for Argo and has decided to contribute it to the Argo Community. Please check out the new project and try [Argo Events](https://github.com/argoproj/argo-events)! +KubeCon 2018 in Seattle was the biggest KubeCon yet with 8000 developers attending. We connected with many existing and new Argoproj users and contributors, and gave away a lot of Argo T-shirts at our booth sponsored by Intuit! + +We were also super excited to see KubeCon presentations about Argo by Argo developers, users and partners. +* [CI/CD in Light Speed with K8s and Argo CD](https://www.youtube.com/watch?v=OdzH82VpMwI&feature=youtu.be) + * How Intuit uses Argo CD. +* [Automating Research Workflows at BlackRock](https://www.youtube.com/watch?v=ZK510prml8o&t=0s&index=169&list=PLj6h78yzYM2PZf9eA7bhWnIh_mK1vyOfU) + * Why BlackRock created Argo Events and how they use it. 
+* [Machine Learning as Code](https://www.youtube.com/watch?v=VXrGp5er1ZE&t=0s&index=135&list=PLj6h78yzYM2PZf9eA7bhWnIh_mK1vyOfU) + * How Kubeflow uses Argo Workflows as its core workflow engine and Argo CD to declaratively deploy ML pipelines and models. + +If you actively use Argo in your organization and your organization would be interested in participating in the Argo Community, please ask a representative to contact saradhi_sreegiriraju@intuit.com for additional information. -If you actively use Argo in your organization and believe that your organization may be interested in actively participating in the Argo Community, please ask a representative to contact saradhi_sreegiriraju@intuit.com for additional information. +## What is Argoproj? -## What is Argo? -Argo is an open source container-native workflow engine for getting work done on Kubernetes. Argo is implemented as a Kubernetes CRD (Custom Resource Definition). +Argoproj is a collection of tools for getting work done with Kubernetes. +* [Argo Workflows](https://github.com/cyrusbiotechnology/argo) - Container-native Workflow Engine +* [Argo CD](https://github.com/cyrusbiotechnology/argo-cd) - Declarative GitOps Continuous Delivery +* [Argo Events](https://github.com/cyrusbiotechnology/argo-events) - Event-based Dependency Manager + +## What is Argo Workflows? +Argo Workflows is an open source container-native workflow engine for orchestrating parallel jobs on Kubernetes. Argo Workflows is implemented as a Kubernetes CRD (Custom Resource Definition). * Define workflows where each step in the workflow is a container. * Model multi-step workflows as a sequence of tasks or capture the dependencies between tasks using a graph (DAG). -* Easily run compute intensive jobs for machine learning or data processing in a fraction of the time using Argo workflows on Kubernetes. +* Easily run compute intensive jobs for machine learning or data processing in a fraction of the time using Argo Workflows on Kubernetes. * Run CI/CD pipelines natively on Kubernetes without configuring complex software development products. -## Why Argo? -* Argo is designed from the ground up for containers without the overhead and limitations of legacy VM and server-based environments. -* Argo is cloud agnostic and can run on any kubernetes cluster. -* Argo with Kubernetes puts a cloud-scale supercomputer at your fingertips. +## Why Argo Workflows? +* Designed from the ground up for containers without the overhead and limitations of legacy VM and server-based environments. +* Cloud agnostic and can run on any Kubernetes cluster. +* Easily orchestrate highly parallel jobs on Kubernetes. +* Argo Workflows puts a cloud-scale supercomputer at your fingertips! ## Documentation * [Get started here](demo.md) -* [How to write Argo workflow specs](examples/README.md) +* [How to write Argo Workflow specs](examples/README.md) * [How to configure your artifact repository](ARTIFACT_REPO.md) ## Features @@ -53,23 +71,33 @@ As the Argo Community grows, we'd like to keep track of our users. Please send a Currently **officially** using Argo: +1. [Admiralty](https://admiralty.io/) 1. [Adobe](https://www.adobe.com/) +1. [Alibaba Cloud](https://www.alibabacloud.com/about) 1. [BlackRock](https://www.blackrock.com/) +1. [Canva](https://www.canva.com/) 1. [CoreFiling](https://www.corefiling.com/) 1. [Cratejoy](https://www.cratejoy.com/) 1. [Cyrus Biotechnology](https://cyrusbio.com/) 1. [Datadog](https://www.datadoghq.com/) +1. [Equinor](https://www.equinor.com/) +1. 
[Gardener](https://gardener.cloud/) 1. [Gladly](https://gladly.com/) +1. [GitHub](https://github.com/) 1. [Google](https://www.google.com/intl/en/about/our-company/) 1. [Interline Technologies](https://www.interline.io/blog/scaling-openstreetmap-data-workflows/) 1. [Intuit](https://www.intuit.com/) +1. [Karius](https://www.kariusdx.com/) 1. [KintoHub](https://www.kintohub.com/) 1. [Localytics](https://www.localytics.com/) 1. [NVIDIA](https://www.nvidia.com/) +1. [Preferred Networks](https://www.preferred-networks.jp/en/) +1. [Quantibio](http://quantibio.com/us/en/) 1. [SAP Hybris](https://cx.sap.com/) 1. [Styra](https://www.styra.com/) ## Community Blogs and Presentations +* [Running Argo Workflows Across Multiple Kubernetes Clusters](https://admiralty.io/blog/running-argo-workflows-across-multiple-kubernetes-clusters/) * [Open Source Model Management Roundup: Polyaxon, Argo, and Seldon](https://www.anaconda.com/blog/developer-blog/open-source-model-management-roundup-polyaxon-argo-and-seldon/) * [Producing 200 OpenStreetMap extracts in 35 minutes using a scalable data workflow](https://www.interline.io/blog/scaling-openstreetmap-data-workflows/) * [Argo integration review](http://dev.matt.hillsdon.net/2018/03/24/argo-integration-review.html) @@ -78,6 +106,5 @@ Currently **officially** using Argo: ## Project Resources * Argo GitHub: https://github.com/argoproj -* Argo Slack: [click here to join](https://join.slack.com/t/argoproj/shared_invite/enQtMzExODU3MzIyNjYzLTA5MTFjNjI0Nzg3NzNiMDZiNmRiODM4Y2M1NWQxOGYzMzZkNTc1YWVkYTZkNzdlNmYyZjMxNWI3NjY2MDc1MzI) * Argo website: https://argoproj.github.io/ -* Argo forum: https://groups.google.com/forum/#!forum/argoproj +* Argo Slack: [click here to join](https://join.slack.com/t/argoproj/shared_invite/enQtMzExODU3MzIyNjYzLTA5MTFjNjI0Nzg3NzNiMDZiNmRiODM4Y2M1NWQxOGYzMzZkNTc1YWVkYTZkNzdlNmYyZjMxNWI3NjY2MDc1MzI) diff --git a/ROADMAP.md b/ROADMAP.md index fbc7e0ddf529..cf31798b54a4 100644 --- a/ROADMAP.md +++ b/ROADMAP.md @@ -1,14 +1,13 @@ # Roadmap -## v2.2 - -### Proposed Items - -The following are candidate items for v2.2 release - -* Workflow composability - support for Jsonnet in CLI -* Queuing / Admission control - ability to limit number of concurrent workflows -* Scheduling - investigate k8s PriorityClasses and re-use in workflows -* Persistence - workflow history/state -* `argo run` to run workflows against clusters without a controller - #794 -* UI – filtering to improve performance +## v2.4 +* Persistence - support offloading of workflow state into database layer +* Large workflow support (enabled by persistence feature) +* Backlog and bug fixes + +## Proposed Items +* Argo API server +* Best effort workflow steps +* Template level finalizers +* Artifact loop aggregation +* Pod reclamation controls diff --git a/VERSION b/VERSION index c043eea7767e..914ec967116c 100644 --- a/VERSION +++ b/VERSION @@ -1 +1 @@ -2.2.1 +2.6.0 \ No newline at end of file diff --git a/api/openapi-spec/swagger.json b/api/openapi-spec/swagger.json index 92b599d03dcd..258274021262 100644 --- a/api/openapi-spec/swagger.json +++ b/api/openapi-spec/swagger.json @@ -2,7 +2,7 @@ "swagger": "2.0", "info": { "title": "Argo", - "version": "v2.2.1" + "version": "v2.6.0" }, "paths": {}, "definitions": { @@ -58,6 +58,10 @@ "description": "From allows an artifact to reference an artifact from a previous step", "type": "string" }, + "gcs": { + "description": "GCS contains GCS artifact location details", + "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.GCSArtifact" + 
}, "git": { "description": "Git contains git artifact location details", "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.GitArtifact" @@ -66,6 +70,10 @@ "description": "GlobalName exports an output artifact to the global scope, making it available as '{{workflow.outputs.artifacts.XXXX}} and in workflow.status.outputs.artifacts", "type": "string" }, + "hdfs": { + "description": "HDFS contains HDFS artifact location details", + "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.HDFSArtifact" + }, "http": { "description": "HTTP contains HTTP artifact location details", "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.HTTPArtifact" @@ -79,6 +87,10 @@ "description": "name of the artifact. must be unique within a template's inputs/outputs.", "type": "string" }, + "optional": { + "description": "Make Artifacts optional, if Artifacts doesn't generate or exist", + "type": "boolean" + }, "path": { "description": "Path is the container path to the artifact", "type": "string" @@ -104,10 +116,18 @@ "description": "Artifactory contains artifactory artifact location details", "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.ArtifactoryArtifact" }, + "gcs": { + "description": "GCS contains GCS artifact location details", + "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.GCSArtifact" + }, "git": { "description": "Git contains git artifact location details", "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.GitArtifact" }, + "hdfs": { + "description": "HDFS contains HDFS artifact location details", + "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.HDFSArtifact" + }, "http": { "description": "HTTP contains HTTP artifact location details", "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.HTTPArtifact" @@ -155,6 +175,17 @@ } } }, + "io.argoproj.workflow.v1alpha1.ContinueOn": { + "description": "ContinueOn defines if a workflow should continue even if a task or step fails/errors. It can be specified if the workflow should continue when the pod errors, fails or both.", + "properties": { + "error": { + "type": "boolean" + }, + "failed": { + "type": "boolean" + } + } + }, "io.argoproj.workflow.v1alpha1.DAGTask": { "description": "DAGTask represents a node in the graph during DAG execution", "required": [ @@ -166,6 +197,10 @@ "description": "Arguments are the parameter and artifact arguments to the template", "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.Arguments" }, + "continueOn": { + "description": "ContinueOn makes argo to proceed with the following step even if this step fails. 
Errors and Failed states can be specified", + "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.ContinueOn" + }, "dependencies": { "description": "Dependencies are name of other targets which this depends on", "type": "array", @@ -221,12 +256,96 @@ } } }, + "io.argoproj.workflow.v1alpha1.ExceptionCondition": { + "description": "ExceptionCondition is a container for defining an error or warning rule", + "required": [ + "name" + ], + "properties": { + "message": { + "type": "string" + }, + "name": { + "type": "string" + }, + "patternMatched": { + "type": "string" + }, + "patternUnmatched": { + "type": "string" + }, + "source": { + "type": "string" + } + } + }, + "io.argoproj.workflow.v1alpha1.ExceptionResult": { + "description": "ExceptionResult contains the results on an extended error or warning condition evaluation", + "required": [ + "name", + "message", + "podName", + "stepName" + ], + "properties": { + "message": { + "type": "string" + }, + "name": { + "type": "string" + }, + "podName": { + "type": "string" + }, + "stepName": { + "type": "string" + } + } + }, + "io.argoproj.workflow.v1alpha1.GCSArtifact": { + "description": "GCSArtifact is the location of a GCS artifact", + "required": [ + "bucket", + "credentialsSecret", + "key" + ], + "properties": { + "bucket": { + "type": "string" + }, + "credentialsSecret": { + "$ref": "#/definitions/io.k8s.api.core.v1.SecretKeySelector" + }, + "key": { + "type": "string" + } + } + }, + "io.argoproj.workflow.v1alpha1.GCSBucket": { + "description": "GCSBucket contains the access information required for acting with a GCS bucket", + "required": [ + "bucket", + "credentialsSecret" + ], + "properties": { + "bucket": { + "type": "string" + }, + "credentialsSecret": { + "$ref": "#/definitions/io.k8s.api.core.v1.SecretKeySelector" + } + } + }, "io.argoproj.workflow.v1alpha1.GitArtifact": { "description": "GitArtifact is the location of an git artifact", "required": [ "repo" ], "properties": { + "insecureIgnoreHostKey": { + "description": "InsecureIgnoreHostKey disables SSH strict host key checking during git clone", + "type": "boolean" + }, "passwordSecret": { "description": "PasswordSecret is the secret selector to the repository password", "$ref": "#/definitions/io.k8s.api.core.v1.SecretKeySelector" @@ -249,6 +368,130 @@ } } }, + "io.argoproj.workflow.v1alpha1.HDFSArtifact": { + "description": "HDFSArtifact is the location of an HDFS artifact", + "required": [ + "addresses", + "path" + ], + "properties": { + "addresses": { + "description": "Addresses is accessible addresses of HDFS name nodes", + "type": "array", + "items": { + "type": "string" + } + }, + "force": { + "description": "Force copies a file forcibly even if it exists (default: false)", + "type": "boolean" + }, + "hdfsUser": { + "description": "HDFSUser is the user to access HDFS file system. 
It is ignored if either ccache or keytab is used.", + "type": "string" + }, + "krbCCacheSecret": { + "description": "KrbCCacheSecret is the secret selector for Kerberos ccache Either ccache or keytab can be set to use Kerberos.", + "$ref": "#/definitions/io.k8s.api.core.v1.SecretKeySelector" + }, + "krbConfigConfigMap": { + "description": "KrbConfig is the configmap selector for Kerberos config as string It must be set if either ccache or keytab is used.", + "$ref": "#/definitions/io.k8s.api.core.v1.ConfigMapKeySelector" + }, + "krbKeytabSecret": { + "description": "KrbKeytabSecret is the secret selector for Kerberos keytab Either ccache or keytab can be set to use Kerberos.", + "$ref": "#/definitions/io.k8s.api.core.v1.SecretKeySelector" + }, + "krbRealm": { + "description": "KrbRealm is the Kerberos realm used with Kerberos keytab It must be set if keytab is used.", + "type": "string" + }, + "krbServicePrincipalName": { + "description": "KrbServicePrincipalName is the principal name of Kerberos service It must be set if either ccache or keytab is used.", + "type": "string" + }, + "krbUsername": { + "description": "KrbUsername is the Kerberos username used with Kerberos keytab It must be set if keytab is used.", + "type": "string" + }, + "path": { + "description": "Path is a file path in HDFS", + "type": "string" + } + } + }, + "io.argoproj.workflow.v1alpha1.HDFSConfig": { + "description": "HDFSConfig is configurations for HDFS", + "required": [ + "addresses" + ], + "properties": { + "addresses": { + "description": "Addresses is accessible addresses of HDFS name nodes", + "type": "array", + "items": { + "type": "string" + } + }, + "hdfsUser": { + "description": "HDFSUser is the user to access HDFS file system. It is ignored if either ccache or keytab is used.", + "type": "string" + }, + "krbCCacheSecret": { + "description": "KrbCCacheSecret is the secret selector for Kerberos ccache Either ccache or keytab can be set to use Kerberos.", + "$ref": "#/definitions/io.k8s.api.core.v1.SecretKeySelector" + }, + "krbConfigConfigMap": { + "description": "KrbConfig is the configmap selector for Kerberos config as string It must be set if either ccache or keytab is used.", + "$ref": "#/definitions/io.k8s.api.core.v1.ConfigMapKeySelector" + }, + "krbKeytabSecret": { + "description": "KrbKeytabSecret is the secret selector for Kerberos keytab Either ccache or keytab can be set to use Kerberos.", + "$ref": "#/definitions/io.k8s.api.core.v1.SecretKeySelector" + }, + "krbRealm": { + "description": "KrbRealm is the Kerberos realm used with Kerberos keytab It must be set if keytab is used.", + "type": "string" + }, + "krbServicePrincipalName": { + "description": "KrbServicePrincipalName is the principal name of Kerberos service It must be set if either ccache or keytab is used.", + "type": "string" + }, + "krbUsername": { + "description": "KrbUsername is the Kerberos username used with Kerberos keytab It must be set if keytab is used.", + "type": "string" + } + } + }, + "io.argoproj.workflow.v1alpha1.HDFSKrbConfig": { + "description": "HDFSKrbConfig is auth configurations for Kerberos", + "properties": { + "krbCCacheSecret": { + "description": "KrbCCacheSecret is the secret selector for Kerberos ccache Either ccache or keytab can be set to use Kerberos.", + "$ref": "#/definitions/io.k8s.api.core.v1.SecretKeySelector" + }, + "krbConfigConfigMap": { + "description": "KrbConfig is the configmap selector for Kerberos config as string It must be set if either ccache or keytab is used.", + "$ref": 
"#/definitions/io.k8s.api.core.v1.ConfigMapKeySelector" + }, + "krbKeytabSecret": { + "description": "KrbKeytabSecret is the secret selector for Kerberos keytab Either ccache or keytab can be set to use Kerberos.", + "$ref": "#/definitions/io.k8s.api.core.v1.SecretKeySelector" + }, + "krbRealm": { + "description": "KrbRealm is the Kerberos realm used with Kerberos keytab It must be set if keytab is used.", + "type": "string" + }, + "krbServicePrincipalName": { + "description": "KrbServicePrincipalName is the principal name of Kerberos service It must be set if either ccache or keytab is used.", + "type": "string" + }, + "krbUsername": { + "description": "KrbUsername is the Kerberos username used with Kerberos keytab It must be set if keytab is used.", + "type": "string" + } + } + }, "io.argoproj.workflow.v1alpha1.HTTPArtifact": { "description": "HTTPArtifact allows an file served on HTTP to be placed as an input artifact in a container", "required": [ @@ -387,6 +630,10 @@ "description": "Manifest contains the kubernetes manifest", "type": "string" }, + "mergeStrategy": { + "description": "MergeStrategy is the strategy used to merge a patch. It defaults to \"strategic\" Must be one of: strategic, merge, json", + "type": "string" + }, "successCondition": { "description": "SuccessCondition is a label selector expression which describes the conditions of the k8s resource in which it is acceptable to proceed to the following step", "type": "string" @@ -549,7 +796,7 @@ "$ref": "#/definitions/io.k8s.api.core.v1.Probe" }, "resources": { - "description": "Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/", + "description": "Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources", "$ref": "#/definitions/io.k8s.api.core.v1.ResourceRequirements" }, "securityContext": { @@ -625,8 +872,154 @@ } } }, - "io.argoproj.workflow.v1alpha1.Sidecar": { - "description": "Sidecar is a container which runs alongside the main container", + "io.argoproj.workflow.v1alpha1.SuspendTemplate": { + "description": "SuspendTemplate is a template subtype to suspend a workflow at a predetermined point in time" + }, + "io.argoproj.workflow.v1alpha1.TarStrategy": { + "description": "TarStrategy will tar and gzip the file or directory when saving" + }, + "io.argoproj.workflow.v1alpha1.Template": { + "description": "Template is a reusable and composable unit of execution in a workflow", + "required": [ + "name" + ], + "properties": { + "activeDeadlineSeconds": { + "description": "Optional duration in seconds relative to the StartTime that the pod may be active on a node before the system actively tries to terminate the pod; value must be positive integer This field is only applicable to container and script templates.", + "type": "integer", + "format": "int64" + }, + "affinity": { + "description": "Affinity sets the pod's scheduling constraints Overrides the affinity set at the workflow level (if any)", + "$ref": "#/definitions/io.k8s.api.core.v1.Affinity" + }, + "archiveLocation": { + "description": "Location in which all files related to the step will be stored (logs, artifacts, etc...). Can be overridden by individual items in Outputs. 
If omitted, will use the default artifact repository location configured in the controller, appended with the \u003cworkflowname\u003e/\u003cnodename\u003e in the key.", + "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.ArtifactLocation" + }, + "container": { + "description": "Container is the main container image to run in the pod", + "$ref": "#/definitions/io.k8s.api.core.v1.Container" + }, + "daemon": { + "description": "Deamon will allow a workflow to proceed to the next step so long as the container reaches readiness", + "type": "boolean" + }, + "dag": { + "description": "DAG template subtype which runs a DAG", + "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.DAGTemplate" + }, + "errors": { + "type": "array", + "items": { + "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.ExceptionCondition" + } + }, + "initContainers": { + "description": "InitContainers is a list of containers which run before the main container.", + "type": "array", + "items": { + "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.UserContainer" + } + }, + "inputs": { + "description": "Inputs describe what inputs parameters and artifacts are supplied to this template", + "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.Inputs" + }, + "metadata": { + "description": "Metdata sets the pods's metadata, i.e. annotations and labels", + "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.Metadata" + }, + "name": { + "description": "Name is the name of the template", + "type": "string" + }, + "nodeSelector": { + "description": "NodeSelector is a selector to schedule this step of the workflow to be run on the selected node(s). Overrides the selector set at the workflow level.", + "type": "object", + "additionalProperties": { + "type": "string" + } + }, + "outputs": { + "description": "Outputs describe the parameters and artifacts that this template produces", + "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.Outputs" + }, + "parallelism": { + "description": "Parallelism limits the max total parallel pods that can execute at the same time within the boundaries of this template invocation. If additional steps/dag templates are invoked, the pods created by those templates will not be counted towards this total.", + "type": "integer", + "format": "int64" + }, + "priority": { + "description": "Priority to apply to workflow pods.", + "type": "integer", + "format": "int32" + }, + "priorityClassName": { + "description": "PriorityClassName to apply to workflow pods.", + "type": "string" + }, + "resource": { + "description": "Resource template subtype which can run k8s resources", + "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.ResourceTemplate" + }, + "retryStrategy": { + "description": "RetryStrategy describes how to retry a template when it fails", + "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.RetryStrategy" + }, + "schedulerName": { + "description": "If specified, the pod will be dispatched by specified scheduler. Or it will be dispatched by workflow scope scheduler if specified. 
If neither specified, the pod will be dispatched by default scheduler.", + "type": "string" + }, + "script": { + "description": "Script runs a portion of code against an interpreter", + "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.ScriptTemplate" + }, + "sidecars": { + "description": "Sidecars is a list of containers which run alongside the main container Sidecars are automatically killed when the main container completes", + "type": "array", + "items": { + "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.UserContainer" + } + }, + "steps": { + "description": "Steps define a series of sequential/parallel workflow steps", + "type": "array", + "items": { + "type": "array", + "items": { + "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.WorkflowStep" + } + } + }, + "suspend": { + "description": "Suspend template subtype which can suspend a workflow when reaching the step", + "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.SuspendTemplate" + }, + "tolerations": { + "description": "Tolerations to apply to workflow pods.", + "type": "array", + "items": { + "$ref": "#/definitions/io.k8s.api.core.v1.Toleration" + } + }, + "volumes": { + "description": "Volumes is a list of volumes that can be mounted by containers in a template.", + "type": "array", + "items": { + "$ref": "#/definitions/io.k8s.api.core.v1.Volume" + } + }, + "warnings": { + "type": "array", + "items": { + "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.ExceptionCondition" + } + } + } + }, + "io.argoproj.workflow.v1alpha1.UserContainer": { + "description": "UserContainer is a container specified by a user.", "required": [ "name" ], @@ -678,7 +1071,7 @@ "$ref": "#/definitions/io.k8s.api.core.v1.Probe" }, "mirrorVolumeMounts": { - "description": "MirrorVolumeMounts will mount the same volumes specified in the main container to the sidecar (including artifacts), at the same mountPaths. This enables dind daemon to partially see the same filesystem as the main container in order to use features such as docker volume binding", + "description": "MirrorVolumeMounts will mount the same volumes specified in the main container to the container (including artifacts), at the same mountPaths. This enables dind daemon to partially see the same filesystem as the main container in order to use features such as docker volume binding", "type": "boolean" }, "name": { @@ -699,7 +1092,7 @@ "$ref": "#/definitions/io.k8s.api.core.v1.Probe" }, "resources": { - "description": "Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/", + "description": "Compute Resources required by this container. Cannot be updated. 
More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources", "$ref": "#/definitions/io.k8s.api.core.v1.ResourceRequirements" }, "securityContext": { @@ -750,113 +1143,6 @@ } } }, - "io.argoproj.workflow.v1alpha1.SuspendTemplate": { - "description": "SuspendTemplate is a template subtype to suspend a workflow at a predetermined point in time" - }, - "io.argoproj.workflow.v1alpha1.TarStrategy": { - "description": "TarStrategy will tar and gzip the file or directory when saving" - }, - "io.argoproj.workflow.v1alpha1.Template": { - "description": "Template is a reusable and composable unit of execution in a workflow", - "required": [ - "name" - ], - "properties": { - "activeDeadlineSeconds": { - "description": "Optional duration in seconds relative to the StartTime that the pod may be active on a node before the system actively tries to terminate the pod; value must be positive integer This field is only applicable to container and script templates.", - "type": "integer", - "format": "int64" - }, - "affinity": { - "description": "Affinity sets the pod's scheduling constraints Overrides the affinity set at the workflow level (if any)", - "$ref": "#/definitions/io.k8s.api.core.v1.Affinity" - }, - "archiveLocation": { - "description": "Location in which all files related to the step will be stored (logs, artifacts, etc...). Can be overridden by individual items in Outputs. If omitted, will use the default artifact repository location configured in the controller, appended with the \u003cworkflowname\u003e/\u003cnodename\u003e in the key.", - "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.ArtifactLocation" - }, - "container": { - "description": "Container is the main container image to run in the pod", - "$ref": "#/definitions/io.k8s.api.core.v1.Container" - }, - "daemon": { - "description": "Deamon will allow a workflow to proceed to the next step so long as the container reaches readiness", - "type": "boolean" - }, - "dag": { - "description": "DAG template subtype which runs a DAG", - "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.DAGTemplate" - }, - "inputs": { - "description": "Inputs describe what inputs parameters and artifacts are supplied to this template", - "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.Inputs" - }, - "metadata": { - "description": "Metdata sets the pods's metadata, i.e. annotations and labels", - "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.Metadata" - }, - "name": { - "description": "Name is the name of the template", - "type": "string" - }, - "nodeSelector": { - "description": "NodeSelector is a selector to schedule this step of the workflow to be run on the selected node(s). Overrides the selector set at the workflow level.", - "type": "object", - "additionalProperties": { - "type": "string" - } - }, - "outputs": { - "description": "Outputs describe the parameters and artifacts that this template produces", - "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.Outputs" - }, - "parallelism": { - "description": "Parallelism limits the max total parallel pods that can execute at the same time within the boundaries of this template invocation. 
If additional steps/dag templates are invoked, the pods created by those templates will not be counted towards this total.", - "type": "integer", - "format": "int64" - }, - "resource": { - "description": "Resource template subtype which can run k8s resources", - "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.ResourceTemplate" - }, - "retryStrategy": { - "description": "RetryStrategy describes how to retry a template when it fails", - "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.RetryStrategy" - }, - "script": { - "description": "Script runs a portion of code against an interpreter", - "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.ScriptTemplate" - }, - "sidecars": { - "description": "Sidecars is a list of containers which run alongside the main container Sidecars are automatically killed when the main container completes", - "type": "array", - "items": { - "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.Sidecar" - } - }, - "steps": { - "description": "Steps define a series of sequential/parallel workflow steps", - "type": "array", - "items": { - "type": "array", - "items": { - "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.WorkflowStep" - } - } - }, - "suspend": { - "description": "Suspend template subtype which can suspend a workflow when reaching the step", - "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.SuspendTemplate" - }, - "tolerations": { - "description": "Tolerations to apply to workflow pods.", - "type": "array", - "items": { - "$ref": "#/definitions/io.k8s.api.core.v1.Toleration" - } - } - } - }, "io.argoproj.workflow.v1alpha1.ValueFrom": { "description": "ValueFrom describes a location in which to obtain the value to a parameter", "properties": { @@ -951,10 +1237,22 @@ "description": "Arguments contain the parameters and artifacts sent to the workflow entrypoint Parameters are referencable globally using the 'workflow' variable prefix. e.g. {{workflow.parameters.myparam}}", "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.Arguments" }, + "dnsConfig": { + "description": "PodDNSConfig defines the DNS parameters of a pod in addition to those generated from DNSPolicy.", + "$ref": "#/definitions/io.k8s.api.core.v1.PodDNSConfig" + }, + "dnsPolicy": { + "description": "Set DNS policy for the pod. Defaults to \"ClusterFirst\". Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'. DNS parameters given in DNSConfig will be merged with the policy selected with DNSPolicy. To have DNS options set along with hostNetwork, you have to specify DNS policy explicitly to 'ClusterFirstWithHostNet'.", + "type": "string" + }, "entrypoint": { "description": "Entrypoint is a template reference to the starting point of the workflow", "type": "string" }, + "hostNetwork": { + "description": "Host networking requested for this workflow pod. Default to false.", + "type": "boolean" + }, "imagePullSecrets": { "description": "ImagePullSecrets is a list of references to secrets in the same namespace to use for pulling any images in pods that reference this ServiceAccount. ImagePullSecrets are distinct from Secrets because Secrets can be mounted in the pod, but ImagePullSecrets are only accessed by the kubelet. 
More info: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod", "type": "array", @@ -978,11 +1276,24 @@ "type": "integer", "format": "int64" }, + "podPriority": { + "description": "Priority to apply to workflow pods.", + "type": "integer", + "format": "int32" + }, + "podPriorityClassName": { + "description": "PriorityClassName to apply to workflow pods.", + "type": "string" + }, "priority": { "description": "Priority is used if controller is configured to process limited number of workflows in parallel. Workflows with higher priority are processed first.", "type": "integer", "format": "int32" }, + "schedulerName": { + "description": "Set scheduler name for all pods. Will be overridden if container/script template's scheduler name is set. Default scheduler will be used if neither specified.", + "type": "string" + }, "serviceAccountName": { "description": "ServiceAccountName is the name of the ServiceAccount to run all pods of the workflow as.", "type": "string" @@ -1033,6 +1344,10 @@ "description": "Arguments hold arguments to the template", "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.Arguments" }, + "continueOn": { + "description": "ContinueOn makes argo to proceed with the following step even if this step fails. Errors and Failed states can be specified", + "$ref": "#/definitions/io.argoproj.workflow.v1alpha1.ContinueOn" + }, "name": { "description": "Name of the step", "type": "string" diff --git a/cmd/argo/commands/common.go b/cmd/argo/commands/common.go index 364eba970de6..fb46d173da31 100644 --- a/cmd/argo/commands/common.go +++ b/cmd/argo/commands/common.go @@ -7,9 +7,9 @@ import ( "strconv" "strings" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" - wfclientset "github.com/argoproj/argo/pkg/client/clientset/versioned" - "github.com/argoproj/argo/pkg/client/clientset/versioned/typed/workflow/v1alpha1" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + wfclientset "github.com/cyrusbiotechnology/argo/pkg/client/clientset/versioned" + "github.com/cyrusbiotechnology/argo/pkg/client/clientset/versioned/typed/workflow/v1alpha1" "github.com/spf13/cobra" "k8s.io/client-go/kubernetes" "k8s.io/client-go/rest" @@ -46,6 +46,12 @@ const ( FgDefault = 39 ) +//useful icons +var ( + YellowWarning = ansiFormat("⚠", FgYellow) + RedError = ansiFormat("✖", FgRed) +) + func initializeSession() { jobStatusIconMap = map[wfv1.NodePhase]string{ wfv1.NodePending: ansiFormat("◷", FgYellow), diff --git a/cmd/argo/commands/delete.go b/cmd/argo/commands/delete.go index 69544d433c7e..5aa5c398bb49 100644 --- a/cmd/argo/commands/delete.go +++ b/cmd/argo/commands/delete.go @@ -10,7 +10,7 @@ import ( "github.com/spf13/cobra" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "github.com/argoproj/argo/workflow/common" + "github.com/cyrusbiotechnology/argo/workflow/common" ) var ( diff --git a/cmd/argo/commands/get.go b/cmd/argo/commands/get.go index 58b80b2439e6..358273edba9b 100644 --- a/cmd/argo/commands/get.go +++ b/cmd/argo/commands/get.go @@ -9,11 +9,11 @@ import ( "text/tabwriter" "github.com/argoproj/pkg/humanize" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/workflow/util" "github.com/ghodss/yaml" "github.com/spf13/cobra" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" ) const onExitSuffix = "onExit" @@ -36,6 +36,10 @@ func NewGetCommand() *cobra.Command { if err != nil { log.Fatal(err) } + err = 
util.DecompressWorkflow(wf) + if err != nil { + log.Fatal(err) + } printWorkflow(wf, output) }, } @@ -71,7 +75,7 @@ func printWorkflowHelper(wf *wfv1.Workflow, outFmt string) { serviceAccount = "default" } fmt.Printf(fmtStr, "ServiceAccount:", serviceAccount) - fmt.Printf(fmtStr, "Status:", worklowStatus(wf)) + fmt.Printf(fmtStr, "Status:", workflowStatus(wf)) if wf.Status.Message != "" { fmt.Printf(fmtStr, "Message:", wf.Status.Message) } @@ -114,6 +118,28 @@ func printWorkflowHelper(wf *wfv1.Workflow, outFmt string) { } } } + + errorWriter := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0) + + if wf.Status.Errors != nil || wf.Status.Warnings != nil { + + fmt.Printf("\nErrors and Warnings:\n") + fmt.Fprintf(errorWriter, "%s\tPODNAME\tCODE\tMESSAGE\n", ansiFormat("STEP", FgDefault)) + } + + if wf.Status.Errors != nil { + for _, errorResult := range wf.Status.Errors { + fmt.Fprintf(errorWriter, "%s %s\t%s\t%s\t%s\n", RedError, errorResult.StepName, errorResult.PodName, errorResult.Name, errorResult.Message) + } + } + + if wf.Status.Warnings != nil { + for _, warningResult := range wf.Status.Warnings { + fmt.Fprintf(errorWriter, "%s %s\t%s\t%s\t%s\n", YellowWarning, warningResult.StepName, warningResult.PodName, warningResult.Name, warningResult.Message) + } + } + _ = errorWriter.Flush() + printTree := true if wf.Status.Nodes == nil { printTree = false diff --git a/cmd/argo/commands/lint.go b/cmd/argo/commands/lint.go index 1111411128ad..f0bccb643b45 100644 --- a/cmd/argo/commands/lint.go +++ b/cmd/argo/commands/lint.go @@ -7,8 +7,8 @@ import ( log "github.com/sirupsen/logrus" "github.com/spf13/cobra" - cmdutil "github.com/argoproj/argo/util/cmd" - "github.com/argoproj/argo/workflow/validate" + cmdutil "github.com/cyrusbiotechnology/argo/util/cmd" + "github.com/cyrusbiotechnology/argo/workflow/validate" ) func NewLintCommand() *cobra.Command { diff --git a/cmd/argo/commands/list.go b/cmd/argo/commands/list.go index 471a3f356f26..c1a17eb0ec24 100644 --- a/cmd/argo/commands/list.go +++ b/cmd/argo/commands/list.go @@ -17,10 +17,10 @@ import ( "k8s.io/apimachinery/pkg/labels" "k8s.io/apimachinery/pkg/selection" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" - "github.com/argoproj/argo/pkg/client/clientset/versioned/typed/workflow/v1alpha1" - "github.com/argoproj/argo/workflow/common" - "github.com/argoproj/argo/workflow/util" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/pkg/client/clientset/versioned/typed/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/workflow/common" + "github.com/cyrusbiotechnology/argo/workflow/util" ) type listFlags struct { @@ -119,7 +119,11 @@ func printTable(wfList []wfv1.Workflow, listArgs *listFlags) { if listArgs.allNamespaces { fmt.Fprintf(w, "%s\t", wf.ObjectMeta.Namespace) } - fmt.Fprintf(w, "%s\t%s\t%s\t%s\t%d", wf.ObjectMeta.Name, worklowStatus(&wf), ageStr, durationStr, wf.Spec.Priority) + var priority int + if wf.Spec.Priority != nil { + priority = int(*wf.Spec.Priority) + } + fmt.Fprintf(w, "%s\t%s\t%s\t%s\t%d", wf.ObjectMeta.Name, workflowStatus(&wf), ageStr, durationStr, priority) if listArgs.output == "wide" { pending, running, completed := countPendingRunningCompleted(&wf) fmt.Fprintf(w, "\t%d/%d/%d", pending, running, completed) @@ -134,6 +138,10 @@ func countPendingRunningCompleted(wf *wfv1.Workflow) (int, int, int) { pending := 0 running := 0 completed := 0 + err := util.DecompressWorkflow(wf) + if err != nil { + log.Fatal(err) + } for _, node := range 
wf.Status.Nodes { tmpl := wf.GetTemplate(node.TemplateName) if tmpl == nil || !tmpl.IsPodType() { @@ -196,7 +204,7 @@ func (f ByFinishedAt) Less(i, j int) bool { } // workflowStatus returns a human readable inferred workflow status based on workflow phase and conditions -func worklowStatus(wf *wfv1.Workflow) wfv1.NodePhase { +func workflowStatus(wf *wfv1.Workflow) wfv1.NodePhase { switch wf.Status.Phase { case wfv1.NodeRunning: if util.IsWorkflowSuspended(wf) { diff --git a/cmd/argo/commands/logs.go b/cmd/argo/commands/logs.go index e17be636efdb..b8d64ef0e725 100644 --- a/cmd/argo/commands/logs.go +++ b/cmd/argo/commands/logs.go @@ -2,6 +2,7 @@ package commands import ( "bufio" + "context" "fmt" "hash/fnv" "math" @@ -11,16 +12,21 @@ import ( "sync" "time" - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" - wfclientset "github.com/argoproj/argo/pkg/client/clientset/versioned" - wfinformers "github.com/argoproj/argo/pkg/client/informers/externalversions" - "github.com/argoproj/pkg/errors" log "github.com/sirupsen/logrus" "github.com/spf13/cobra" - "k8s.io/api/core/v1" + v1 "k8s.io/api/core/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/fields" + "k8s.io/apimachinery/pkg/runtime" + pkgwatch "k8s.io/apimachinery/pkg/watch" "k8s.io/client-go/kubernetes" "k8s.io/client-go/tools/cache" + "k8s.io/client-go/tools/watch" + + "github.com/argoproj/pkg/errors" + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + workflowv1 "github.com/cyrusbiotechnology/argo/pkg/client/clientset/versioned/typed/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/workflow/util" ) type logEntry struct { @@ -101,8 +107,8 @@ func (p *logPrinter) PrintWorkflowLogs(workflow string) error { return err } timeByPod := p.printRecentWorkflowLogs(wf) - if p.follow && wf.Status.Phase == v1alpha1.NodeRunning { - p.printLiveWorkflowLogs(wf, timeByPod) + if p.follow { + p.printLiveWorkflowLogs(wf.Name, wfClient, timeByPod) } return nil } @@ -114,7 +120,7 @@ func (p *logPrinter) PrintPodLogs(podName string) error { return err } var logs []logEntry - err = p.getPodLogs("", podName, namespace, p.follow, p.tail, p.sinceSeconds, p.sinceTime, func(entry logEntry) { + err = p.getPodLogs(context.Background(), "", podName, namespace, p.follow, p.tail, p.sinceSeconds, p.sinceTime, func(entry logEntry) { logs = append(logs, entry) }) if err != nil { @@ -129,6 +135,11 @@ func (p *logPrinter) PrintPodLogs(podName string) error { // Prints logs for workflow pod steps and return most recent log timestamp per pod name func (p *logPrinter) printRecentWorkflowLogs(wf *v1alpha1.Workflow) map[string]*time.Time { var podNodes []v1alpha1.NodeStatus + err := util.DecompressWorkflow(wf) + if err != nil { + log.Warn(err) + return nil + } for _, node := range wf.Status.Nodes { if node.Type == v1alpha1.NodeTypePod && node.Phase != v1alpha1.NodeError { podNodes = append(podNodes, node) @@ -144,7 +155,7 @@ func (p *logPrinter) printRecentWorkflowLogs(wf *v1alpha1.Workflow) map[string]* go func() { defer wg.Done() var podLogs []logEntry - err := p.getPodLogs(getDisplayName(node), node.ID, wf.Namespace, false, p.tail, p.sinceSeconds, p.sinceTime, func(entry logEntry) { + err := p.getPodLogs(context.Background(), getDisplayName(node), node.ID, wf.Namespace, false, p.tail, p.sinceSeconds, p.sinceTime, func(entry logEntry) { podLogs = append(podLogs, entry) }) @@ -178,35 +189,19 @@ func (p *logPrinter) printRecentWorkflowLogs(wf *v1alpha1.Workflow) map[string]* return timeByPod } -func (p *logPrinter) 
setupWorkflowInformer(namespace string, name string, callback func(wf *v1alpha1.Workflow, done bool)) cache.SharedIndexInformer { - wfcClientset := wfclientset.NewForConfigOrDie(restConfig) - wfInformerFactory := wfinformers.NewFilteredSharedInformerFactory(wfcClientset, 20*time.Minute, namespace, nil) - informer := wfInformerFactory.Argoproj().V1alpha1().Workflows().Informer() - informer.AddEventHandler( - cache.ResourceEventHandlerFuncs{ - UpdateFunc: func(old, new interface{}) { - updatedWf := new.(*v1alpha1.Workflow) - if updatedWf.Name == name { - callback(updatedWf, updatedWf.Status.Phase != v1alpha1.NodeRunning) - } - }, - DeleteFunc: func(obj interface{}) { - deletedWf := obj.(*v1alpha1.Workflow) - if deletedWf.Name == name { - callback(deletedWf, true) - } - }, - }, - ) - return informer -} - // Prints live logs for workflow pods, starting from time specified in timeByPod name. -func (p *logPrinter) printLiveWorkflowLogs(workflow *v1alpha1.Workflow, timeByPod map[string]*time.Time) { +func (p *logPrinter) printLiveWorkflowLogs(workflowName string, wfClient workflowv1.WorkflowInterface, timeByPod map[string]*time.Time) { logs := make(chan logEntry) streamedPods := make(map[string]bool) + ctx, cancel := context.WithCancel(context.Background()) + defer cancel() processPods := func(wf *v1alpha1.Workflow) { + err := util.DecompressWorkflow(wf) + if err != nil { + log.Warn(err) + return + } for id := range wf.Status.Nodes { node := wf.Status.Nodes[id] if node.Type == v1alpha1.NodeTypePod && node.Phase != v1alpha1.NodeError && streamedPods[node.ID] == false { @@ -218,7 +213,7 @@ func (p *logPrinter) printLiveWorkflowLogs(workflow *v1alpha1.Workflow, timeByPo sinceTime := metav1.NewTime(podTime.Add(time.Second)) sinceTimePtr = &sinceTime } - err := p.getPodLogs(getDisplayName(node), node.ID, wf.Namespace, true, nil, nil, sinceTimePtr, func(entry logEntry) { + err := p.getPodLogs(ctx, getDisplayName(node), node.ID, wf.Namespace, true, nil, nil, sinceTimePtr, func(entry logEntry) { logs <- entry }) if err != nil { @@ -229,20 +224,31 @@ func (p *logPrinter) printLiveWorkflowLogs(workflow *v1alpha1.Workflow, timeByPo } } - processPods(workflow) - informer := p.setupWorkflowInformer(workflow.Namespace, workflow.Name, func(wf *v1alpha1.Workflow, done bool) { - if done { - close(logs) - } else { - processPods(wf) - } - }) - - stopChannel := make(chan struct{}) go func() { - informer.Run(stopChannel) + defer close(logs) + fieldSelector := fields.ParseSelectorOrDie(fmt.Sprintf("metadata.name=%s", workflowName)) + listOpts := metav1.ListOptions{FieldSelector: fieldSelector.String()} + lw := &cache.ListWatch{ + ListFunc: func(options metav1.ListOptions) (runtime.Object, error) { + return wfClient.List(listOpts) + }, + WatchFunc: func(options metav1.ListOptions) (pkgwatch.Interface, error) { + return wfClient.Watch(listOpts) + }, + } + _, err := watch.UntilWithSync(ctx, lw, &v1alpha1.Workflow{}, nil, func(event pkgwatch.Event) (b bool, e error) { + if wf, ok := event.Object.(*v1alpha1.Workflow); ok { + if !wf.Status.Completed() { + processPods(wf) + } + return wf.Status.Completed(), nil + } + return true, nil + }) + if err != nil { + log.Fatal(err) + } }() - defer close(stopChannel) for entry := range logs { p.printLogEntry(entry) @@ -273,35 +279,56 @@ func (p *logPrinter) printLogEntry(entry logEntry) { fmt.Println(line) } -func (p *logPrinter) ensureContainerStarted(podName string, podNamespace string, container string, retryCnt int, retryTimeout time.Duration) error { - for retryCnt > 0 { - pod, 
err := p.kubeClient.CoreV1().Pods(podNamespace).Get(podName, metav1.GetOptions{}) +func (p *logPrinter) hasContainerStarted(podName string, podNamespace string, container string) (bool, error) { + pod, err := p.kubeClient.CoreV1().Pods(podNamespace).Get(podName, metav1.GetOptions{}) + if err != nil { + return false, err + } + var containerStatus *v1.ContainerStatus + for _, status := range pod.Status.ContainerStatuses { + if status.Name == container { + containerStatus = &status + break + } + } + if containerStatus == nil { + return false, nil + } + + if containerStatus.State.Waiting != nil { + return false, nil + } + return true, nil +} + +func (p *logPrinter) getPodLogs( + ctx context.Context, + displayName string, + podName string, + podNamespace string, + follow bool, + tail *int64, + sinceSeconds *int64, + sinceTime *metav1.Time, + callback func(entry logEntry)) error { + + for ctx.Err() == nil { + hasStarted, err := p.hasContainerStarted(podName, podNamespace, p.container) + if err != nil { return err } - var containerStatus *v1.ContainerStatus - for _, status := range pod.Status.ContainerStatuses { - if status.Name == container { - containerStatus = &status - break + if !hasStarted { + if follow { + time.Sleep(1 * time.Second) + } else { + return nil } - } - if containerStatus == nil || containerStatus.State.Waiting != nil { - time.Sleep(retryTimeout) - retryCnt-- } else { - return nil + break } } - return fmt.Errorf("container '%s' of pod '%s' has not started within expected timeout", container, podName) -} -func (p *logPrinter) getPodLogs( - displayName string, podName string, podNamespace string, follow bool, tail *int64, sinceSeconds *int64, sinceTime *metav1.Time, callback func(entry logEntry)) error { - err := p.ensureContainerStarted(podName, podNamespace, p.container, 10, time.Second) - if err != nil { - return err - } stream, err := p.kubeClient.CoreV1().Pods(podNamespace).GetLogs(podName, &v1.PodLogOptions{ Container: p.container, Follow: follow, diff --git a/cmd/argo/commands/resubmit.go b/cmd/argo/commands/resubmit.go index 94437b0b61b3..6ad8e1968048 100644 --- a/cmd/argo/commands/resubmit.go +++ b/cmd/argo/commands/resubmit.go @@ -3,8 +3,8 @@ package commands import ( "os" - "github.com/argoproj/argo/workflow/util" "github.com/argoproj/pkg/errors" + "github.com/cyrusbiotechnology/argo/workflow/util" "github.com/spf13/cobra" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" ) diff --git a/cmd/argo/commands/resume.go b/cmd/argo/commands/resume.go index fcfd84a4f5f7..babe8bb23918 100644 --- a/cmd/argo/commands/resume.go +++ b/cmd/argo/commands/resume.go @@ -5,7 +5,7 @@ import ( "log" "os" - "github.com/argoproj/argo/workflow/util" + "github.com/cyrusbiotechnology/argo/workflow/util" "github.com/spf13/cobra" ) diff --git a/cmd/argo/commands/retry.go b/cmd/argo/commands/retry.go index 43002427d9a2..bc0f8f553fc1 100644 --- a/cmd/argo/commands/retry.go +++ b/cmd/argo/commands/retry.go @@ -7,7 +7,7 @@ import ( "github.com/spf13/cobra" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "github.com/argoproj/argo/workflow/util" + "github.com/cyrusbiotechnology/argo/workflow/util" ) func NewRetryCommand() *cobra.Command { diff --git a/cmd/argo/commands/root.go b/cmd/argo/commands/root.go index 3d01d3200675..84f29d7c826b 100644 --- a/cmd/argo/commands/root.go +++ b/cmd/argo/commands/root.go @@ -3,7 +3,7 @@ package commands import ( "os" - "github.com/argoproj/argo/util/cmd" + "github.com/cyrusbiotechnology/argo/util/cmd" "github.com/spf13/cobra" "k8s.io/client-go/tools/clientcmd" ) diff 
--git a/cmd/argo/commands/submit.go b/cmd/argo/commands/submit.go index d9cd5c3a58ec..1d9dc5a837e7 100644 --- a/cmd/argo/commands/submit.go +++ b/cmd/argo/commands/submit.go @@ -10,10 +10,10 @@ import ( "github.com/argoproj/pkg/json" "github.com/spf13/cobra" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" - cmdutil "github.com/argoproj/argo/util/cmd" - "github.com/argoproj/argo/workflow/common" - "github.com/argoproj/argo/workflow/util" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + cmdutil "github.com/cyrusbiotechnology/argo/util/cmd" + "github.com/cyrusbiotechnology/argo/workflow/common" + "github.com/cyrusbiotechnology/argo/workflow/util" ) // cliSubmitOpts holds submition options specific to CLI submission (e.g. controlling output) @@ -145,7 +145,7 @@ func unmarshalWorkflows(wfBytes []byte, strict bool) []wfv1.Workflow { func waitOrWatch(workflowNames []string, cliSubmitOpts cliSubmitOpts) { if cliSubmitOpts.wait { - WaitWorkflows(workflowNames, false, cliSubmitOpts.output == "json") + WaitWorkflows(workflowNames, false, !(cliSubmitOpts.output == "" || cliSubmitOpts.output == "wide")) } else if cliSubmitOpts.watch { watchWorkflow(workflowNames[0]) } diff --git a/cmd/argo/commands/suspend.go b/cmd/argo/commands/suspend.go index c63142fa404c..de624ab6a68a 100644 --- a/cmd/argo/commands/suspend.go +++ b/cmd/argo/commands/suspend.go @@ -5,7 +5,7 @@ import ( "log" "os" - "github.com/argoproj/argo/workflow/util" + "github.com/cyrusbiotechnology/argo/workflow/util" "github.com/spf13/cobra" ) diff --git a/cmd/argo/commands/terminate.go b/cmd/argo/commands/terminate.go index c05c75f0187a..309692ba6e86 100644 --- a/cmd/argo/commands/terminate.go +++ b/cmd/argo/commands/terminate.go @@ -7,7 +7,7 @@ import ( "github.com/argoproj/pkg/errors" "github.com/spf13/cobra" - "github.com/argoproj/argo/workflow/util" + "github.com/cyrusbiotechnology/argo/workflow/util" ) func NewTerminateCommand() *cobra.Command { diff --git a/cmd/argo/commands/wait.go b/cmd/argo/commands/wait.go index 5fba7decc90a..f5a7c18b5af6 100644 --- a/cmd/argo/commands/wait.go +++ b/cmd/argo/commands/wait.go @@ -5,8 +5,8 @@ import ( "os" "sync" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" "github.com/argoproj/pkg/errors" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" "github.com/spf13/cobra" apierr "k8s.io/apimachinery/pkg/api/errors" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" diff --git a/cmd/argo/commands/watch.go b/cmd/argo/commands/watch.go index 133ee033a5ee..9f07a88ea39c 100644 --- a/cmd/argo/commands/watch.go +++ b/cmd/argo/commands/watch.go @@ -10,7 +10,8 @@ import ( metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/fields" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/workflow/util" ) func NewWatchCommand() *cobra.Command { @@ -53,6 +54,8 @@ func watchWorkflow(name string) { errors.CheckError(err) continue } + err := util.DecompressWorkflow(wf) + errors.CheckError(err) print("\033[H\033[2J") print("\033[0;0H") printWorkflowHelper(wf, "") diff --git a/cmd/argo/main.go b/cmd/argo/main.go index 7f45d54b17f8..2af1fc903aca 100644 --- a/cmd/argo/main.go +++ b/cmd/argo/main.go @@ -4,7 +4,7 @@ import ( "fmt" "os" - "github.com/argoproj/argo/cmd/argo/commands" + "github.com/cyrusbiotechnology/argo/cmd/argo/commands" // load the azure plugin (required to authenticate against AKS clusters). 
_ "k8s.io/client-go/plugin/pkg/client/auth/azure" // load the gcp plugin (required to authenticate against GKE clusters). diff --git a/cmd/argoexec/commands/init.go b/cmd/argoexec/commands/init.go index c581a240496c..270cada7e52f 100644 --- a/cmd/argoexec/commands/init.go +++ b/cmd/argoexec/commands/init.go @@ -6,19 +6,18 @@ import ( "github.com/spf13/cobra" ) -func init() { - RootCmd.AddCommand(initCmd) -} - -var initCmd = &cobra.Command{ - Use: "init", - Short: "Load artifacts", - Run: func(cmd *cobra.Command, args []string) { - err := loadArtifacts() - if err != nil { - log.Fatalf("%+v", err) - } - }, +func NewInitCommand() *cobra.Command { + var command = cobra.Command{ + Use: "init", + Short: "Load artifacts", + Run: func(cmd *cobra.Command, args []string) { + err := loadArtifacts() + if err != nil { + log.Fatalf("%+v", err) + } + }, + } + return &command } func loadArtifacts() error { diff --git a/cmd/argoexec/commands/resource.go b/cmd/argoexec/commands/resource.go index 240b72cb3664..35266c21aedc 100644 --- a/cmd/argoexec/commands/resource.go +++ b/cmd/argoexec/commands/resource.go @@ -1,30 +1,30 @@ package commands import ( + "fmt" "os" - "github.com/argoproj/argo/workflow/common" + "github.com/cyrusbiotechnology/argo/workflow/common" log "github.com/sirupsen/logrus" "github.com/spf13/cobra" ) -func init() { - RootCmd.AddCommand(resourceCmd) -} - -var resourceCmd = &cobra.Command{ - Use: "resource (get|create|apply|delete) MANIFEST", - Short: "update a resource and wait for resource conditions", - Run: func(cmd *cobra.Command, args []string) { - if len(args) != 1 { - cmd.HelpFunc()(cmd, args) - os.Exit(1) - } - err := execResource(args[0]) - if err != nil { - log.Fatalf("%+v", err) - } - }, +func NewResourceCommand() *cobra.Command { + var command = cobra.Command{ + Use: "resource (get|create|apply|delete) MANIFEST", + Short: "update a resource and wait for resource conditions", + Run: func(cmd *cobra.Command, args []string) { + if len(args) != 1 { + cmd.HelpFunc()(cmd, args) + os.Exit(1) + } + err := execResource(args[0]) + if err != nil { + log.Fatalf("%+v", err) + } + }, + } + return &command } func execResource(action string) error { @@ -35,20 +35,28 @@ func execResource(action string) error { wfExecutor.AddError(err) return err } - resourceName, err := wfExecutor.ExecResource(action, common.ExecutorResourceManifestPath) - if err != nil { + isDelete := action == "delete" + if isDelete && (wfExecutor.Template.Resource.SuccessCondition != "" || wfExecutor.Template.Resource.FailureCondition != "" || len(wfExecutor.Template.Outputs.Parameters) > 0) { + err = fmt.Errorf("successCondition, failureCondition and outputs are not supported for delete action") wfExecutor.AddError(err) return err } - err = wfExecutor.WaitResource(resourceName) + resourceNamespace, resourceName, err := wfExecutor.ExecResource(action, common.ExecutorResourceManifestPath, isDelete) if err != nil { wfExecutor.AddError(err) return err } - err = wfExecutor.SaveResourceParameters(resourceName) - if err != nil { - wfExecutor.AddError(err) - return err + if !isDelete { + err = wfExecutor.WaitResource(resourceNamespace, resourceName) + if err != nil { + wfExecutor.AddError(err) + return err + } + err = wfExecutor.SaveResourceParameters(resourceNamespace, resourceName) + if err != nil { + wfExecutor.AddError(err) + return err + } } return nil } diff --git a/cmd/argoexec/commands/root.go b/cmd/argoexec/commands/root.go index c53dadcd2e1d..af81cfe6f073 100644 --- a/cmd/argoexec/commands/root.go +++ 
b/cmd/argoexec/commands/root.go @@ -1,22 +1,24 @@ package commands import ( + "encoding/json" "os" - "github.com/argoproj/pkg/kube/cli" - "github.com/ghodss/yaml" + "github.com/argoproj/pkg/cli" + kubecli "github.com/argoproj/pkg/kube/cli" log "github.com/sirupsen/logrus" "github.com/spf13/cobra" "k8s.io/client-go/kubernetes" "k8s.io/client-go/tools/clientcmd" - "github.com/argoproj/argo" - "github.com/argoproj/argo/util/cmd" - "github.com/argoproj/argo/workflow/common" - "github.com/argoproj/argo/workflow/executor" - "github.com/argoproj/argo/workflow/executor/docker" - "github.com/argoproj/argo/workflow/executor/k8sapi" - "github.com/argoproj/argo/workflow/executor/kubelet" + "github.com/cyrusbiotechnology/argo" + "github.com/cyrusbiotechnology/argo/util/cmd" + "github.com/cyrusbiotechnology/argo/workflow/common" + "github.com/cyrusbiotechnology/argo/workflow/executor" + "github.com/cyrusbiotechnology/argo/workflow/executor/docker" + "github.com/cyrusbiotechnology/argo/workflow/executor/k8sapi" + "github.com/cyrusbiotechnology/argo/workflow/executor/kubelet" + "github.com/cyrusbiotechnology/argo/workflow/executor/pns" ) const ( @@ -25,83 +27,84 @@ const ( ) var ( - // GlobalArgs hold global CLI flags - GlobalArgs globalFlags - - clientConfig clientcmd.ClientConfig -) - -type globalFlags struct { + clientConfig clientcmd.ClientConfig + logLevel string // --loglevel + glogLevel int // --gloglevel podAnnotationsPath string // --pod-annotations -} +) func init() { - clientConfig = cli.AddKubectlFlagsToCmd(RootCmd) - RootCmd.PersistentFlags().StringVar(&GlobalArgs.podAnnotationsPath, "pod-annotations", common.PodMetadataAnnotationsPath, "Pod annotations file from k8s downward API") - RootCmd.AddCommand(cmd.NewVersionCmd(CLIName)) + cobra.OnInitialize(initConfig) } -// RootCmd is the argo root level command -var RootCmd = &cobra.Command{ - Use: CLIName, - Short: "argoexec is the executor sidecar to workflow containers", - Run: func(cmd *cobra.Command, args []string) { - cmd.HelpFunc()(cmd, args) - }, +func initConfig() { + cli.SetLogLevel(logLevel) + cli.SetGLogLevel(glogLevel) } -func initExecutor() *executor.WorkflowExecutor { - podAnnotationsPath := common.PodMetadataAnnotationsPath - - // Use the path specified from the flag - if GlobalArgs.podAnnotationsPath != "" { - podAnnotationsPath = GlobalArgs.podAnnotationsPath +func NewRootCommand() *cobra.Command { + var command = cobra.Command{ + Use: CLIName, + Short: "argoexec is the executor sidecar to workflow containers", + Run: func(cmd *cobra.Command, args []string) { + cmd.HelpFunc()(cmd, args) + }, } + command.AddCommand(NewInitCommand()) + command.AddCommand(NewResourceCommand()) + command.AddCommand(NewWaitCommand()) + command.AddCommand(cmd.NewVersionCmd(CLIName)) + + clientConfig = kubecli.AddKubectlFlagsToCmd(&command) + command.PersistentFlags().StringVar(&podAnnotationsPath, "pod-annotations", common.PodMetadataAnnotationsPath, "Pod annotations file from k8s downward API") + command.PersistentFlags().StringVar(&logLevel, "loglevel", "info", "Set the logging level. 
One of: debug|info|warn|error") + command.PersistentFlags().IntVar(&glogLevel, "gloglevel", 0, "Set the glog logging level") + + return &command +} + +func initExecutor() *executor.WorkflowExecutor { config, err := clientConfig.ClientConfig() - if err != nil { - panic(err.Error()) - } + checkErr(err) + namespace, _, err := clientConfig.Namespace() - if err != nil { - panic(err.Error()) - } + checkErr(err) clientset, err := kubernetes.NewForConfig(config) - if err != nil { - panic(err.Error()) - } + checkErr(err) + podName, ok := os.LookupEnv(common.EnvVarPodName) if !ok { log.Fatalf("Unable to determine pod name from environment variable %s", common.EnvVarPodName) } + tmpl, err := executor.LoadTemplate(podAnnotationsPath) + checkErr(err) + var cre executor.ContainerRuntimeExecutor switch os.Getenv(common.EnvVarContainerRuntimeExecutor) { case common.ContainerRuntimeExecutorK8sAPI: cre, err = k8sapi.NewK8sAPIExecutor(clientset, config, podName, namespace) - if err != nil { - panic(err.Error()) - } case common.ContainerRuntimeExecutorKubelet: cre, err = kubelet.NewKubeletExecutor() - if err != nil { - panic(err.Error()) - } + case common.ContainerRuntimeExecutorPNS: + cre, err = pns.NewPNSExecutor(clientset, podName, namespace, tmpl.Outputs.HasOutputs()) default: cre, err = docker.NewDockerExecutor() - if err != nil { - panic(err.Error()) - } - } - wfExecutor := executor.NewExecutor(clientset, podName, namespace, podAnnotationsPath, cre) - err = wfExecutor.LoadTemplate() - if err != nil { - panic(err.Error()) } + checkErr(err) - yamlBytes, _ := yaml.Marshal(&wfExecutor.Template) + wfExecutor := executor.NewExecutor(clientset, podName, namespace, podAnnotationsPath, cre, *tmpl) + yamlBytes, _ := json.Marshal(&wfExecutor.Template) vers := argo.GetVersion() - log.Infof("Executor (version: %s, build_date: %s) initialized with template:\n%s", vers, vers.BuildDate, string(yamlBytes)) + log.Infof("Executor (version: %s, build_date: %s) initialized (pod: %s/%s) with template:\n%s", vers, vers.BuildDate, namespace, podName, string(yamlBytes)) return &wfExecutor } + +// checkErr is a convenience function to panic upon error +func checkErr(err error) { + if err != nil { + panic(err.Error()) + } +} diff --git a/cmd/argoexec/commands/wait.go b/cmd/argoexec/commands/wait.go index 04eb7091fc26..b49aa7edd7f2 100644 --- a/cmd/argoexec/commands/wait.go +++ b/cmd/argoexec/commands/wait.go @@ -1,6 +1,7 @@ package commands import ( + "github.com/cyrusbiotechnology/argo/workflow/executor" "time" "github.com/argoproj/pkg/stats" @@ -8,19 +9,18 @@ import ( "github.com/spf13/cobra" ) -func init() { - RootCmd.AddCommand(waitCmd) -} - -var waitCmd = &cobra.Command{ - Use: "wait", - Short: "wait for main container to finish and save artifacts", - Run: func(cmd *cobra.Command, args []string) { - err := waitContainer() - if err != nil { - log.Fatalf("%+v", err) - } - }, +func NewWaitCommand() *cobra.Command { + var command = cobra.Command{ + Use: "wait", + Short: "wait for main container to finish and save artifacts", + Run: func(cmd *cobra.Command, args []string) { + err := waitContainer() + if err != nil { + log.Fatalf("%+v", err) + } + }, + } + return &command } func waitContainer() error { @@ -29,18 +29,18 @@ func waitContainer() error { defer stats.LogStats() stats.StartStatsTicker(5 * time.Minute) - // Wait for main container to complete and kill sidecars + // Wait for main container to complete err := wfExecutor.Wait() if err != nil { wfExecutor.AddError(err) - // do not return here so we can still try to save 
outputs + // do not return here so we can still try to kill sidecars & save outputs } - logArt, err := wfExecutor.SaveLogs() + err = wfExecutor.KillSidecars() if err != nil { wfExecutor.AddError(err) - return err + // do not return here so we can still try save outputs } - err = wfExecutor.SaveArtifacts() + logArt, err := wfExecutor.SaveLogs() if err != nil { wfExecutor.AddError(err) return err @@ -51,6 +51,12 @@ func waitContainer() error { wfExecutor.AddError(err) return err } + // Saving output artifacts + err = wfExecutor.SaveArtifacts() + if err != nil { + wfExecutor.AddError(err) + return err + } // Capture output script result err = wfExecutor.CaptureScriptResult() if err != nil { @@ -62,5 +68,18 @@ func waitContainer() error { wfExecutor.AddError(err) return err } + + err = wfExecutor.EvaluateConditions(executor.ConditionTypeError) + if err != nil { + wfExecutor.AddError(err) + return err + } + + err = wfExecutor.EvaluateConditions(executor.ConditionTypeWarning) + if err != nil { + wfExecutor.AddError(err) + return err + } + return nil } diff --git a/cmd/argoexec/main.go b/cmd/argoexec/main.go index 629e1b0806fd..5a73e18c68d7 100644 --- a/cmd/argoexec/main.go +++ b/cmd/argoexec/main.go @@ -4,7 +4,7 @@ import ( "fmt" "os" - "github.com/argoproj/argo/cmd/argoexec/commands" + "github.com/cyrusbiotechnology/argo/cmd/argoexec/commands" // load the azure plugin (required to authenticate against AKS clusters). _ "k8s.io/client-go/plugin/pkg/client/auth/azure" // load the gcp plugin (required to authenticate against GKE clusters). @@ -14,7 +14,7 @@ import ( ) func main() { - if err := commands.RootCmd.Execute(); err != nil { + if err := commands.NewRootCommand().Execute(); err != nil { fmt.Println(err) os.Exit(1) } diff --git a/cmd/workflow-controller/main.go b/cmd/workflow-controller/main.go index f881739dbf23..ab972505951d 100644 --- a/cmd/workflow-controller/main.go +++ b/cmd/workflow-controller/main.go @@ -2,13 +2,12 @@ package main import ( "context" - "flag" "fmt" "os" - "strconv" "time" - "github.com/argoproj/pkg/kube/cli" + "github.com/argoproj/pkg/cli" + kubecli "github.com/argoproj/pkg/kube/cli" "github.com/argoproj/pkg/stats" "github.com/spf13/cobra" "k8s.io/client-go/kubernetes" @@ -17,9 +16,9 @@ import ( _ "k8s.io/client-go/plugin/pkg/client/auth/oidc" "k8s.io/client-go/tools/clientcmd" - wfclientset "github.com/argoproj/argo/pkg/client/clientset/versioned" - cmdutil "github.com/argoproj/argo/util/cmd" - "github.com/argoproj/argo/workflow/controller" + wfclientset "github.com/cyrusbiotechnology/argo/pkg/client/clientset/versioned" + cmdutil "github.com/cyrusbiotechnology/argo/util/cmd" + "github.com/cyrusbiotechnology/argo/workflow/controller" ) const ( @@ -32,6 +31,7 @@ func NewRootCommand() *cobra.Command { var ( clientConfig clientcmd.ClientConfig configMap string // --configmap + configFile string // --config-file executorImage string // --executor-image executorImagePullPolicy string // --executor-image-pull-policy logLevel string // --loglevel @@ -44,20 +44,16 @@ func NewRootCommand() *cobra.Command { Use: CLIName, Short: "workflow-controller is the controller to operate on workflows", RunE: func(c *cobra.Command, args []string) error { - - cmdutil.SetLogLevel(logLevel) + cli.SetLogLevel(logLevel) + cli.SetGLogLevel(glogLevel) stats.RegisterStackDumper() stats.StartStatsTicker(5 * time.Minute) - // Set the glog level for the k8s go-client - _ = flag.CommandLine.Parse([]string{}) - _ = flag.Lookup("logtostderr").Value.Set("true") - _ = 
flag.Lookup("v").Value.Set(strconv.Itoa(glogLevel)) - config, err := clientConfig.ClientConfig() if err != nil { return err } + config.Burst = 30 config.QPS = 20.0 @@ -70,7 +66,7 @@ func NewRootCommand() *cobra.Command { wflientset := wfclientset.NewForConfigOrDie(config) // start a controller on instances of our custom resource - wfController := controller.NewWorkflowController(config, kubeclientset, wflientset, namespace, executorImage, executorImagePullPolicy, configMap) + wfController := controller.NewWorkflowController(config, kubeclientset, wflientset, namespace, executorImage, executorImagePullPolicy, configMap, configFile) err = wfController.ResyncConfig() if err != nil { return err @@ -89,9 +85,10 @@ func NewRootCommand() *cobra.Command { }, } - clientConfig = cli.AddKubectlFlagsToCmd(&command) + clientConfig = kubecli.AddKubectlFlagsToCmd(&command) command.AddCommand(cmdutil.NewVersionCmd(CLIName)) command.Flags().StringVar(&configMap, "configmap", "workflow-controller-configmap", "Name of K8s configmap to retrieve workflow controller configuration") + command.Flags().StringVar(&configFile, "config-file", "", "Path to a yaml config file. Cannot be specified at the same time as --configmap") command.Flags().StringVar(&executorImage, "executor-image", "", "Executor image to use (overrides value in configmap)") command.Flags().StringVar(&executorImagePullPolicy, "executor-image-pull-policy", "", "Executor imagePullPolicy to use (overrides value in configmap)") command.Flags().StringVar(&logLevel, "loglevel", "info", "Set the logging level. One of: debug|info|warn|error") diff --git a/community/Argo Individual CLA.pdf b/community/Argo Individual CLA.pdf index f91c4a5a3048e0671f00c91822fa4054a586cf50..e25d08bc473881d9a80b6fcd9feb5f4defa96421 100644 GIT binary patch delta 25818 zcmZ^}V{jmE@U|OgV<#JD!;Nijw6SeF8ygebwrwXH+qP}nIQ#p*=bWndU~1~>>1Xcg z>go?uHTN^~eh@`V;3#Br!lJZ{bgZysT~B$Ru#8Nx$aoA4%uH-V^kPJsMC>f=M2xIV zMA}63vP6vk7Db3y*qAukiRfjBSc&MBh*&u}IXPmr+`(ZPnPXQ7K{?`m-mv14L4N^^ zOhin~|2-)F$HeBpO9mnZB6 zkD6W{+!f&BlJ#BYJ((h&Hl4GQ?# zL4_=XlLl0Q!I0P?tKy3lDErMnFfuT{UO`ft=#gfEfXJIeDj8Kw!fD zPtP_67O*@#|N7N8HFh9kVvipKBLroQ-`OQHRwQC$CZd-z{$uKB_HS%Ne0;DDj`qg- z*08P_hI)E>V0laMGcclnOu<=9s1o2_uu%PVmiQqr~u@jt}kK=!a$;U@T zFKYA0k?6nvME}kH|D3TXGZ7Q(|E2g(r08hxWaucTZ*Tm^(Se8|UVjoRUVfMoU}528 z`ro>~&|Zt$Z$k0Pu0B2jNg608CHl6F@9~kL=j$neiu-LF#O{+Yq*aqy`p?<@d~6u8 zDX&gH23CG80b#d3%$$j>V zI{Wi>rqkX*$2;(Km^^Xveq_U+UG;VIRxP{rwVv($v2~E1p9!4h^8%O2&|MtNgJ<+W z<`>Pw$7(_NF?!iU zw$6`}F&o)Q*}Jcom##0jSBEXlujIL8fFFBH9XIIn9cVoBN2Wsxnj8mAvq0&)-UzGc zQ~pGd%WQ6J)*Opp(a8$3uORpFjOh??BqtCA|ER#anuD%E|JpkUy@FzYeVJ>9Ch;WG z$-dW)@XiVnXY4x|k<~>cARsh6YEQ?_fS;WOf>j4o?zBSZ#KhEs1WStDX3%oD?gZ;)~`n7TC;e(=&83OR%REriZG5<(sDa?X+%u zwCAXiFj=fK|0bUfuqux?br#w>5%^>(olT@y*S;lN%V~mFy=vh$>a6OP#E*xOX#@wu zAamWJoSERtD}LV|z}(>bc>8J*znV+#;?vYVpwaV|>#LcJbA?yLf(Cx<}M9F7ssWwj09maPs~2XB$v6 zzLv=DtwEk!pD%7QKa1c^K+TNsgVQ~0Nmp$$ATjUvH3;Sog%TSL35`vJA)yy?Lhceg z(0P7fs6*c!p-&0o) zsdx40?gldk!{T%oY52JT`+5aE4UF`kP&CAm-@U%s@BKx`hjM|wybKQ#p?S)DBl!KX zv(lpzg<^oYY|2t{Pg9dKw1bRLl0Cf+VA1Xvo$K-Ew(bsqi5mQMuYfN#^&ruXqN8*Z zvxLfN{ZUf5Yky7KWpy262Ysez|_m z;RH{{vw%o5jP6o|{)KpFO`cH=1rt?)WUl*Cetw^@gMl-9t4GPpV1&&2er=Bdz~wjh z@mkk{UFtO=rv(?Tg-8rtCtlL7_{KHa?xcJ_hkt{3%q#F+$&F5;OkYyVM_AxQt zQu_NN|B+Nt)>yOR9w!j+M6AUc%yXKdE7n{!`zCr4;P@Ki8W4Ydmc%E9R%^37#` z?8Doju3Tctp^#{@SwV&qV@a~s1`_r$|BTr<86-XWD$|-dJdn-gu4&<*cPpUeO5=trY&;CCCCI}d(7z|_q(Bsi- zSLzb*z+Dbmafe*wJyqH&AY@{Yy;(?1^4Bs%a5eM)9pUu?D^Su4*Vt;Nc1X`&P}3CdiS zgPV9TDAOF>_|(kOh0}=?ip)qXSUni3{l2pw_n$}E2BreKkwZf&04=WLmip0AGTO2V 
zoh!esC6Vmua)rzHe8h;%*F_FJp=OL3Coiii4_)_TpRRG&kOX??53+2CU?W;#o`AgF z1}gwuy}q3f4`lhC-+V#3-$~WO#cP9|3alsO?3n@Qb806vk4af(`3$bl@Q!{IRH?UY<%hoXNS!UlmwqF0_go|*9`R#L?yeb z%xIOQ10Eb4veJK(U*+(dctxg231Y@T2Up9xV)1cy6snjDq8d4rK>M>7=?p_n%4f{m zIxQs`Z|&Dgmstxc2=O(ZlI^-^#WbzMx>-F%(S^I4iba-!8#>1(h2e(WO8%Im)c`l2 zy837;I{Oxza?mf3;?y(&`Yy9D68kcoq3l!k_{Pk8k=*pcNGJ$J%t~VBuH6lXohs?nTWQPyugl?Le3)l~O z*!Uxw$$x@=(BU{~JjsfcSSKeg z6zRw`43+7eF7x0Kl*3bQOJBz)S83 zRrJ#H;bFd1?R9D3>QmI@P4=nEL<}N=1ux|us^-P8lP-8Kzh)$g!`gO96LWqwK#uma zks0Y#wk3P=Iv!<;Nog+eDPRp<1j-o+7$k(?$398Re!sk`zQ)7k82GzvaOxmhavcrx zYrVf2ZzYLiTwmmOt9!$aXxpGd(_d5Oqj*e*{L|Vy`l}F?4zPynAMT0^_t*0r495s( z4}H{CVt8DV-!m|RQr?zB^6mbJsFA%F4$jBt&&>x|f^pVx4b^PRB>>O@!(#g<1jS9{ z=`<=|4^=hnUBuKoc-GVk8|!^Wb>?9@Dw;yl2HTWyCpQAyJ}7HD&gS^bHKlvq3W{4A z*Xh=daIimq^|;>z3CPLh{Ub?>nwcA+ZhxlNbCtuUR5iN7>H98qmafFOGqyM-A`k|0 ziw-s%>3UYe6|t#;3_$Bf8p{8)zUIuEh(dtEW-cJbu87BH`N+rw6lN2%C3gn5ld z79gO_kW8#JXnJ*;M)lRW;Yz~Q+CRj9R!oPO@60P7!)MP(7`5!6M@~}5JWZxt?RWb4 z)E`l~E?+g#0TSc2X3I#DCi^Mp9czs(9RllM$buIag;`bdMUq_#=)z1Emc`JT%XOL7 z8u@_w9xZFg-#|%8*jCkkqw`Aee>C)2kjT)Xf3ksG+&E9@p?l#MyRQUNR0m08#g5O zu{+xM*fGr6hQ0~Q#o#BlA;~#dl0LmvHJU`}(1%f#4F@zZvszNZdE}FRrJDJI`iPa8 zyVx6&Mooyk-v#Z_WR9byy;4zQ;`IpWnZ2hVU;Y@J`@4G7**f zcIs+U?yrOdIW)bQfAa2*D&$1xu?h}~zFbBm{*vDwj=}t{Yxew!366``w z`qJ;BwPjBac(0NLkKdmtDB(ZqK%Z(BHn5B}k4R<7EC0TumG(><>YV16jtpTh8|qA! zojZyC)5Mz6KuLlPM`5ZyG&@y~nNWtyrHz=$#=QyjVRq8N?cGA%u7sQuP|gi>PuNHe zV9&|F-X^dvyG5=;BLtYZZkT3gIq}ZVD{#@%M7j_!sx|uXxs_S#@usOzS`>qr1`lPV z6Y%0^hpT#SnB=C#tP<`5!e~m;oUx27SZ9QOE$`C#hA*Xg1q&VF&ql!=s@B{(E@y|3 zNyG#C7x~gBlP+lJu=*BcP@WtN=clm8?Y2}uHu~`=JvB(iQQfkO$rg^#?{`j92)_y` zFEJ=D1GwMA}k2U;?f4}v*z*qdesL^ z%~ID)Ny-a$6^}KBQ6p);VOYPQi$VMVBV=Z&e?b#HsvIa!wL?l=!z2?9ajyvK&y@FU zvWfLa(0i}me#K|E2aZ~uTK3# zn4Vc6)hJ`v#Ry$p#!RjcqN)o(H9Zhe7?hV&UdtB)jE?Nl=B=w+t>QElK4$;InnvXx?Z2G0rtN2^Bbu8ZDd|yuJPD@bZ z`klXEzM`jL3~GW8cXAvh*rXG%<1}PVE-G=%Q$8ZAlkscWq;xDZT-vmzh%$sJ4k=fY zcu;&QGWPng`XMi<&v>=g;$ZqIy<_61Mr(Gkeac20K4F5g#0ZkL%;1jouM=BJM==zs z7MW9b5e*V29^=WG8fT03HIdi&^krBv+_^DgHKI2@O^q!Ynk0ucj}|>}T6sp0af!WM zikgs!mfz&eeZKI(Q05WJ<91VvYqw?`vAKtIfTufSzuSwLsE9M59;>_>wQ?K}Ycl*E zcu!`J)1b%z!-Qzdy$YqTGqb-dkO5IDt({@=211Pu!4SrRlKGBkbut9ybBbxS9JbMX zqSb$Lh8>E&qPofRN%tMLWeIl$NAGp&-wAUc!E-+JqI9408 z%$nsFIrCEM>g{>%ECfF3%TMDnCS``sQVxsIxaVbbj+6E$; z$9>l>G(DG`TYYae|(X$GLQww3I=zt7|>z`e^JB-@X z()XprJ@q8-q#zrJ1p1CyxFqDg9<-%aF74XKPAv*V`icNNu00z_-m--AN#rgjh|Rm$ zy(5c1PkQ@Q?xP}e9k%7` zkS-|lJbdFD&D8)JTi@^$=H%rkwTc?c(Eg5l>!$_Sd{8-Djl8z$j z+6n&zOj!O07TJ9&yTuEV36Pr}tb6)r7UV^Y*hrvfRqN7TaDGJN6Iy8rw`gJ#Ywx>@ z=ZFtd)vvH{e09>1j&|Ay0!#&D_tfn_O0Lx!E#lWv2Z$K86dZc4o`=&av20f<`HBqd z-F7*kpX;tBb3NBvlUB~O&&qbsPfhFdzpBi^>=AK^erngM+lilJ-^s|Sq@rcXhSiH2 z)I9-g#TurC2KI6Fyr$ckIBDS5QTTwS9*#r)w`;YE+=Z`tjiZ4+Mh;V|K5Uh6GrD}UvNW7lx2v^+6=-5( z10+?!VWDkDUA%bC{fsP8H4-)zH(Tk1axdY-N$^A=5@$U0OW6Wb|gSw1XqUOV; zCbaUa^w|5c^Ybkp!N#w`J5@I{^b&SG6?gVkYEN8wS}+bpqGDIDr?|+&{ww*dNOJ(r zeDGX&433V8G}v&q$HSZ@Kct>KuR>)x=uM42jL`L$f;K-c^CL?8~)Q$Fl%W_I=)BWBEmSBOs zOFSjg>80h8o%qDB4THFV4_(x?G~l@`7YsFKtU-UM zRvixu2?1`}Zf^WvO*w>2&PFdU!K{YBYO>FrK_zl*CiPB6h+@;(6y+Fgf`+bgXNvW8 z;^)*4LtisOv<8mXdlk_7D9>0RSj0K}(ZJIC4D*2`RZe|1OMgDyQ**4L`|l&OZ*3)o z{*K1!xQM)$?R`|Sdg7uU$^E)%;g5UHsH%|s34Ea^iiWUq%{a&{O)k5m_XqYPa0df} zA%SbNZymOwqd}*pYGO;_@*&9hJ__c8n?sBm32QWhVY+XM@I^3JTPOw9xfGQ?ZFC>` zt#*|P8gvO#S|M5?_@tzGba`BR&jX51XNv1UG(uj&nJ2$x>>F`dRlD|Jyo}HwQ4?e) z$y|h>Y7iUGiOxV=KXN}@QWiTrfJzD-7ouE>!ilU~sTz|!J>~w5?C2mU_kmMK$#Ji7 z7+L_)K%ZHi=ID$3aHRM?*~f3g-KD{l$JiO+mX4Y$+seX4foyz~_);=%dT zBO{4=RgvqVSM4lGH}xQ#?0L=lg&9i=x*79VWp4Y!M?z8d@q)fFA9Mz%c}10FFD_{g 
zy}e^U&4}wavWQo>6@3h4F|UD929RYJb;4jUpfX#acoFItSC~q+3)Hnmx^j_RsK}Q z!ee--tDd2kv<`+Qy^0H)vXcCs?BP9g5$B3tgt?~<$Ek)bfkSWeNmYI~@bvQhWvg^E z)<(wwfx$d+-OX|G2C{j1m})2Nt5Rfdqi=Y(H-xzaam-(t(EcHRczgUS4Br<38W)oF zzn6mZuM|*he|07Qj=}R!NA_Q0usQyJV#tQOu5E3Z{BgQ>-j%339^CK<3 z+?oarug`u>N)<^&LM2{Pa)0CAfRaanRo~@2nf6&5!LW0DEUx(wV*MNf?;OZ hFznIZ3?3-d1Xkj@5OZnp=6|a?l7mpwMIa$@{}-aOh7tjnR<507GV~GU@>V_B&At6Sb30Xdhd!}k@*v`NZEP#`FKHWA3?ex zK5iZm2RA=RAH=2%;@||aNr8AcdAayOY)T+55Su!PmxqUsJyG8q281k-Z2X5Dnujkr z@Q(^0Yj0-m`PTtIh?DEzXjyv;2M~tvk;pXRLw=gvoVCUuHGv+qs<+0$h0RNvJ1pkjN z$y@J8m`($=hCc*3AxE^qP>1{u2w8N-Y(@T9q8X&+hpG^>2XaG2jVZx_(uN=?B^w-Z zC96Q|<0JnkTnCdc$il*ZLp8QEcL8y-Cr?3ByyHj~g9RC>fq1w&59 zWwOsPJTO17KX!s8=xS;YPB1nU4HXLK30*4L2?mFN^KZ%icMK8^_O9mkt}Y-BuH>_3 z`giQfgx_cZj{g)D3bp1P3c7L-I|F4wzuax~eiT}K+dpVl_PyYWhG`BbV7Y+y4|BfqJ78V5Xati!k z>7VG&+8?)I`Q2$Ay96>8(I;W6;6~)~PcyL2U|1zX!F?(?e1l2tZdA_^qA=<82us;Cw^t&eW`IDclkim4!t?V<^W+Uvc zU5Xj7YLR?99k|NSv+|E&>(dyf$g7Ldub*!demsG|I$Z2C`R%_fJyCw46*hdh$Tkv| zITI54bwIO-zR|Wat?WU9;};ulRUrN3-cPg?3RNd8xovEBZEd+=0Auwo31ZN1eSz+89kqr|yU7x-@?Rvyqc^hYWi*}5vpcP!W zwREsvyD#_Y^WNAl>f+=~DO`r$`RU_qU*0@|S5JR`eZe;Dt0|s^_=OXowgn)p7sdlZ zvm1^vr-VhjLcK5R(+zlpv!R{~mM&hl+Z$WKzt>qN9&eLr&r3=cXVKbRDliW8RZ;HT z*CvkHGY`%%>VE!E+}%2Wb26)9P?4SYZ_!!%YV|2*PuJglU7K z2pu62VSK>sM=?mxQ)23ApW*e^6;^i~A6{c;9dIBx%~)y;k_h#AMP%J<4YFGeZWW(L zb2rsDc9#5IzN_VIJu3?}>j{IY-??}daL{e7fbU>3hSsqy(=VMI5E?$g*To97S6i>g zO;^}0g~2=>!LMzu7!V|!;4j$^KR~Fz|3W-az1HoA+vyJ&RE)-Vn-dXzoLEcFKwffh zg!&RQi_GTS6ghI{!|p#jh}B=UWYTHG1${8CN*(Mi8h{lpo%gbYq=YpU^zIj{H$JM- z;1(USRnE*R0m~MlYq68~Srh?)a*zF$rTbhK3mjaG#U?}YO}H2wP~$XWF6Y2-pE?s2 zu#LuMRl_3l-sxWY9!9%AC(@W&&v{dl5Tv&(24~(V-yctjkO%N zh1a)};0*mPnjpTd+LIbKTO}tFL$~nj|%?r9ph=wkWL1p*Iu9*=~lYCl5LLrw7 z#)ufjLbZ*r-mGubrQ@ESnVE+VrlqnP9M^U2@bG4ft|pH^*3>it3&HpCp9%P5%y!cH z8VYFsK;JEbQ!Zvg@U8(`lx`kB6r0yu(jZE3T5gKT*%0VY<&X}Au6s1)*IIjSQrYfB z?apGJIejfrW_=_cij_zYkN6<#Lykj_v>6nj!Rdf=#&nhzdH`Q6@>m0s8B?3Q+HHBT z!(>*lMZu}hyGNu^Vd%z;$Ay$(ex-YgmQVx{u-TH_1`S!)C zF-@UkQoV$8RqrsoqS{xfa872F{&2B+<{9n?1ir$OWZpebm$3UNuuHuK!DYH9XOnu~ zdwfZwU<55&l%xT#xd0e6kcz(_Q-Wza-H1)$#t#b<&bF-&U*Mo|4g`Ntb+Xq{8B>^{ zk~Xe5M!&tEsucnXXy_WqD;z+H&^bJqb>}1G*X@*C@G3}eRedgUFk%NmFQjP()9r2V z%$?NFqK84V&GN!FW$U)qA+sI?sw{Mrp`-Bv;88MQP<$o#OzKtemW zAr(ATv0JD-xCOlo1wM-DMx0JEvQ};dAMH!f{fRFn@uW7^W>>S^2lIVMk}*p}(Y+^^ z%CjJ9#IXi;kNUz*__WR=y7MB+2i1vL@CKnsd+?%O`{EywaS*+Dqfic5Zk<=)48c9-#71`l$Hpbh$0GePHxY?%`h?E6(A|hp8C!mRkx5%q z*`owkBs2w;Ye2R<;hERNYxC>&6d>`VA$JJB+0g+8qPm?F-UuWX#1NykCOGQ%u^A6m zywutw+ysN<=c)-fl&y?ZhP(STU&7nfHQ3k39b8V=E2$YyvPKSy6P%O=VbrD=${%vMVx&qsN_kBC- zmn6VQFRz?R8U3ljiM|ky#So^tH_tYc_Ddo~P~PQdH7$2ip-r&_bAhG)0NBBsQ6_=3 zbNj>7gL3S%C#}Rxi%G6|VJMQyOt>(7=Efxi8z@iwbmgvrXvwVrTPOw3ieZH752ePi zj|INUl4au0?-D_-qiet}G9MO!diGQ49(%yrkFhYX^qedke=hxu z9I=QKu|FkN4rj}&w9IF{Vw1L4Lr`>3e@KoBd)#nSieq+72=>GiuB688<;KM|xkCu* zH!|)Dn!0V1a;2w2sbsT2$%NwQp}e2znOIPneoWj*C(~Rz8l4h%AMbyshCU%b)Y}8N zDozem9Zvc2auW)hO3|^$es6uPR|xDRJFn)3UB;66O2b>)4q?)u12^V6)$ru@ z*5f{+7?#~eigYaGcT#gzc4=~YaAP^6xwSSo^rE?3wXruz)wQEL&`#Caf!|sY)*#gt zN4&>~IH8(6^rLjK7RSDSSLO)@c7-fo0!;@_)|n9T!*akEo;YV^rP_9x3eaZ&J+?5= z6deYh1$AnGS0qHNBe%g4-964SNRm^LEn;jlcZ>URHBnQ2MSqs0A!e1bmH8J@68n}r zEYjyoHaZ!1`QRaKYrVYGuz2_~CuHFxIp+h7s38!l1Oc)*>;ZK&rsbDCwLSfVb96zI zU|>xn3Bnj3X3*Ho>vk~u95Mya90?ECVB>IK3PydBcQ|&%P33=L2g?Lu3Xm`_X4lrM z2Tbg&4t8fk#tZBiu@WQPyC?_M=oubhMT8?rw`^?jwE}PFm`n_EdSfap433x}WPZ|( zGy((V2cg`-2r49wasAwXqHKTRi;OtHh3^EOgnuNWj)wXSn_Gk^`>X>PCD(E-a2E~_ z8<{+z!PKh4;+w%FU`d7I9Y_*Awv!K{lYnUrYPgB@MR9j&RR3-ngJ0~p@5?Ajvucts z&=zGKic@c3mRK=lm`@JMG@|U*nAhS-AMYzAUH4pGkxbrcVWqOEHbh2aViR^@sIznq 
z;N3&@qeRi~#}h`p3Mv5aW9~o1m!Rg?Ka1PmO9f`xnQ6#SKchH5PT8!SO}z6puAY6^U~TuG}q2p+u3_hmpx55QrD(X6pSz>SU7q}O3Sn^ z)S7)@49kAI}{itZwr?>YN@3FOB#^E0Zc2T{{^- ztuIq2lW3sI#MpT_haGt_luD|Rvb*1Y)sIUvbDrr#kgNpv`l+Mnpr<6+;3hnX8sBwG zQ_w)YVJ9rBmxRiNx~fnFoDo|jwV?CMRm)O=B`J3LwZIMfNA%p&0y0$=3C^z7a$4F- z11?E+8lzLT&%1=b^$$KF4Eg}Tbwig>XD}lgX)>{wX)-9I&wM`^S1u&UWf@n+a*o8G zdpgeSf+p|?TE(o{sltQ)CL0NVV~)9ud(h~v*h1-|96cUsf(#NAmkCjn}JQE%fnI3!DJHVVC%QKpY+ zrk$74C-Za77`ZgbO4TS*=w;##9hLRh)}v-;BWAJpMgo&h7qzW5-JV6ldcp(lgSL)c z-)h8Z-m@X(mZ`B(gtE)0b$6c_wXSaOzO${RPA6lj?yDM)BTWJ~6CB6~sf0vOB*>5~ z@#Y=Se?%1nCQ?;`&;>_2&5(wuvxy%+rb#}_!E@Wqkn+FznRc#c8JRNN!C0BVcsM~H zmw`=U?te>Oi<#Y3+c@9v?$1nx&ejKFZq7r6*Z!!jIQS)*SBpuizs}tvN1spGygf=> z>UUTIgHOT8u6(wlYki-?4N>tQFEXFwu0*p@T)5VwL=smA@_oWTu*`>aFMP{+MuYZ2m?vzVSfPqa7u-pLs9 zS$Sxf1Vp)ZdGrf-LJ_QAL%L0e=?!=wnm(1>FPw=yazo#TTM2worGjAd8LaUTo4;3U zh1E6b;XToejeV4^n>`#dPXBRO{H-tgsIS>m}Dm{ zQaaGW5%ju8_{$9smk~ww&F}PV*FWaKV@vn-sn}DOT0FdTJd&<>+ukpUek)*573aj_ z{V|v9|G8^(h)$ONZl!oV6k=oDhEq2{YNzO~kJXZ0O$NKQtHmCqlwDzckiIS$OsL}I z0MN>5Z%O0XrAB`;5=J*P3&A)kM9G=i!NnlLEiPVYlsBn5$DysS`!qzm5`i%=#o>X7 z&GVj;`dl$d7DPtV=5@J_WYw)5jzqS*@{3B%V^z6DkRPE1^Q8zwvRtyNs)%-cLvrNv zfcJs06sXdRMx6QMo;2OVDu={EDivDpC7=_p?q-rXAEnpTr0<|dbU$XM|sqdA#q?`om2M zZuL7guH3hDZ%|E)yr)}H7$bqCy}bEbYeIDw$;M0$(Ky~{K7iX< z*aO~lt8&oZFs;dIyktKEt?;NS>bta?K=k#wVdOLM_z$CPM^wHUAv|s+;6ryCoBLwn zz${H-gCg%o)s9JNyu(|CMMb7;6+Z^SU^M-4Y3_KbhXtlTIGTSXXXqOQIYr2Dg)KqY zh&3~7j4q_6Rb}FyI#2p)`UTj!^1!ERL;k?=z!(fazI#r4oA1{LaAg!LHi-BMlp}3X zY`Efdzm~aFbeoV-{_vCQ4XN}lv8WD+7{9?Yu_;HiWhj^oEidhAN(>Gym5t9*QyvgA z^io^vNeUz;vVvZ~anwgBxb*jy5; zHqWy57`IfaB1^wkmWgO~99cqD^~?}LWh)e7emn7AUjexZ(o#E(p7la$HmP2XK&op} z*!M=_a6e~|M+LAx&PHc3sW>)q6sPqat)Mh;L1BFPo;s^;76AvMeR^Jqf}U9e~P9-2Mk`HF-*+l_5YGf|7FnrH_gQTmudS) zadGi;3;g9G|4X)U{ADY7|8K&n+sija>XO}HOzstT*9r+8A4qauBQ(%<&0^W3Z|XpD*`VCpSzxKP)2{cq{05bH_AbOwt+iSI z;mL=u2VMmip@4#?*S!CCXGyONR0$8%s3n_Lv#ELe9CrXr>CK4&D=XnC{AQhk&CJepwFV`asBturNF$=)OPzDKUx6S zE0XLXwd~i>x+cj>I8>T0)bH|^W^dioxDH`#qYR9`4#c+8>!~`vtg+#C%Fqp>BDB8V z)5m7^ylt83Ki=E>`A?5ErD0y9Hj<#AclnP_uMF({$f9q@`4%ic{vh=3s2c=N1LBkm zfvC9EKmytjRL2;9Ev$NZn#h%Kuk?N}^K!fU!R)iKK+zqqZXX-XmcaUS==Pp1oWXVG<{r0)2+`5SD`OJLtH_gnrl4>UUQ5*rESAG3fz0|i^2NH zGkBn<5^(!<@O4_^+$4WG%Cll$5Vl7eI5x@`L%sbR?1&Zjav6}@W>+ta^`aBm<>xz8 zpsz{AU4+&GYIc5ce73cBxqiYhYk7OOBNDTn;?l=K2JLuSWmpKan{H$%8B_cyHiWq3 zkM-*sE2JDVZ%+_4Ac6}`(opC_kr4EoIB6Va@VOwwO?Vyi3#Om|Mx2AVe{zH-5JN~fOdV)>WiDv z#gH(jut^>@U8+p_e2@v5sS^|iuw;VS`>xd_zsYUUIL9AOZ3%&0${Ve0bOOsT{%R%I zji16D%0&xaf#+JB_|chheBGIN9U=w<=!KZ-PmoLMk_N1V>SN4Ex(AG21_!d4f!HQ6 z=J0}E;lN}c>wXP$gcsVq*}c4>&3No07=z~w#}T04kfT77 zog>H~abkXCJ;gxm0tS=A2WJ^LJiM9FHUm9H$gVQBya*a%_=y+S3_|=6DXh*lLzLOc zvj-oCx&!wu8$^xbq?amyVVlh*Bi|lWT`WT`%>42p>qqO0N>LG39lG{p9J8;K?UIN2L|<2Uh( z-krNqeh+=UvNuh=vWNU8_yy7@-?0{0!++M?FY--u%G_53?YIAmrPFAX{}JCp?6~0U26~=s950>(X3Wu$z_B* z9cjxjrxh=d+nQafSmY=8M8Irr{n0_jmn7uRb-nr>YC2LE>mTsv70+B@1pDE(35%mm zaxVgPXTuAx8LU7+v+c}jYJ^au|HQkG@Dx=ce1jdLTEzP@s`6wd90hzg=6}PvoZx zPZ&edIZezlQ`mhzwH&%02w^XQjvEH#M~6X7Crldw>4#pwP#0~);)I?!5kmgJuF@rh z^M(fhEdKm+zj2(U>BZq7bBCai|2D1J*6h& zSZ=uOQ^Hds>~Z5vbMY!y)bUb^q1Ps9Ua`Q4fQ-tyK zba#EE?da3m1SX=yGl}%3A;@d+B~36GPGFAV_&|C9-Bw@+shtw!N(NPXr7z*S9@El| zx*d2FeXZixUK{NFMesGt-z2ZgFa<{#;Xzg;;4Umce(2TOl}b2bMpm@X#gnZX#J6Q^ z&XkfkjoI?$oLvFelK{d^Z^|^P6M_sotkFjn(S!*^IzkyUL2wWOOR7z|<0tFEwP%CA zz*hjtg}}tNmtadaBp&`(7&OJv;-7HnCz6~`X;(%x2w9Pq5QTf1H?o9j)?wkC;9JRr zG;ufteRl36v>64_e($_wC8lk^eT$!{NN-B{0KyL1C>F1JKbLD9%u6hsqN6v%P3Q)UJeZ&d?`C?|W>pT;AvD`jApp&k zj(r_)q%-b)@4Pf!2u&8{>@`V~Cm;a;{O@PiH^=TJmr)Pc=}9iiJy&7E4V@l-n!WPM 
z3$WH+`%crmr&4WXvJK4QxWmoE8(rVGD`ZGCBHME>d9)=_9#yrla#!p=8b#jcvtgmV zuk~03083wF`MbwG6LN^I+s1$vAAn-b_Zt8$HPm8poJ!NNk3X^#znp=(;%3H7woFd+ zh+=m?eQtRJ*NCPwPVf2;&wr0hta^^*L2RXbv4o9j#F>8IgLUCCdM44%ILK6g>OAh9! zcP%ss#2`FBIGi7P#J+ob6B2ezzlZyI9AMd_$&cFp^n6NY2~{i#%x&t1I4pI15Y1z_ z`Ar&YsqM74w=aVheRccJDUbS>kW}Eqa%R5ae*nut%D40x{?9!8{-qCi5k7` zsZgmX7fSXxtC7nx+OdVZP-nJVli#u5ZD^N_zsBpQR}IVAp+BS@936)pp&dgVG#z*y zUtM)w?+(akP8Pb$gbJP*w#Gy9Q!r^h$*N%EwQ?$qsfIC`ricUPIiHHSnV3?2Ql3)7 za~io;iFOrSJ8}ndxN->u`Gi{O<|Oq^KALEmn3*UPQE{+vP&t{H*cI_b(rPE?usW)y z51G43wH?>;Yc?{?r=BsovN+c`*SOcX>7R7;yOkeNYNp%N9U(2c)N1NGw5@x*Oz9uo zwvBY!th${(oQ4AV6ZtEm6fQaB(;{NZ^mNjB^uipck{6LlIBLr3xoy{jVK~{!u#bB; z1UGN<8w3X^u=gZi4sg?ssD;Lkb}XA$QLNhUs0o`_zcjD%%&o=L8}Y{KkMcHTFV!5g z-Xb0upEXRI=XWWuvr#NE?=N_{^7MXb-$we1Er=J5UAqL}!m*^srA??w+oW|OvUu?6 z*pD!zWp4>@v|GB3!MI@%n1A&#d(8oHnPZF`c28=)`TGtOVzcA=la-vltl3=5Uz zmIEcOdGr7*O2Wj&WOFl0>yx8FXR3H@AWU>Ee@|KJGTsr1IaF!}Y9G?pgYkrZb8_bM z?aXmQ*U!Q6dynp8%eBaY2c7@MOlzZq+s3Y6tlz*pa1H$ZPQs(c+j%Nt_M7+X){k)D z@#uD7qW+p6LK|`u&h}z6GD6_j^?bgr%!OfibphPQJ3Jng#h*S}!Su=8IW5QAQLsE9 zay`okP!{{G zd^)vaechpo#`D@}zr7@MFUS!+0i1JP!|EWI?Ap|_=RERq*$(YLpxo!qoU#pDfxDC< zSdW1(ZJ#;OhodGcX(=B=r>JfpF&@ZRqQPt($f+ zKiD>zv=9iysF*{B2)h+$kf!Qkbqer)6~O%BC`<3(cjS=1bH0bsi*kD9Uqxavh6AdD zCN|~oiEk6KN2fxtBiY{%ARSQYj z>hIX!llz125`4z|cKjQ2g+I6_AdE@u7+)gpF;gr{b$sM|Mno9(ZVWu8jZYdG-7X1- zO*Co@Kc=A^loOXFBn|6La>N$9O$6vZQb)emOr{&lA5|+!7go%p*!9uiJzKKo=+_;d$xMJ&mO8U7BD+iMHkofs zcgOA${g{S&g@T*9GTOpNMe69R%24=OBVzFy?v$`uM`E$C*^k6h zjk4(9RE1*8IJnb>XFan%kf|4Hs?jM@w^PT{CT?+Oc2s1S&8TNd&s)Rm%P)>;2&DK^ z>C@^{>NC_N{*)ytwJ9YjnF8im=1$sN)|o^=BJ?86Cn@f-@p3*Tbfx8``=#lni%vB? z3}gxG>EM)CMn76VO0wjh#8F(GTmDlnpCyCUavXY0(v~F=$4|A-4OR!qHZb?xee2n>& z;5P`i2wyf_raM-15Ahp*9gb_9UM4t}zr}mXA~OXnH|n>KDUS1x4Ucti$sSTjjW?Po zmg0};Zn=csbrwX|0(1}O^uqAdzfUSi+y`771%#(IbY;~_4?l%+D`wG5rF~CkEG3## zxQ`<(?S2+PS%n^PG+#$qOn%>%hMQcvBe=?D51J?le@-N&0xadVm^x#=$$p+AJ@h?< zuFR^jAhRXvOky$DY}mC$)1CO)aAOP8nWaBzZ$`4rbF8zP2-M;DT-w<(n07K1(To&K6{0z*0H=RChnu zW6f{6`);lGC6Ek54nyR5RB5BO_H->CeCs)#*Zq{|30x{G-+*5xo2EtHAkl2y&*NX~ zx?lJCZU?d*x?jErdc0k(KT(hhUvJ4!Mp{Xg2PrKL>#h#L24HKf5*I-Zd*QB;_X zkIqL+!$R6y=S$Q|YEH_5Iw{hHQv=7WyRN#Kq^DK|4>kWNV8Tm;=AAo9EeC4iuB?{6 zXt(j5%;k16!DplUVIp;s5Jj9H^x3@p@;-&Qzbt3`Q0)6dxs$iL(lm&!fiYAiBegPx zmZ4`+J}HHq(^7|Kjzm7G3U`A;Od0>0Rt$mi#I-2k(-#MNRo@b6hMQCtJ1s*UwV<^V z&TLw0#q+F2fYUgAET)8rA)lqvoS-3=DyX1kj-!*FFi3^LP~nqoC7WPnh|OlQUg@`+ zo+#_oQkn)O%g-8R@d_34S^L2y7d;86{m?jr?BF%nd;`>C?#tyqcympQv&XMK)$Db& z1E2EM4SmE)m8)ugn}%3yw$XrBXk?D*qL;juLdJO{0A174o6=k9P=due!{U4I^3UjY zZkLvy@oobf7d~q=^`M6L5XWkPCHiL%Qa`7HooUc3&oj@pE3d9-Q!lf!qH*&%tbIQX zJVt++;o&&)WA|%sVxk>xqH`oPkxRYdVoJ{GDOxJ2`jynCk5%vgczJ3SkL~wGUSp5d zrsB(zKG07|Y>Y%K=ZUZaOS2^K8SFRH7-EycjRXTJUtp+^HOKyLjSG0n^gni}j)ob2 zhAs8IIY^YwcM_lSJ|Z-j-t}uliH|ty9au0(c0E2)v-$FkICtWa%rR+X%&5Zn3F>5G z^t`nE3UgNZLw-@*wxpM&#@-ycAZO)J6-fL_41oL9((*-tH*4c?2Yr2JJJ3t(ye3Pd zG7T?-U_Z-0?R~y1xs8%sVo@Ds`z&Oj|d(Iyg1E}W2sNnCB`lsc|h|O zX8K1COr|Blj-Ve-f*qgcuc8#^V8>Xs#7jVvi`BtiX~lcA7lZnjgoL#$qZ+6U;r7M#al!}(!dpV* zPP80~ze*Lcv(P2-so6|MUhAo7vvD@ky>#cgf9W52~&3+z6g2*bC2Kpi$}ywzjn z*lX&k#FZm;(%y%e!0M1wXU3KH2LV0dkl%R;E*%Y`{2&U-eH57%E#`ZHfXyKn(N=W7 zZ1T<5QU{~VZB3r%jTFDE2+pF+?UGtD;KbSAz(y~(MQL0A$XfskIrN=%BOl!K>V!qME@eL-UO#R zSv|=iu@Yx}n-FrKuMgLitf;R{A1C69v$oSf;Hj6#Jni=>j){(GA1v#{pe+~Zod3zT zbTJ@icmPhGZ?g#sH)=L`Q*23c(X9d6&`I=r;`&d1FJ)}zkLpvW~BNd5)HNc48X6k=- z3co`7NzyHWX#D#pRWs43V}gGY_^ZeyzKH*O@mKy@SPA8z51T>`N&UOj+icm!Z;So` z^0EPz0U4s+&v0yT%6Vak^2d7>$bWPsBnr8|Ivus3Y7q|2Ur_t#S|mv)_ZP|&4sZik z0{duQp9-Gq^d;D+QsGzi4jGeZ&*koQHu28$`qBXiYv3E=9nv#Go8G2;!G}%rO?<*9 
zoHjC2__%K2kWU60R0+I`#e;{v98dFFsOL#NKw>Lx!cyrU!WH zE(9P~>yf2%C`m4#8~g6+Y$wRTJJ*DSlc^L4-DM;2v2^4&`MxJcVP7sMI@df;yq}$m zYrBk~_b3jd+?5&VmDrd7;)j?{g$vrDG^1?8BrRekdtefY)Jazcm4NuMaKW&W7mN1B zh49nl#FD2+{$Ruy-<%Znl$FKQq~MBjCcvDrXr%Mgo5zr;Q_o>*RP$#KO z{ev-{$inVYxRp;j6@{vs<~Y0LzDH@-DLwl8roR5R0|m6{J}(#H?gMwz%`k)w#>DW|yNpB!tdCy+3!v0mk|PS$-7FbYR<7GhU@(vSD@0sf1seN!jrULjP?esG6<~4x~&b zf^(96`e%y+{IwuF#Cz^$sryrfqjOaGfV0l zOdRsg-I9wz0uA(Cu8}8c^OU;-n0Y@qqFu1x&_ik`VtW~~ji0L>Mqp;D2zB4r+L+zD z6@A#+Rl-gL#K<0J)6aNI#cBA<(U9TLI98^IRGHY@N&W|O3c2AYd&Yd#!xb#U-HB~U zP*2sW2T||KX(1(dMY9P_%B#Z0qil-wU*(~Mof{1n*p9Kb z>N?Uelq9(%j#Ep~r_l;J^C!?uP--FO9ZAZdzlXmv&o^_jW~PD%w51~&sGXBW3NIx~ z(9b1H-YXBkolgjXgI)LCt3A_tRVri|Z{9E5^D^7jKWR*UHT-0$BTp?%6r*XSFvgyaPiqQ5U(veOObGZ zr{R0mh3d~VC*gv-7`I=VgQw;nT7rq{uG0PyeV{#gn{` z$1Nh>_DjlkEkI~V9yg;OP(<3PevC9FZjx-H_OmQVNz+opG7#N{q7#FO?Czqbndu3P z^o#2kJZhks4<0eekKfyC%XkEA@TWVH($b%$Nw97oOB==lW)aX<%5Hwqb~sQtM(Aw7 zGdBIonB@R16h=l2qf-WzbP@c9UBz&kj3uB)Bqf3ZOo2^AhwTjiNNYg@xRb9aO~*;9*AtusUm6H*^wm95QNiDLj0Fv(r+>yagYX3v~eecWJ4&G;q?^lQJB$u-&ywG zIBIS>IeQr)v>2kW^~G28%e0~4_u=m3!x`L*KcZGaw+6lY89>BlJ1>j70O`DcA^YP9 zvx*432LROPh+^r!-6caxTnVSm!w(cC13zq&pYtwM+t6>Z^b1&LB1ZBhwUoJ_&)~uE zeCqvc=a_#|#Jra!9UP! zu@M*mk#q5~~M9HEQO~)&x5Y2M)M01~9rpu68%NkA2;HeS3Q89ep;5x^zU9 z6n2m z>3KZY&}&}Qfn|;27P5LY--E>NdW<~D71EG_bD~I4!wxBDyq?UfEK74<59J!|N!q$G z5F!$As{~E(tAgw_bYxKZa2>>Xn=ipQDpiuXs$fqSUK#JN2pGY;!wL02nzz2yzeyU& zZ3=a%jP$iTB2`PSth4wlU0Cu%y@de%cY|`$iSQTJkpnw6UrHif8SM)Bwtn0jTSvL( zUUQT54>yG#yWIx*rgn)Le!aU}+wbiM{(7A1+YaR8a1Uw#_)-d`*aj-v2=anj8}9D5 zB<@2-Fm6TDCMgMi)7zP`v?mHHA+vxf1J4aidJ`1eh2Iz0KpskXTwW8M%4O2X>Bm4khYCjuqPbLqvgUqyKl5X1f^0^ zkas+-c}b1}?S3LkM7o~R&xE5fd}osOXyG8PJ^>YxKY3!&hp0O8-L5?P5`1%O_uc6| z3LB+ov!)g8igHPuC3@CL_ke&B&i6un-=yDy&+C-kl-(?3so?au!bMx=K(~7NC6;6V zmh5hZd?vQxv(BCJ9fJqOcIDG~Rs9%NrSzK4kFEN5kp|@ym31?9^S#Ay>4l+#iLMKJ zM32EH#IZK)fu{vnwlo4^co&|fi2a$C1r!#WRrdtAvi{fd><%((?*VIWl>H<@jH?R* zY%_`&(DZ;DS?c-12)YD;fFwqf<)OA?z(RtRd+!2uuXye`$f8*B8mS{bCVZ_q;d_}b z4Jdyr?T%>pX~>aE`{S=oG3S^V%Jd}YM(p)M)j8BgVq_Q(Rp}TrYytF&cY4ErwjdjC znSw-U;hQLLWXusSSiqGU#%c(;A0n6*qi)S-Z^NS#)c+I{D#9B7IFq zI7OQyM%IO72*DT(=!pzd7?tspDLhTBkI;}+hQMOOu(VmIam2?jTFixY>&5+Ci8^|a z)gu|8r~dLvN(r6|MA)HE{RXz>rK285}Q99Hsu4F{AWg-$PO!+oBetA zkm?`Nf@DFxh2LPr>l7wYXv}zTZ(wo|ZulV|D>i#WaXnGFe&Et48c6ui;V{L}MedNN zeC@|0M!jIyc}-Cny;eqwDTIlZ>|esFFMz~ zP(rVF0n!zq+$V~@TeU!aU&hh?H<1JXVZ8HhY~McN-}05f{{FYeL9q6ajt0@%pi*h%v&9$Kk3_(wB;shP9T>f{P#X0ewhDb0ROSx zBtATpx0pY}BVDum^U?U{_JremLBuTlkrcf5=rHYO#{3yY#8!={!6pbkVK+~ZR|vyT z2Nb$H0&%RlpJ3meJ${MYLRMc)qnYxL-G1b+D4NJeyZKvf?u9K zZk6{lR}^i;YBD=V!uNf1K6%KN$Sqq`Xmi>fIbX(GVuGOCN1d_sB)jK)+xzSLuG?{) zyS-W6G6WOr(*-xlP@j%X3y8J4rW{&RCgp_J99D@b)l^IQ&dL)ObxF zk1NJ_m>!Whd4SFi!b}z*&lZU$3KHoze(2wIkr-XT#Qq)GydQp@pssA9l_rb)Kj~{^K&jxBZ@tRF_TUFlW6D7HYP@Uqi{a7=pL5}se$HiBP*Ka%6NygUwr<#U;^g4ov^MJpX@L4h?vPSpp2p-3EG=amn<+t6%XAB0&3DmK{1D2kW zrI!igbp@-r6$U%IfV47A==nz(su{Gkj#ZJWVc&O(4UfRXd+{p7bj-@Vz_XaH+>*J0 zODa0rm%ifsx6h*#9FGzp{4_5;K`-S&@=n(l1^m&oac<@XJX})L1`UEZBIvM+lw5BJ zfmG{k&a~oAr1RkZ;NoEGU|m%Db`7P3TjWHp)2H;YpEMhm1w^V8J=IdM_|6k7Sl)&G zMenIx1-k)O28OJnA?dhLkMg!;-sE6v&&MdTEBid>*$L&1Z!U)+D0FcLDRqm#V1QuaE&v;O-vPIP7K!{wn2p2 zDym<;$f1F=T~s-V-&HOf0M3r|zik!}+qxTNFS zem?0g$qH8)y-_`c$2HIp5kXiURghE)NRMTcd2rD2(#oS;*{p=9R} zjJ$|w8hY95i8U8iSKFNA#O^w=(%9=9YQ5PlihdTx9E8jKW`R9s<+hP9p4&*8>Pw!Zn55zQ zP0Yqd@BXuXUj4J|#Ml2-+FQlu5p-R;i7{r!%*@OjJ7#8PwqG+dbj-}m7&~TWW@ct) zW;>?IH*<6}zeeYbMkDn_NhQ@qx4KGu?N#fk(&*^+N@{QVe2)4k_{e!1dMC`(Z%8WI z;%zWp3y9h9;LUlJ$@Hq=c#_R7ucpyWDxs0dy1~fYj1e=I?k$Dj5gToa3>%N(7pJMo zMAby`iI~A6F0wvkeS*k%=pn}9r_k22kIlGV2@V8)jyHIJ$O$`po93l5$&X5Rs+-d< 
z66e$m4@y6K2jI9w6%8`E47Q%q6gxp6A;?S{_T5>#Cu8yGs+|cawp&f^eV~4rw=z)9 zyc@2Nr_5LCcI5g_EZ&t#z%s`gp!kqBJT{zjHJxmF5p+4>PBvj3W%!@*fb9-%cy)(& zGOz+ModfPx)3S>fx9>$hgFb>kn4yFr(j=x9lhTSPtdTrro;1cIic;vV(l_hg{(KTs zQ8AD)%%bAv5T3vwP9TMO)7(Z$P{&s_#0HZ11=xh1h{ceTag~!aQ=~$z^?m!CR(H1M z$0nd~?Qj>}ct@F2$&CAL^>znTs;PL{YG8flVzY*&T`8)}0|e{;a$6C{2C+1qrOW!54P>=8OgRC_6rhM#8#B6}OeSKnFpPC=@Q zzQ3WGV|{jP41XU?5-eh(PqSZV?+y?9nMoe_#Xxu&ROy@&MXKL4!zPCoktMooY^e%} zyW{-9=!5o}mg@1IHsI&Gd}<`Q;jX7~=(iY9MufqiOt?ddbYIVRk(3Je3PcY+^4pKx z{fa!vY1uI;22ec4>7U;NE~cbm?yp!da}QpHV^%{`l5q^#K7xkC#B+6a9d|Ql{QVlk z*ImcJ?JNT7NWQe1TOR+U9{8ZvD!b7cO*^1;^=OIQWQ$|x;WG1th0WY&2(A*-LB(6 zpf#z5wo>Z{=?nMSvBd%5;|~T|6ss+PsD;oU3>o_#mdOKEVa0~YxykEg{%Qv-L1Z&z z)FJT2edI`KKPe?tiB-R=R;~aRbu+30!5>%zJRBhR`Q`bYv;#qM#e)*^v?2O;jyd+` z;#7LULuY9PDf>iv7TRH>2%|?6g3p-nXG}u^JQ|_GHETr&%T}BZNe1+z!?Lk%ENohi zq81i;&gfXQXn$GF_4PxEjvrbn*=DXo%O0c5V&X&nIvLWA%wHeeWtafI6>N+w)>N;~ zh1!^h)P~RDpRE_4jjBH|2uFzW46(Yf{uFQ@BP{x1F=mMmD^FFC{B@vz zt$g^rOX#Gv#L%g0?C}CbHAnvpLzt`B&IUF8xIb=y4o&OlYj3&wXA#g-X+BCf@Eibd z+VU9c$Z9H(tQ<;EMnAlyI=(dtM25zm$TG#39}Mvo@g))Kz$2%e8=VeTNJGXU_>P0Y zAgX}UD~ri^5=EgbQb-NkZS9B~NkGmG6vshC_J}7${vghtlrg2+aJ*d1k|vBB4bpi? zAC+d5SCic0Nm|(HMqpT|6U9*-D1f6X;f@MLS|Pr|TkO=Lh@CW(wn9(-U0jmQE0PJ= z3Uf+*?S}?`(#hM2{6S>AWuYU`B2-jnlCw^GmC_Yw8UpB$RweS*2#J|qmqr~yf`O-R zyF6p2gN&_~j6ANUOt}_ezieRWOi-Jg6G#Z3SmMFe6_YDrnvSI0BD_4l6{2WaQZ=NI zX`)f{+6327*r={H)_XZWon6)Om3qzZO0VU(a?xYW*3QvX@08TX%sE?Rr#8Ql&{d&C z)oza5GXpn#8!l=0a&Zr-n>aY>D7ctMa?E+$!sVCM3R9Wi+4BmqqvnLi_=uTnE}yx| zt)fZU-5vT`HyE0DgL2SXzbN$|TF~t-v!Xo62`-|^GYb(5IZFA4n~!V0tWet(dS{XZ z6>=%5{g#N$FWfJLP|}ccJ}nLcA5y*`9Y3XE><3<*p^9d&ZZ5UmS@0@I$ReV5{4vk) zFILif=BP)$xryEOmB`{L+9;CnDMR($nG{wM%?5A2a2_OXxCjpEmS9!gP|po=z~2$d zQsHQ>#~uibt}p#ya#}1;b7G`gnbj9P+GH-b-wn=qOHAJ2pscfgpnvx{t~+~%70VIr z;Q(TE32otWj6qD*1Y*2z*R4X>RK`ApMbNq2TlRX++w=EUSJr@FUOINNo7 z&Qzdf-G_Ngja*W;de*hwbu)`Y zFOF-Gqwqh z@eitj_|(+YLV3A+lSWO3Xh4IYg{^|r$7cWZUYnG&_)2a)3#~p}$Et1BSNbpbNvDji zk^Hu*Z?C;2)>udzYP`pbeG{oBz4%aQEhR=+kf4c)k>+l4E-q@0v^S;VkxV@VGY0r7 zHdOH4oJQ^>7tl2M38e8zGDZ$$01XVo<>HQU}7FW z4q+ahq!EHj36j+C0XWTRl$wZ)k!=OBF`ur7Py705DnGSqx2r!wSU+r z?uvd)R;be71|+!}4=c!5N?(_?YUvQhgEM+X9#-eas(9x@cly$S>dAXq(i3!(XO1%% z-O|JZ9$)EIn|=Fq6?eDU!IYg~p}uANc{C<3SP7DCqF{g40fKK8LWp57R;AiH+u@;^ z`@r!VI-&Pt4{yl_q0o^U#}So-?X9!wLDN?ImV7O1(Uerwm>LPG(AdyKr|CQDlJzh? 
z=jQjML}YBv&&=DhN?;87ZOYnPt}4N92=B3#QuZq!NyeL{-S?E<`Ah#XtfyCckb_WBs_<^lYSFb>*~L( zJj>k?@V*}(RCWHH=hR=c8<5>ibr+5)7q?%7!>`;R14?1^a7hAXZPr1yY!;x|&>d2p zy!&`PCn=p#?ne)R?q^~&X(3Zju&-az!lc#=4Z2TK* zD(+KV1;Ls}9-NR}Jb3|wmbwI-@R)_iv0^D^@sxsDG?=mpgkJo0(SCF!ME1CUL4ReD zUa_w^pw5Z|)x-+UZA%oq#UCiguLgP_uvM$*RI}eB%Q~$WkKQk2GaXNef^*B1&c~mN zplB>gIJT)f)NS6QSUqVXe2mUobqKuQtwB7p81(T%oScIPoOm~PSSpb2F2W3sp?)uW-kp34qLv#he}X}R z^QaH&4u>hdr+@sPjHZ+?yvoHB`UHxRp5TP$_ILE>6Qmn9i>OHtmr<6DF9|txACHXY zs3YM>lC)-==n?Lvkq=8f}!HOb46chze zxWnz1u@|K4_b=q&Bbzrzbrr$6q1-nAGu3)nM9N}X`vh^lgJM7A3FxCnNAqEJ@2&b# zPpInSgS-z%AX%CHqSNndnKAM~U~oys_umI0l>ZE%Bomt+$B!5Mo@ou671_YBh?vA0 zKk{%mTmhl68ndvb%Z{VKG1Ey!Gl>BiB=pH-5~kFS$;=ChMk7IB9+@@plSSrZs1FH| zW7|fJGM&rrH5O^62<4_$`wfE)MeNB>J3g*I`3=0coV#zyy+@*l#Uu68qc#3HgqNRg zWBy)638Zy4nT-7KYx*&`%=|JQg=8$h4X(&?6K|S};2!o#qwjW)b$P8PkBkDCA@OH3 zvud8e$XqU_MTSm2>`NLbj`K*?ejy(x8KJ1#1WjY#dup0R@-7((F$PfcHdCcS6^J7Y zh@rs4rdu{i|sl4_On?(YPe50$n?dBS35V4jZ23> z$Je^M^-ZDiarM3k>?V>a9X!QNV-ivlx9O3`eL%l|dV6z14U4Sy?es(6_k3u&Gw9H> zOPuYe*psgqgi&gk&xVKwg=~L!NxT@lCWzl=4!(H`FBsQ9^`Mr-90UM)R&1Gy#8kiL zow(d<6KNGI`&jy6o02fSYl_b}vdX``#*3tjSFbGy6r?bsqKh9_F{%;JoZ+75ZsARn z99eP#Z)ksAKW8Ezxgl98XBzgu%;Ky1C*$aKnixmS5~vIG0rv+2Iu-G6-fL8f?p?{~8@e5Q;5WIY#aofcc z!+}%j#}p;GFGGTVf+UXAoD@ohTwwW4as2UpG&f`&#cadvraD85XV|PCEAdP~;d53} z%Q=d|=MwyvE_X+qsbZ5%F=eJLbROd;gygyu0^5Ll~AmF-taZWsNwoF#{N0@d6i#sX5S(aX7145O`pS_`kKK3TD|? zMXZDoLG{>xV?jFmDp+3u4?B-UI%=bSL4cvg{Ms8pYKU@*VGTD6f@0<%0euKCWLQ(< zOc0G?(1P$6&#cWY*aDceD{~*#85>J*qB^T(k?X&n(Kqr$nHe-?QGyB%^^?@z%VtXf zRjr3Asj?4Mi-j(liofc=RXYqs)-az4WHlX5^d1?tE1~><#aXaVBg&)CI-A`GDa}JiMgj zlGgC|5>(8$h6r zOC81;E@X&5z#gICi={-K-}wkED?jF%!H#z)i|$?_mhstWaKhB^=rKeNUf~VT{^3*9 zjzoz-mKRTh{3F~Jn;!k%KwRIe%G6Y`>i0GY(9;<+ zbG73t)KTvmM#P8Q<4pJ--F$t;!nr=Bk=D$vZ@1Zl(K%$kT9SS+eN;LU_Y}!}2M+1P zv(7IYX?Cl!ADKF14ZZ4?&Q!g@E~{_2K}#4gKsH9rk+^){i&9QlO&9a;t{3(s zW=%yXR41D?i2kbh##8zXR0W9;4#ZQpH&pHCPnikSim|vZWBc*e9=_65pi+1^ zdmPZS3A(0pVKFaj$yXekveEj`Ao9FHJk+;cqp@MjmQ1$v@Y>v-ERJ6J;7!I3tu+lu zn=Y%cz1se3f!8sHg;4Hp%X}59CubI^AFW}B%H8saKSNQN| zMIiOPj%Pv1^6}GD<;`*5k8hL}FstBNnBV;q!DiKD@p9TUNI+rKWg#J_{{g4(-hetW zom6m%)*w`XVW`>Z#vcCgc;!b~Eh<<`QEO<)+V}YNTuy%+O6yPn-Wag~5r5;7X8ktX9Ok9MjRe3W&hsM}u>HC7} zLjJ(e2pwPv|GpV1H#kJb{G{(`yPY}o#98Hvqhel=O@9`so*{Q9cxLg)zrvcMU&r-6 zZ{^`q!WBXvkVn}9T)q2!%(I^?OyU+_j)K3x&&HHC$Yn>LiR5(Y41p7r{rIO!f705> zYrUk9W_mVqsghQ?8rcFv9ghDhhcPr`8;omRN3?z6ua}Xq{7yBAL2l9Bfq2_b=b8$Y z4u=L-42}C%&R^k=04gj)vdHdN80bx}0Irn!ZTFSu4jFGG)YsP%+D&aG8SE>#k> zquQQk1-BkC`ZdR_q6qkk%e;T?b+tOz@S`*34M}ZL>4NKScyAzkozq?AW({3{`OVGK z9M1QnYuIG3PS@nyWtnx&(>S_fel}$=g5$?kbhS+8R~5ah&rvFHo0Z~$J%cUQlqUD1 zrA3b<=)^PxJyS;~WhY)WYSeUX?R0IcHQbDh#AH#IzQp*|$5o&&ae`ObjE(Vho84x# zkRovm{|65x93kM*_pi2!NIZ#x!&zNRJlW*1J5#cI4?HY#y)xRAJahbAfe}>Q>CmZk zWn-l?omhg_#IeYrbQ)XM_#GFMv&l0lIC`K#Hp=EgN>|v%%0`zLw!VRLv4Q#mp+Oe5 zkzPIhh&GB#O1i9RQ~Suq%F5P8*)roaN$b3;pivR$ApvZy&84e1t-KGO}^{ zk#cySuZ0JGV5OMm*ZOF>S@bBz=(t2YGt=swN+L7cKEI|FF=u7>u-M2rNWq}6Ia*$N=y{gDKzZE3XSrZq8If%>sI+%X$+MbncYqoh0h1>IZJGy@M6FuZ15Wush% z_10#0MOir(JRFc0)}z)n!k>&X$BnM?4`3%N5VG1spK6*N;+X8(FgR^i-Cd}dr^=k` zJi?x4Txe5KHIXzMu8s-~wEr*q2Xc{>iZ=$)qq^<6D&1TlB{WS0k z z@PII*SY{}VNfx_s$sZf__0jQ>(fOCV5SWGbh}HHnoLHOX#1dA@B-wNYKBw7v>d|?Y zgLO$%sp4@e)$ploJuRLV=h1N@{m7%&}i`RMpr?|HV$%yU@qAb&c5zt!GNt07_lKbj}Ze0bziZ@l!! 
zjJ2|l7*)ywR{^#Um_`fe1}|)MLEIU3{Y7n zHDcK5?!67R>QCwj(sG1h9n>~FJVdPzVewXwxR~Casu^m>p~@*+D(>xy@e|{TT+>_G z>Mj=rH-;K1uC&7&7a4En4>m&aBR>94_hJs_JdLTe)U5nNT%~zD?-%%kVnTqMJNqu# z*?p~_DV6@Ly+U>sUW>g0(n(v`2^h-vj9mB~wP|V@|2lBbP3Sor;Zg8mVuY7TT1r0a zp!+IGco(){sWtWaU>h_V0H)PA&C}~(xi(}UtjQXlL`UwT;wzZ06w6CPODRA(1j^~+_7WVQQOFcg~9ZN>g=G%U5Jsc2T3RI`I04j6`ZTzH> zlX|m@IH&s1c_8c5Sc{%zIwiDm&v2}gup5kQ>UXl)X|6c`PTGR>+o`-9yp%7D>>hU; zTN?P};rEh0^SW8CNXj2RyuRy=E}m00%X{duh_-uz(ABF~m9ZnXf;8u)a-bDUVHkmvSA%5r@I226g= z-c*h$rOnek_JzmorP)frQ31zj4~u>lgl}*J2OX-8#R+F`S0>b0JkjOSK;L|tgPF^O z$L)7L%qz7Idxbk}0zmWj5RJ=*;cchv0{ltc3NQfM^B7GJro|Heozl~Zhxm=%xyAN!!p}~CDQ@F~JqAwC+-Cb-}5s3|n zDfSF&JXLIXBuG9HH7iuBFtapKg}f>h_T1o(#%}USETY0YOm!(+C;U_mGXN5Dq#O;6 zY%1RQ;A@qX!L>vbp0P!o*~Q{lk3on2h7EK2s>vYcJzIxPhuYO{pLf?rMf5IxOPehJ z|Mkss{3mm(&Qns+EI|N*y6(Qp^eY%MSAup85jfYstpB`GOhE3xlL5pm9G#qrxc=)G z7*LayU;ByN@uWWf9Z~b@{7LFtSPYmVr0CDFuLlMwoDJ#AamIzWSG;Frn?b?JB&t+l z##iI#UTv?NWXFC1+eF_X{GEs(h+vW9H}4gc$rnQlw6oO+RomI%Ga%gd$Q~(<~ z37q(3V0kUzytafAX)oU|>x|GeTbCfGdwp4VdF!fI{6ToZKk!_cTeBA*nIeR?`?9p{ z>|1Tdkf@;Y!JJVGH6LHddA{x83w*7E9Aw6gw`S%%o4M-CPRPCM-H$$nUUIQ_* zLg=gKVa!5vcH#nB?M~sJ)mk6COhC}}-G0k8_G{{#`ulGh>Y6Y0x(Dfs(Dy9YGuO|5 z9ab>z3?C|jWA9ti&}4tDt#n@qxvI*HnXd8mPWBXbGP;6)7%|fxCJKUHUpHhpA5N~H z`SeAK>T9*b-ROF8G8|_XWx2|ujk>%Xlc|&z0>c2$r6$rxgg|MZ~(!HdyP=%1j@OOF&jm|_r zG>E_T#J8*DqV+*>CwUfkKPL3=Cpwmg9akPR4-}9`a#JHI^{0;Q&!^msh6dzc60yXD7voYxMkt9iVIKm$2VimXg-d3SLhT5i9FWa1^5vtDAMlY8gHfH3C`8DX zzHIE-=%`YQqI@uXS|1Yc5p@fbn45Jc+*3Sc#<`kxjY}tbY_d@$R)53>2Bprk@A>NI z+3`Q8%utd&c1n)m3juZ+Y;?8pu);hRmV|$GLxYDIMZ>w+XPU*)h|Li`1p7uruX0vC}o!((!p@=l0p*^_QIO;tr|n)B7o= zYyI=++KL@<>htjc=;Hl20}ek_Q5UVA3&#Bf>!KfVlc&sQ-j2_#+LrU=6z>0fmMd~b7|uzlXo zQ_i0cDQ$-hBk2@@uT@MniBCBF81p}gGuR-#r>r{`-18yf+h;5uhWI*I)gT- z{AAt$+_ec0xfuVufaG|TLj{gZ@l}!&>ZiV=`-QmUME#1*lst53@CjTVWGYBq2TavW z>G1f=d}7Tj^ExC=*m4CtQOG}rNVm`hW_7Tyx8n-O%B|1tmt6mAZK8TH)PPlY9x>Ij zqQ*E2aIf93y^;8XjX3fboie8PaJ@{Ig+jdwz@SPfVrYwFpiSsxd`BQXNjUv2~) zn=;nTf+6*wizx|~Lo0!{DBf?S?-aYmtk(uNdK(Zg_DZ=l&l6T`bBM`2&QOI7zFp{X zbR$C&UlxVrmr3fee~1^1Pdjn6H}eh`^t~=k1vLI}tnBTvTEGTjAmu+BAYwQk$UGuw+n1TM%9W3{$AB*8D+G zrI)Td%8bz<{|wp4n+BLtX8G%Cu1~Y11b|Oo$V~t7)8*gT4n?V=rbn-65Crv z!;Dto<&TZ{@lq@HsJW)Bze$a#kW@ruj=%Ho1#@(m5>rl3)YwgkLg&5)7>NXApj~VF z0Y(Cq47E#j1Hl~nwiBXIybv|G73YY_v}R|6WVWl+|Ngm|1(F)mQ8`u$RtZ8-NWJkv zr{NuszS2{kVXT>2LR~#}c9-Vf$L+YOlkhMv_bSD79j%a1Sx`&h&$ecEk}7pRF58@& zBu)BKfQKy+9yF)H(T~}%()NBdfJY z^SoS!=5$!pa2V<<^1uwfgBcXf-Q12@y#$-^s*wXlH?lCg&W)O4PZF8?-jxQV5Qn1- z+SYt#~@)vL0z(MEe@Zfe+ES|J=6!VaP|aQ@9()#&d{)$+7;F?O3wdL0e%8j>~0qVm*iI5L%?-#T;M5oRw}ca*ORER2PquaKZM zc7}+?rBr^cp5bR}IQ2Lhc=AOiRLy9qIN74jo@@~q+`I|Nem{#rJgB+8FEHamX}lx9 zgQ7--Wc&7?B{N(>02TipDD|YP#FI^sgIQ{pi6D))irkt^a~ArF5acn+sdJtZ9DxNT zGJy)S$BI&RgY5_}qi*!F@{=ORhlG7-xE@|g)A-3Rpn9p&~P!4gwxGMR`epqYrz5g{zb}kRyd7I%Rv-=s{1N zl_`@>rHR>OXj!{P6s} zd`NGpQ&@Q!{}lg6ryI>!=|} zUTW7C$7H9?5ycra4E?PqgFb6S@rGyU@Qu>`e9Nrgz+P(B^A^Pn>s+y_5s<*3=VN0( z{*Zw-)2p};qkC?Q7AuGKN-18;SI( zWZ6kX)jEoC3WlhN+uA&qRFvfYaF|T0u2+97TV>+y-|q4g=^J+$qw*ki`JgaXY2!#D z5dz!<(BgO})HSbgn&m4IYfOP(rCxglo`HNU`jW=@jQwi9Y;uCsB;s@*9hKs?K2nyP zg6+hZ4RIgFfK~zz+fKqFfy>T}st#WDatk&;5-c*C>@Ib0&~3U-uh1%x0ZU~*rvh*8 z5EXw*M`#wPNO3QbD-}!Lpwp5+Zm<0J_xMdA@K+g}2p38AhZ$*a&?t3)Zx0B)2ky{! 
z|L)-3k>Y_V%Qu|VOJxU=os(sch`gTscgpOZ;dng}eE)QMCZ7TZQ#G3w?TM5f^$)eB2Unt4WzcSQCzB5cPeRDmv@co5@ z4fNlk?EANKe8pTYND_YuwoCt$moCfzrsG*r*|&T?n6q(pFVfy~5&ApQi2~mtfgUzl zMZsmYZakW!(bA`aZwxK;M@^=)L7D<%hlrjUS7)13*@FR^W)ey&>Jo%mpYtH7kLv0# z3G17Pn*{$!^M!m;Kc{yB6N2j6gGXbrAz;Xzd)g(X*!1k|&X&ASTO7@X z+k5HDDMu(cfuJ1LQ7J+i<&uuM*F#N zDE_J0P|@sT>p)bdRa;ji{!4X)2v9>XxuiL{;^@?@*>!`nqRpN#g^H}0`nNBckF|}9 z&IOu9KEk656HzCH9QN$3O`m~DU#u_L@XJ3+ z68aaz`{Ce>-UtpfQ#F>hBzKsYO=Q~*e@am}w-IhO4h$Wr2T`MPxGQOnWIz>8;_#*a zRmZ&5fm)@S3#XFYz`$`Z#!hUKe{ZMu{MT<>TLT$!&Xj-NK5XU0^(F)rv_EJUFc?Zi!WU{gKn3k3t*xS>$O^T&fn)_{`Yd5367Pp2Ga(Q}l z-fvJF6Kv>$CCjSLZA0jfp@2ob>LdY?zq1QhqjSMj;4PQaR(FcJalOl`!Wwd)MI_0^ zeWw<*p_gy-5ZN!(zII85Jb!xm@ykrvl5FEbe`-LljpwDmxys>HSZ{`|T4OAJbJtt* zz-m8sH13+qz9%(4*sI;o3d@No1O;Ul{8FljdI#vx=_k@cFJnL!^mXVF9 z3evQZZkU<#zOMElLswzsrySLrwjdCgsk|s_ncR4yt4|)CfRhwt6(+n={*y)wzci4` zB`|Wtdp0XRia#w2X9B#^PSKZdapPO6WEwIYDDI+}T%*=Xeue)!d9TXb2Z`!os&xrk zIwDT4of~fY@-Yei?E5IyT>QJAnmt`OCoCuT01O&&gqONTm6H!dtIp17kjaGb;#Fcn z%`trKSTH z)8~0(M5`sENPm_LZG2L#_sgT116paMoZQq|b5}cXys^G0jU`0EBB02B$S#%6amoE*0D9!#^`~3D=sFw+D=cT9+XwBG!mO;sv^vAvY+IGxuyN|q+&Fvbl8C-r?;E} zi01_;rA0jDBIxN~CMF_$vLG-8iZvCBd;B+2#PAV$H#x>td+-bwUa*vqzODj%Q!QO|hxP);{qCJmdIW7Jn_M zWJvA}Y>yJK9B>w$IuYqgP049Xlcd%b1S&@;HWMNFWQF1oOi56fW=^dYjIgKYzV;p5pi$hI)i6;3> zOI!@~+CYwG#dp`npzWiBXIB<{LZ|0}ZwdU<`Q-;d89t_Z0Sp>-wIQ_l!b?F_5AYJm zccIx*vT$()PabVIlIMLkn-O@{rN8of61GLAhHoF51WF_GZ}6==GQK6sHwT6OrT?vj zta~#V#~U4-;GYs5ZTs(3ujQeiU&619m87@9^~AVkwKH6gO#{KvMPQQ}8N^TV>E*jj z26Fs)6L(okMubN zS9-sE9eQ@qtm|NM&>lILiHh%3mFciAFwm4poLTOL{3%h!Jy|A9nB610sOG9i#h;Xc z86OVw3J;_+-L&y>*;7~?;fRJb{G*cFCf%^3+kfp?RFEo z`kRJs;6_CIUFK-?tduFjS#VRnqB5|JJ36HqlL_3~0k3I9wW$t|F+N}Cr&9K$CNqHH zr@JCwBs`&z{9g`K7uB@nxueO8PVKjB6^Kl)CnBVkQEMqhIXH@~lGsG+YmG zT-b1>qOhlVYbntm>tW0qrGA8uC}1!Oo#k>&0l&c{sS;(M%9k*goq&qEE?26IUi4-n z6vLG|{c73P9Dzu6pQ0L)P^ze?+{Iacb$iBnqZ)IU^u@7v8STsO-=}?yNg|x(D$-gJ zn@#Ay+ag}Mf4tz!YJ}R*ks6ZgLL&tM`*Vc>&2(XOCq~Wx+Bs`hTIU6JW)lwhuoLNu z3nrs%tE*+2C$L!;y+tmr8fdFwiD`Ovg>@vhMLdnp`6evj>9S%oJaI>d0p;>qR(sv> zNmr(bWysYV%kT>@Z+0|>#Ev`n2C(sF`zGe8&idWBS(9aRe#V26zwpS|vrLQu|1;0~ zd8362@etWv?YTcRT4K}FRsH2_WSs~7w@7D;Hh0QfJ7C*nB!e}A1@gN3$i-#*QSC!< zd+vBZg3s1hLmluIs{~N$ZJtTcAU2`fSDrv7HII1}ZAMic2b3o?RR~kn6mA*pynga^ z54&R0ls;M=dqomb>C{lUZo6bPs(eoUY2FiYx*udgD&q6*Eq)#j~N9l^3OY1!(#UUD#Yw`cDmetJW0#hjTW@(6^FyhzRwe77jl zcy!3~o>I4fw%VIC3-{t2P{<4$nd?DGXQR`v4c_3=*^D=u+5G($Zn=)+{Vjcjk)0o5 z#s*!KM1Vy%M_xldD9_dC1uweWc zQ3SP7SDNP*)6W)cyA^tz6KypttfbaC8go;=7OJ?b2Z2+MLT)e!8E43;f75lKkEM<9 zj2&qDT8fnvj)(kCa+PDewt+4jxWm7nv#>9=fz98}@ia ze!)`e$B`6LIa#@dMWGCjJU42DA~EkTKNjc?sPJmp?YHiH2*Ux&)vrsxNHjc$b8Bx& z9B}>GPngh?2qtmksgTsJi#rBi1Z|*_;aRB5bxpk`T0O*CouBty{zMxgugTdgp^!F# zK7tptzn<&36StlKAnvtBT){Td{+~V<0+CuBi`-s~2sMrYN+4IoSWe+W)7h>;DhZHUEEj&HodE+UUzy3bEk2JFHYJEG#=e li8y# ## 4. 
## 4. Run Simple Example Workflows
```
-argo submit --watch https://raw.githubusercontent.com/argoproj/argo/master/examples/hello-world.yaml
-argo submit --watch https://raw.githubusercontent.com/argoproj/argo/master/examples/coinflip.yaml
-argo submit --watch https://raw.githubusercontent.com/argoproj/argo/master/examples/loops-maps.yaml
+argo submit --watch https://raw.githubusercontent.com/cyrusbiotechnology/argo/master/examples/hello-world.yaml
+argo submit --watch https://raw.githubusercontent.com/cyrusbiotechnology/argo/master/examples/coinflip.yaml
+argo submit --watch https://raw.githubusercontent.com/cyrusbiotechnology/argo/master/examples/loops-maps.yaml
argo list
argo get xxx-workflow-name-xxx
argo logs xxx-pod-name-xxx #from get command above
@@ -60,33 +60,36 @@ You can also create workflows directly with kubectl. However, the Argo CLI offer
that kubectl does not, such as YAML validation, workflow visualization, parameter passing,
retries and resubmits, suspend and resume, and more.
```
-kubectl create -f https://raw.githubusercontent.com/argoproj/argo/master/examples/hello-world.yaml
+kubectl create -f https://raw.githubusercontent.com/cyrusbiotechnology/argo/master/examples/hello-world.yaml
kubectl get wf
kubectl get wf hello-world-xxx
kubectl get po --selector=workflows.argoproj.io/workflow=hello-world-xxx --show-all
kubectl logs hello-world-yyy -c main
```
-Additional examples are available [here](https://github.com/argoproj/argo/blob/master/examples/README.md).
+Additional examples are available [here](https://github.com/cyrusbiotechnology/argo/blob/master/examples/README.md).

## 5. Install an Artifact Repository

Argo supports S3 (AWS, GCS, Minio) as well as Artifactory as artifact repositories. This tutorial
uses Minio for the sake of portability. Instructions on how to configure other artifact repositories
-are [here](https://github.com/argoproj/argo/blob/master/ARTIFACT_REPO.md).
+are [here](https://github.com/cyrusbiotechnology/argo/blob/master/ARTIFACT_REPO.md).
```
-brew install kubernetes-helm # mac
-helm init
-helm install stable/minio --name argo-artifacts --set service.type=LoadBalancer --set persistence.enabled=false
+helm install stable/minio \
+  --name argo-artifacts \
+  --set service.type=LoadBalancer \
+  --set defaultBucket.enabled=true \
+  --set defaultBucket.name=my-bucket \
+  --set persistence.enabled=false
```
Login to the Minio UI using a web browser (port 9000) after exposing obtaining the external IP using `kubectl`.
```
-kubectl get service argo-artifacts-minio -o wide
+kubectl get service argo-artifacts -o wide
```
On Minikube:
```
-minikube service --url argo-artifacts-minio
+minikube service --url argo-artifacts
```
NOTE: When minio is installed via Helm, it uses the following hard-wired default credentials,
@@ -98,8 +101,8 @@ Create a bucket named `my-bucket` from the Minio UI.

## 6. Reconfigure the workflow controller to use the Minio artifact repository

-Edit the workflow-controller config map to reference the service name (argo-artifacts-minio) and
-secret (argo-artifacts-minio) created by the helm install:
+Edit the workflow-controller config map to reference the service name (argo-artifacts) and
+secret (argo-artifacts) created by the helm install:
```
kubectl edit cm -n argo workflow-controller-configmap
...
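# A sketch, not part of the patch: once the edit above is applied, the artifactRepository
# section of the configmap ends up looking roughly like this, assuming Minio was installed
# into the default namespace with the release name argo-artifacts and the bucket my-bucket
# used above. The diff hunk that follows is the authoritative change.
artifactRepository:
  s3:
    bucket: my-bucket
    endpoint: argo-artifacts.default:9000
    insecure: true
    accessKeySecret:
      name: argo-artifacts
      key: accesskey
    secretKeySecret:
      name: argo-artifacts
      key: secretkey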
@@ -108,18 +111,18 @@ data: artifactRepository: s3: bucket: my-bucket - endpoint: argo-artifacts-minio.default:9000 + endpoint: argo-artifacts.default:9000 insecure: true # accessKeySecret and secretKeySecret are secret selectors. - # It references the k8s secret named 'argo-artifacts-minio' + # It references the k8s secret named 'argo-artifacts' # which was created during the minio helm install. The keys, # 'accesskey' and 'secretkey', inside that secret are where the # actual minio credentials are stored. accessKeySecret: - name: argo-artifacts-minio + name: argo-artifacts key: accesskey secretKeySecret: - name: argo-artifacts-minio + name: argo-artifacts key: secretkey ``` @@ -129,7 +132,7 @@ namespace you use for workflows. ## 7. Run a workflow which uses artifacts ``` -argo submit https://raw.githubusercontent.com/argoproj/argo/master/examples/artifact-passing.yaml +argo submit https://raw.githubusercontent.com/cyrusbiotechnology/argo/master/examples/artifact-passing.yaml ``` ## 8. Access the Argo UI diff --git a/docs/README.md b/docs/README.md new file mode 100644 index 000000000000..d0f3d87ad6ed --- /dev/null +++ b/docs/README.md @@ -0,0 +1,9 @@ +# Argo Documentation + +## [Getting Started](../demo.md) + +## Features +* [Controller Configuration](workflow-controller-configmap.yaml) +* [RBAC](workflow-rbac.md) +* [REST API](rest-api.md) +* [Workflow Variables](variables.md) diff --git a/docs/example-golang/main.go b/docs/example-golang/main.go new file mode 100644 index 000000000000..fb790e3076c0 --- /dev/null +++ b/docs/example-golang/main.go @@ -0,0 +1,77 @@ +package main + +import ( + "flag" + "fmt" + "os" + "path/filepath" + + "github.com/argoproj/pkg/errors" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + wfclientset "github.com/cyrusbiotechnology/argo/pkg/client/clientset/versioned" + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/fields" + "k8s.io/client-go/tools/clientcmd" +) + +var ( + helloWorldWorkflow = wfv1.Workflow{ + ObjectMeta: metav1.ObjectMeta{ + GenerateName: "hello-world-", + }, + Spec: wfv1.WorkflowSpec{ + Entrypoint: "whalesay", + Templates: []wfv1.Template{ + { + Name: "whalesay", + Container: &corev1.Container{ + Image: "docker/whalesay:latest", + Command: []string{"cowsay", "hello world"}, + }, + }, + }, + }, + } +) + +func main() { + // use the current context in kubeconfig + kubeconfig := flag.String("kubeconfig", filepath.Join(os.Getenv("HOME"), ".kube", "config"), "(optional) absolute path to the kubeconfig file") + flag.Parse() + + // use the current context in kubeconfig + config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig) + checkErr(err) + namespace := "default" + + // create the workflow client + wfClient := wfclientset.NewForConfigOrDie(config).ArgoprojV1alpha1().Workflows(namespace) + + // submit the hello world workflow + createdWf, err := wfClient.Create(&helloWorldWorkflow) + checkErr(err) + fmt.Printf("Workflow %s submitted\n", createdWf.Name) + + // wait for the workflow to complete + fieldSelector := fields.ParseSelectorOrDie(fmt.Sprintf("metadata.name=%s", createdWf.Name)) + watchIf, err := wfClient.Watch(metav1.ListOptions{FieldSelector: fieldSelector.String()}) + errors.CheckError(err) + defer watchIf.Stop() + for next := range watchIf.ResultChan() { + wf, ok := next.Object.(*wfv1.Workflow) + if !ok { + continue + } + if !wf.Status.FinishedAt.IsZero() { + fmt.Printf("Workflow %s %s at %v\n", wf.Name, wf.Status.Phase, wf.Status.FinishedAt) + break + 
} + } +} + +func checkErr(err error) { + if err != nil { + panic(err.Error()) + } +} diff --git a/docs/releasing.md b/docs/releasing.md new file mode 100644 index 000000000000..4a71834f56de --- /dev/null +++ b/docs/releasing.md @@ -0,0 +1,38 @@ +# Release Instructions + +1. Update CHANGELOG.md with changes in the release + +2. Update VERSION with new tag + +3. Update codegen, manifests with new tag + +``` +make codegen manifests IMAGE_NAMESPACE=argoproj IMAGE_TAG=vX.Y.Z +``` + +4. Commit VERSION and manifest changes + +5. git tag the release + +``` +git tag vX.Y.Z +``` + +6. Build the release + +``` +make release IMAGE_NAMESPACE=argoproj IMAGE_TAG=vX.Y.Z +``` + +7. If successful, publish the release: +``` +export ARGO_RELEASE=vX.Y.Z +docker push argoproj/workflow-controller:${ARGO_RELEASE} +docker push argoproj/argoexec:${ARGO_RELEASE} +docker push argoproj/argocli:${ARGO_RELEASE} +git push upstream ${ARGO_RELEASE} +``` + +8. Draft GitHub release with the content from CHANGELOG.md, and CLI binaries produced in the `dist` directory + +* https://github.com/argoproj/argo/releases/new diff --git a/docs/rest-api.md b/docs/rest-api.md new file mode 100644 index 000000000000..c88a5698de7c --- /dev/null +++ b/docs/rest-api.md @@ -0,0 +1,40 @@ +# REST API + +Argo is implemented as a kubernetes controller and Workflow [Custom Resource](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/). +Argo itself does not run an API server, and with all CRDs, it extends the Kubernetes API server by +introducing a new API Group/Version (argorproj.io/v1alpha1) and Kind (Workflow). When CRDs are +registered in a cluster, access to those resources are made available by exposing new endpoints in +the kubernetes API server. For example, to list workflows in the default namespace, a client would +make an HTTP GET request to: `https:///apis/argoproj.io/v1alpha1/namespaces/default/workflows` + +> NOTE: the optional argo-ui does run a thin API layer to power the UI, but is not intended for + programatic interaction. + +A common scenario is to programatically submit and retrieve workflows. To do this, you would use the +existing Kubernetes REST client in the language of preference, which often libraries for performing +CRUD operation on custom resource objects. + +## Examples + +### Golang + +A kubernetes Workflow clientset library is auto-generated under [argoproj/argo/pkg/client](https://github.com/argoproj/argo/tree/master/pkg/client) and can be imported by golang +applications. See the [golang code example](example-golang/main.go) on how to make use of this client. + +### Python +The python kubernetes client has libraries for interacting with custom objects. See: https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CustomObjectsApi.md + + +### Java +The Java kubernetes client has libraries for interacting with custom objects. See: +https://github.com/kubernetes-client/java/blob/master/kubernetes/docs/CustomObjectsApi.md + +### Ruby +The Ruby kubernetes client has libraries for interacting with custom objects. See: +https://github.com/kubernetes-client/ruby/tree/master/kubernetes +See this [external Ruby example](https://github.com/fischerjulian/argo_workflows_ruby_example) on how to make use of this client. + +## OpenAPI + +An OpenAPI Spec is generated under [argoproj/argo/api/openapi-spec](https://github.com/argoproj/argo/blob/master/api/openapi-spec/swagger.json). 
This spec may be +used to auto-generate concrete datastructures in other languages. diff --git a/docs/variables.md b/docs/variables.md index cabf49084437..95e078135e17 100644 --- a/docs/variables.md +++ b/docs/variables.md @@ -28,6 +28,9 @@ The following variables are made available to reference various metadata of a wo | Variable | Description| |----------|------------| | `pod.name` | Pod name of the container/script | +| `inputs.artifacts..path` | Local path of the input artifact | +| `outputs.artifacts..path` | Local path of the output artifact | +| `outputs.parameters..path` | Local path of the output parameter | ## Loops (withItems / withParam) | Variable | Description| @@ -43,6 +46,8 @@ The following variables are made available to reference various metadata of a wo | `workflow.uid` | Workflow UID. Useful for setting ownership reference to a resource, or a unique artifact location | | `workflow.parameters.` | Input parameter to the workflow | | `workflow.outputs.parameters.` | Input artifact to the workflow | +| `workflow.annotations.` | Workflow annotations | +| `workflow.labels.` | Workflow labels | | `workflow.creationTimestamp` | Workflow creation timestamp formatted in RFC 3339 (e.g. `2018-08-23T05:42:49Z`) | | `workflow.creationTimestamp.` | Creation timestamp formatted with a [strftime](http://strftime.org) format character | diff --git a/docs/workflow-controller-configmap.yaml b/docs/workflow-controller-configmap.yaml index de5a64893096..5f2e6bd2ba52 100644 --- a/docs/workflow-controller-configmap.yaml +++ b/docs/workflow-controller-configmap.yaml @@ -15,13 +15,25 @@ data: instanceID: my-ci-controller # namespace limits the controller's watch/queries to a specific namespace. This allows the - # controller to run with namespace scope (role), instead of cluster scope (clusterrole). + # controller to run with namespace scope (Role), instead of cluster scope (ClusterRole). namespace: argo # Parallelism limits the max total parallel workflows that can execute at the same time - + # (available since Argo v2.3) parallelism: 10 + # uncomment flowing lines if workflow controller runs in a different k8s cluster with the + # workflow workloads, or needs to communicate with the k8s apiserver using an out-of-cluster + # kubeconfig secret + # kubeConfig: + # # name of the kubeconfig secret, may not be empty when kubeConfig specified + # secretName: kubeconfig-secret + # # key of the kubeconfig secret, may not be empty when kubeConfig specified + # secretKey: kubeconfig + # # mounting path of the kubeconfig secret, default to /kube/config + # mountPath: /kubeconfig/mount/path + # # volume name when mounting the secret, default to kubeconfig + # volumeName: kube-config-volume # artifactRepository defines the default location to be used as the artifact repository for # container artifacts. @@ -37,6 +49,8 @@ data: endpoint: s3.amazonaws.com bucket: my-bucket region: us-west-2 + # insecure will disable TLS. Primarily used for minio installs not configured with TLS + insecure: false # keyFormat is a format pattern to define how artifacts will be organized in a bucket. # It can reference workflow metadata variables such as workflow.namespace, workflow.name, # pod.name. Can also use strftime formating of workflow.creationTimestamp so that workflow @@ -51,10 +65,8 @@ data: /{{workflow.creationTimestamp.d}}\ /{{workflow.name}}\ /{{pod.name}}" - # insecure will disable TLS. 
used for minio installs not configured with TLS - insecure: false # The actual secret object (in this example my-s3-credentials), should be created in every - # namespace which a workflow wants which wants to store its artifacts to S3. If omitted, + # namespace where a workflow needs to store its artifacts to S3. If omitted, # attempts to use IAM role to access the bucket (instead of accessKey/secretKey). accessKeySecret: name: my-s3-credentials @@ -64,24 +76,36 @@ data: key: secretKey # Specifies the container runtime interface to use (default: docker) + # must be one of: docker, kubelet, k8sapi, pns containerRuntimeExecutor: docker # kubelet port when using kubelet executor (default: 10250) kubeletPort: 10250 - # disable the TLS verification of the kubelet executo (default: false) + # disable the TLS verification of the kubelet executor (default: false) kubeletInsecure: false - # executorResources specifies the resource requirements that will be used for the executor - # sidecar/init container. This is useful in clusters which require resources to be specified as - # part of admission control. - executorResources: - requests: - cpu: 0.1 - memory: 64Mi - limits: - cpu: 0.5 - memory: 512Mi + # executor controls how the init and wait container should be customized + # (available since Argo v2.3) + executor: + imagePullPolicy: IfNotPresent + resources: + requests: + cpu: 0.1 + memory: 64Mi + limits: + cpu: 0.5 + memory: 512Mi + # args & env allows command line arguments and environment variables to be appended to the + # executor container and is mainly used for development/debugging purposes. + args: + - --loglevel + - debug + - --gloglevel + - "6" + env: + - name: SOME_ENV_VAR + value: "1" # metricsConfig controls the path and port for prometheus metrics metricsConfig: diff --git a/errors/errors.go b/errors/errors.go index 22177ccaa4f1..1fbe662557a7 100644 --- a/errors/errors.go +++ b/errors/errors.go @@ -14,7 +14,7 @@ const ( CodeBadRequest = "ERR_BAD_REQUEST" CodeForbidden = "ERR_FORBIDDEN" CodeNotFound = "ERR_NOT_FOUND" - CodeNotImplemented = "ERR_NOT_INPLEMENTED" + CodeNotImplemented = "ERR_NOT_IMPLEMENTED" CodeTimeout = "ERR_TIMEOUT" CodeInternal = "ERR_INTERNAL" ) diff --git a/errors/errors_test.go b/errors/errors_test.go index c787b52b8bc7..b70ef9e104d0 100644 --- a/errors/errors_test.go +++ b/errors/errors_test.go @@ -4,7 +4,7 @@ import ( "fmt" "testing" - "github.com/argoproj/argo/errors" + "github.com/cyrusbiotechnology/argo/errors" pkgerr "github.com/pkg/errors" "github.com/stretchr/testify/assert" ) diff --git a/examples/README.md b/examples/README.md index 4d476b832cf5..302adeb0839e 100644 --- a/examples/README.md +++ b/examples/README.md @@ -4,13 +4,13 @@ Argo is an open source project that provides container-native workflows for Kubernetes. Each step in an Argo workflow is defined as a container. -Argo is implemented as a Kubernetes CRD (Custom Resource Definition). As a result, Argo workflows can be managed using kubectl and natively integrates with other Kubernetes services such as volumes, secrets, and RBAC. The new Argo software is lightweight and installs in under a minute but provides complete workflow features including parameter substitution, artifacts, fixtures, loops and recursive workflows. +Argo is implemented as a Kubernetes CRD (Custom Resource Definition). As a result, Argo workflows can be managed using `kubectl` and natively integrates with other Kubernetes services such as volumes, secrets, and RBAC. 
The new Argo software is light-weight and installs in under a minute, and provides complete workflow features including parameter substitution, artifacts, fixtures, loops and recursive workflows. -Many of the Argo examples used in this walkthrough are available at https://github.com/argoproj/argo/tree/master/examples. If you like this project, please give us a star! +Many of the Argo examples used in this walkthrough are available at https://github.com/cyrusbiotechnology/argo/tree/master/examples. If you like this project, please give us a star! For a complete description of the Argo workflow spec, please refer to https://github.com/argoproj/argo/blob/master/pkg/apis/workflow/v1alpha1/types.go -## Table of Content +## Table of Contents - [Argo CLI](#argo-cli) - [Hello World!](#hello-world) @@ -32,14 +32,15 @@ For a complete description of the Argo workflow spec, please refer to https://gi - [Sidecars](#sidecars) - [Hardwired Artifacts](#hardwired-artifacts) - [Kubernetes Resources](#kubernetes-resources) -- [Docker-in-Docker (aka. DinD) Using Sidecars](#docker-in-docker-aka-dind-using-sidecars) -- [Continuous integration example](#continuous-integration-example) +- [Docker-in-Docker Using Sidecars](#docker-in-docker-using-sidecars) +- [Custom Template Variable Reference](#custom-template-variable-reference) +- [Continuous Integration Example](#continuous-integration-example) ## Argo CLI -In case you want to follow along with this walkthrough, here's a quick overview of the most useful argo CLI commands. +In case you want to follow along with this walkthrough, here's a quick overview of the most useful argo command line interface (CLI) commands. -[Install Argo here](https://github.com/argoproj/argo/blob/master/demo.md) +[Install Argo here](https://github.com/cyrusbiotechnology/argo/blob/master/demo.md) ```sh argo submit hello-world.yaml # submit a workflow spec to Kubernetes @@ -50,24 +51,26 @@ argo logs hello-world-xxx-yyy # get logs from a specific step in a workflow argo delete hello-world-xxx # delete workflow ``` -You can also run workflow specs directly using kubectl but the argo CLI provides syntax checking, nicer output, and requires less typing. +You can also run workflow specs directly using `kubectl` but the Argo CLI provides syntax checking, nicer output, and requires less typing. + ```sh kubectl create -f hello-world.yaml kubectl get wf kubectl get wf hello-world-xxx -kubectl get po --selector=workflows.argoproj.io/workflow=hello-world-xxx --show-all #similar to argo +kubectl get po --selector=workflows.argoproj.io/workflow=hello-world-xxx --show-all # similar to argo kubectl logs hello-world-xxx-yyy -c main kubectl delete wf hello-world-xxx ``` ## Hello World! -Let's start by creating a very simple workflow template to echo "hello world" using the docker/whalesay container image from DockerHub. +Let's start by creating a very simple workflow template to echo "hello world" using the docker/whalesay container image from DockerHub. -You can run this directly from your shell with a simple docker command. -``` +You can run this directly from your shell with a simple docker command: + +```sh bash% docker run docker/whalesay cowsay "hello world" _____________ < hello world > @@ -90,32 +93,33 @@ This message shows that your installation appears to be working correctly. ``` Below, we run the same container on a Kubernetes cluster using an Argo workflow template. -Be sure to read the comments. They provide useful explanations. 
+Be sure to read the comments as they provide useful explanations.
+
 ```yaml
 apiVersion: argoproj.io/v1alpha1
-kind: Workflow #new type of k8s spec
+kind: Workflow # new type of k8s spec
 metadata:
-  generateName: hello-world- #name of workflow spec
+  generateName: hello-world- # name of the workflow spec
 spec:
-  entrypoint: whalesay #invoke the whalesay template
+  entrypoint: whalesay # invoke the whalesay template
   templates:
-  - name: whalesay #name of template
+  - name: whalesay # name of the template
     container:
       image: docker/whalesay
       command: [cowsay]
       args: ["hello world"]
-      resources: #don't use too much resources
+      resources: # limit the resources
        limits:
          memory: 32Mi
          cpu: 100m
 ```
-Argo adds a new `kind` of Kubernetes spec called a `Workflow`.
-The above spec contains a single `template` called `whalesay` which runs the `docker/whalesay` container and invokes `cowsay "hello world"`.
-The `whalesay` template is denoted as the `entrypoint` for the spec. The entrypoint specifies the initial template that should be invoked when the workflow spec is executed by Kubernetes. Being able to specify the entrypoint is more useful when there are more than one template defined in the Kubernetes workflow spec :-)
+
+Argo adds a new `kind` of Kubernetes spec called a `Workflow`. The above spec contains a single `template` called `whalesay` which runs the `docker/whalesay` container and invokes `cowsay "hello world"`. The `whalesay` template is the `entrypoint` for the spec. The entrypoint specifies the initial template that should be invoked when the workflow spec is executed by Kubernetes. Being able to specify the entrypoint is more useful when there is more than one template defined in the Kubernetes workflow spec. :-)
 
 ## Parameters
 
 Let's look at a slightly more complex workflow spec with parameters.
+
 ```yaml
 apiVersion: argoproj.io/v1alpha1
 kind: Workflow
@@ -135,28 +139,43 @@ spec:
   - name: whalesay
     inputs:
       parameters:
-      - name: message #parameter declaration
+      - name: message # parameter declaration
     container:
       # run cowsay with that message input parameter as args
       image: docker/whalesay
       command: [cowsay]
       args: ["{{inputs.parameters.message}}"]
 ```
-This time, the `whalesay` template takes an input parameter named `message` which is passed as the `args` to the `cowsay` command. In order to reference parameters (e.g. "{{inputs.parameters.message}}"), the parameters must be enclosed in double quotes to escape the curly braces in YAML.
+
+This time, the `whalesay` template takes an input parameter named `message` that is passed as the `args` to the `cowsay` command. In order to reference parameters (e.g., ``"{{inputs.parameters.message}}"``), the parameters must be enclosed in double quotes to escape the curly braces in YAML.
 
 The argo CLI provides a convenient way to override parameters used to invoke the entrypoint. For example, the following command would bind the `message` parameter to "goodbye world" instead of the default "hello world".
+
 ```sh
 argo submit arguments-parameters.yaml -p message="goodbye world"
 ```
 
-Command line parameters can also be used to override the default entrypoint and invoke any template in the workflow spec. For example, if you add a new version of the `whalesay` template called `whalesay-caps` but you don't want to change the default entrypoint, you can invoke this from the command line as follows.
+When there are multiple parameters that can be overridden, the argo CLI provides a command to load parameter files in YAML or JSON format. Here is an example of that kind of parameter file:
+
+```yaml
+message: goodbye world
+```
+
+To run the workflow with this parameter file, use the following command:
+
+```sh
+argo submit arguments-parameters.yaml --parameter-file params.yaml
+```
+
+Command-line parameters can also be used to override the default entrypoint and invoke any template in the workflow spec. For example, if you add a new version of the `whalesay` template called `whalesay-caps` but you don't want to change the default entrypoint, you can invoke this from the command line as follows:
+
 ```sh
 argo submit arguments-parameters.yaml --entrypoint whalesay-caps
 ```
 
-By using a combination of the `--entrypoint` and `-p` parameters, you can invoke any template in the workflow spec with any parameter that you like.
+By using a combination of the `--entrypoint` and `-p` parameters, you can call any template in the workflow spec with any parameter that you like.
 
-The values set in the `spec.arguments.parameters` are globally scoped and can be accessed via `{{workflow.parameters.parameter_name}}`. This can be useful to pass information to multiple steps in a workflow. For example, if you wanted to run your workflows with different logging levels, set in environment of each container, you could have a set up similar to this:
+The values set in the `spec.arguments.parameters` are globally scoped and can be accessed via `{{workflow.parameters.parameter_name}}`. This can be useful to pass information to multiple steps in a workflow. For example, if you wanted to run your workflows with different logging levels that are set in the environment of each container, you could have a YAML file similar to this one:
 
 ```yaml
 apiVersion: argoproj.io/v1alpha1
@@ -167,7 +186,7 @@ spec:
   entrypoint: A
   arguments:
     parameters:
-    - name: log_level
+    - name: log-level
       value: INFO
 
   templates:
@@ -176,22 +195,23 @@ spec:
       image: containerA
       env:
       - name: LOG_LEVEL
-        value: "{{workflow.parameters.log_level}}"
+        value: "{{workflow.parameters.log-level}}"
       command: [runA]
-
-  - name: B
-    container:
-      image: containerB
-      env:
-      - name: LOG_LEVEL
-        value: "{{workflow.parameters.log_level}}"
-      command: [runB]
+  - name: B
+    container:
+      image: containerB
+      env:
+      - name: LOG_LEVEL
+        value: "{{workflow.parameters.log-level}}"
+      command: [runB]
 ```
 
-In this workflow, both steps `A` and `B` would have the same log level set to `INFO` and can easily be changed between workflow submissions using the `-p` flag.
+In this workflow, both steps `A` and `B` would have the same `log-level` parameter, set to `INFO`, which can easily be changed between workflow submissions using the `-p` flag.
 
 ## Steps
 
-In this example, we'll see how to create multi-step workflows as well as how to define more than one template in a workflow spec and how to create nested workflows. Be sure to read the comments. They provide useful explanations.
+In this example, we'll see how to create multi-step workflows, how to define more than one template in a workflow spec, and how to create nested workflows. Be sure to read the comments as they provide useful explanations.
+ ```yaml apiVersion: argoproj.io/v1alpha1 kind: Workflow @@ -206,19 +226,19 @@ spec: # Instead of just running a container # This template has a sequence of steps steps: - - - name: hello1 #hello1 is run before the following steps + - - name: hello1 # hello1 is run before the following steps template: whalesay arguments: parameters: - name: message value: "hello1" - - - name: hello2a #double dash => run after previous step + - - name: hello2a # double dash => run after previous step template: whalesay arguments: parameters: - name: message value: "hello2a" - - name: hello2b #single dash => run in parallel with previous step + - name: hello2b # single dash => run in parallel with previous step template: whalesay arguments: parameters: @@ -235,11 +255,10 @@ spec: command: [cowsay] args: ["{{inputs.parameters.message}}"] ``` -The above workflow spec prints three different flavors of "hello". -The `hello-hello-hello` template consists of three `steps`. -The first step named `hello1` will be run in sequence whereas the next two steps named `hello2a` and `hello2b` will be run in parallel with each other. -Using the argo CLI command, we can graphically display the execution history of this workflow spec, which shows that the steps named `hello2a` and `hello2b` ran in parallel with each other. -``` + +The above workflow spec prints three different flavors of "hello". The `hello-hello-hello` template consists of three `steps`. The first step named `hello1` will be run in sequence whereas the next two steps named `hello2a` and `hello2b` will be run in parallel with each other. Using the argo CLI command, we can graphically display the execution history of this workflow spec, which shows that the steps named `hello2a` and `hello2b` ran in parallel with each other. + +```sh STEP PODNAME ✔ arguments-parameters-rbm92 ├---✔ hello1 steps-rbm92-2023062412 @@ -249,12 +268,9 @@ STEP PODNAME ## DAG -As an alternative to specifying sequences of steps, you can define the workflow as a graph by specifying the dependencies of each task. -This can be simpler to maintain for complex workflows and allows for maximum parallelism when running tasks. +As an alternative to specifying sequences of steps, you can define the workflow as a directed-acyclic graph (DAG) by specifying the dependencies of each task. This can be simpler to maintain for complex workflows and allows for maximum parallelism when running tasks. -In the following workflow, step `A` runs first, as it has no dependencies. -Once `A` has finished, steps `B` and `C` run in parallel. -Finally, once `B` and `C` have completed, step `D` can run. +In the following workflow, step `A` runs first, as it has no dependencies. Once `A` has finished, steps `B` and `C` run in parallel. Finally, once `B` and `C` have completed, step `D` can run. ```yaml apiVersion: argoproj.io/v1alpha1 @@ -295,18 +311,18 @@ spec: parameters: [{name: message, value: D}] ``` -The dependency graph may have [multiple roots](./dag-multiroot.yaml). -The templates called from a dag or steps template can themselves be dag or steps templates. This can allow for complex workflows to be split into manageable pieces. +The dependency graph may have [multiple roots](./dag-multiroot.yaml). The templates called from a DAG or steps template can themselves be DAG or steps templates. This can allow for complex workflows to be split into manageable pieces. ## Artifacts **Note:** You will need to have configured an artifact repository to run this example. 
-[Configuring an artifact repository here](https://github.com/argoproj/argo/blob/master/ARTIFACT_REPO.md). +[Configuring an artifact repository here](https://github.com/cyrusbiotechnology/argo/blob/master/ARTIFACT_REPO.md). When running workflows, it is very common to have steps that generate or consume artifacts. Often, the output artifacts of one step may be used as input artifacts to a subsequent step. -The below workflow spec consists of two steps that run in sequence. The first step named `generate-artifact` will generate an artifact using the `whalesay` template which will be consumed by the second step named `print-message` that consumes the generated artifact. +The below workflow spec consists of two steps that run in sequence. The first step named `generate-artifact` will generate an artifact using the `whalesay` template that will be consumed by the second step named `print-message` that then consumes the generated artifact. + ```yaml apiVersion: argoproj.io/v1alpha1 kind: Workflow @@ -352,14 +368,14 @@ spec: command: [sh, -c] args: ["cat /tmp/message"] ``` -The `whalesay` template uses the `cowsay` command to generate a file named `/tmp/hello-world.txt`. It then `outputs` this file as an artifact named `hello-art`. In general, the artifact's `path` may be a directory rather than just a file. -The `print-message` template takes an input artifact named `message`, unpacks it at the `path` named `/tmp/message` and then prints the contents of `/tmp/message` using the `cat` command. -The `artifact-example` template passes the `hello-art` artifact generated as an output of the `generate-artifact` step as the `message` input artifact to the `print-message` step. -DAG templates use the tasks prefix to refer to another task, for example `{{tasks.generate-artifact.outputs.artifacts.hello-art}}`. + +The `whalesay` template uses the `cowsay` command to generate a file named `/tmp/hello-world.txt`. It then `outputs` this file as an artifact named `hello-art`. In general, the artifact's `path` may be a directory rather than just a file. The `print-message` template takes an input artifact named `message`, unpacks it at the `path` named `/tmp/message` and then prints the contents of `/tmp/message` using the `cat` command. +The `artifact-example` template passes the `hello-art` artifact generated as an output of the `generate-artifact` step as the `message` input artifact to the `print-message` step. DAG templates use the tasks prefix to refer to another task, for example `{{tasks.generate-artifact.outputs.artifacts.hello-art}}`. ## The Structure of Workflow Specs -We now know enough about the basic components of a workflow spec to review its basic structure. +We now know enough about the basic components of a workflow spec to review its basic structure: + - Kubernetes header including metadata - Spec body - Entrypoint invocation with optionally arguments @@ -374,11 +390,11 @@ We now know enough about the basic components of a workflow spec to review its b To summarize, workflow specs are composed of a set of Argo templates where each template consists of an optional input section, an optional output section and either a container invocation or a list of steps where each step invokes another template. -Note that the controller section of the workflow spec will accept the same options as the controller section of a pod spec, including but not limited to env vars, secrets, and volume mounts. Similarly, for volume claims and volumes. 
+Note that the controller section of the workflow spec will accept the same options as the controller section of a pod spec, including but not limited to environment variables, secrets, and volume mounts. Similarly, for volume claims and volumes. ## Secrets -Argo supports the same secrets syntax and mechanisms as Kubernetes Pod specs, which allows access to secrets as environment variables or volume mounts. -- https://kubernetes.io/docs/concepts/configuration/secret/ + +Argo supports the same secrets syntax and mechanisms as Kubernetes Pod specs, which allows access to secrets as environment variables or volume mounts. See the [Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/) for more information. ```yaml # To run this example, first create the secret by running: @@ -418,7 +434,9 @@ spec: ``` ## Scripts & Results -Often times, we just want a template that executes a script specified as a here-script (aka. here document) in the workflow spec. + +Often, we just want a template that executes a script specified as a here-script (also known as a `here document`) in the workflow spec. This example shows how to do that: + ```yaml apiVersion: argoproj.io/v1alpha1 kind: Workflow @@ -471,13 +489,15 @@ spec: command: [sh, -c] args: ["echo result was: {{inputs.parameters.message}}"] ``` -The `script` keyword allows the specification of the script body using the `source` tag. This creates a temporary file containing the script body and then passes the name of the temporary file as the final parameter to `command`, which should be an interpreter that executes the script body.. + +The `script` keyword allows the specification of the script body using the `source` tag. This creates a temporary file containing the script body and then passes the name of the temporary file as the final parameter to `command`, which should be an interpreter that executes the script body. The use of the `script` feature also assigns the standard output of running the script to a special output parameter named `result`. This allows you to use the result of running the script itself in the rest of the workflow spec. In this example, the result is simply echoed by the print-message template. ## Output Parameters Output parameters provide a general mechanism to use the result of a step as a parameter rather than as an artifact. This allows you to use the result from any type of step, not just a `script`, for conditional tests, loops, and arguments. Output parameters work similarly to `script result` except that the value of the output parameter is set to the contents of a generated file rather than the contents of `stdout`. 
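+
+The wiring boils down to two pieces: the producing template writes a file and exposes it under `outputs.parameters` with `valueFrom.path`, and a later step reads the value back with `{{steps.<step-name>.outputs.parameters.<param-name>}}`. Here is a minimal sketch of just that wiring (the `main`, `produce`, and `answer` names are illustrative; `print-message` is assumed to be defined as in the earlier examples). The complete runnable example follows below:
+
+```yaml
+  templates:
+  - name: main
+    steps:
+    - - name: produce                 # runs first and writes the file
+        template: produce
+    - - name: consume                 # runs after produce and receives its output parameter
+        template: print-message
+        arguments:
+          parameters:
+          - name: message
+            value: "{{steps.produce.outputs.parameters.answer}}"
+
+  - name: produce
+    container:
+      image: alpine:latest
+      command: [sh, -c]
+      args: ["echo -n 42 > /tmp/answer.txt"]
+    outputs:
+      parameters:
+      - name: answer                  # parameter value = contents of the generated file
+        valueFrom:
+          path: /tmp/answer.txt
+```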
+ ```yaml apiVersion: argoproj.io/v1alpha1 kind: Workflow @@ -502,12 +522,12 @@ spec: container: image: docker/whalesay:latest command: [sh, -c] - args: ["echo -n hello world > /tmp/hello_world.txt"] #generate the content of hello_world.txt + args: ["echo -n hello world > /tmp/hello_world.txt"] # generate the content of hello_world.txt outputs: parameters: - - name: hello-param #name of output parameter + - name: hello-param # name of output parameter valueFrom: - path: /tmp/hello_world.txt #set the value of hello-param to the contents of this hello-world.txt + path: /tmp/hello_world.txt # set the value of hello-param to the contents of this hello-world.txt - name: print-message inputs: @@ -523,7 +543,8 @@ DAG templates use the tasks prefix to refer to another task, for example `{{task ## Loops -When writing workflows, it is often very useful to be able to iterate over a set of inputs. +When writing workflows, it is often very useful to be able to iterate over a set of inputs as shown in this example: + ```yaml apiVersion: argoproj.io/v1alpha1 kind: Workflow @@ -540,9 +561,9 @@ spec: parameters: - name: message value: "{{item}}" - withItems: #invoke whalesay once for each item in parallel - - hello world #item 1 - - goodbye world #item 2 + withItems: # invoke whalesay once for each item in parallel + - hello world # item 1 + - goodbye world # item 2 - name: whalesay inputs: @@ -554,7 +575,8 @@ spec: args: ["{{inputs.parameters.message}}"] ``` -We can also iterate over a sets of items. +We can also iterate over sets of items: + ```yaml apiVersion: argoproj.io/v1alpha1 kind: Workflow @@ -590,7 +612,8 @@ spec: args: [/etc/os-release] ``` -We can pass lists of items as parameters. +We can pass lists of items as parameters: + ```yaml apiVersion: argoproj.io/v1alpha1 kind: Workflow @@ -600,7 +623,7 @@ spec: entrypoint: loop-param-arg-example arguments: parameters: - - name: os-list #a list of items + - name: os-list # a list of items value: | [ { "image": "debian", "tag": "9.1" }, @@ -623,7 +646,7 @@ spec: value: "{{item.image}}" - name: tag value: "{{item.tag}}" - withParam: "{{inputs.parameters.os-list}}" #parameter specifies the list to iterate over + withParam: "{{inputs.parameters.os-list}}" # parameter specifies the list to iterate over # This template is the same as in the previous example - name: cat-os-release @@ -638,6 +661,7 @@ spec: ``` We can even dynamically generate the list of items to iterate over! + ```yaml apiVersion: argoproj.io/v1alpha1 kind: Workflow @@ -680,7 +704,9 @@ spec: ``` ## Conditionals -We also support conditional execution. + +We also support conditional execution as shown in this example: + ```yaml apiVersion: argoproj.io/v1alpha1 kind: Workflow @@ -696,10 +722,10 @@ spec: template: flip-coin # evaluate the result in parallel - - name: heads - template: heads #invoke heads template if "heads" + template: heads # call heads template if "heads" when: "{{steps.flip-coin.outputs.result}} == heads" - name: tails - template: tails #invoke tails template if "tails" + template: tails # call tails template if "tails" when: "{{steps.flip-coin.outputs.result}} == tails" # Return heads or tails based on a random number @@ -726,7 +752,9 @@ spec: ``` ## Recursion + Templates can recursively invoke each other! In this variation of the above coin-flip template, we continue to flip coins until it comes up heads. 
+ ```yaml apiVersion: argoproj.io/v1alpha1 kind: Workflow @@ -742,9 +770,9 @@ spec: template: flip-coin # evaluate the result in parallel - - name: heads - template: heads #invoke heads template if "heads" + template: heads # call heads template if "heads" when: "{{steps.flip-coin.outputs.result}} == heads" - - name: tails #keep flipping coins if "tails" + - name: tails # keep flipping coins if "tails" template: coinflip when: "{{steps.flip-coin.outputs.result}} == tails" @@ -765,7 +793,8 @@ spec: ``` Here's the result of a couple of runs of coinflip for comparison. -``` + +```sh argo get coinflip-recursive-tzcb5 STEP PODNAME MESSAGE @@ -789,17 +818,18 @@ STEP PODNAME MESSAGE └-·-✔ heads coinflip-recursive-tzcb5-4080323273 └-○ tails ``` -In the first run, the coin immediately comes up heads and we stop. -In the second run, the coin comes up tail three times before it finally comes up heads and we stop. + +In the first run, the coin immediately comes up heads and we stop. In the second run, the coin comes up tail three times before it finally comes up heads and we stop. ## Exit handlers -An exit handler is a template that always executes, irrespective of success or failure, at the end of the workflow. +An exit handler is a template that *always* executes, irrespective of success or failure, at the end of the workflow. Some common use cases of exit handlers are: + - cleaning up after a workflow runs -- sending notifications of workflow status (e.g. e-mail/slack) -- posting the pass/fail status to a webhook result (e.g. github build result) +- sending notifications of workflow status (e.g., e-mail/Slack) +- posting the pass/fail status to a webhook result (e.g. GitHub build result) - resubmitting or submitting another workflow ```yaml @@ -809,7 +839,7 @@ metadata: generateName: exit-handlers- spec: entrypoint: intentional-fail - onExit: exit-handler #invoke exit-hander template at end of the workflow + onExit: exit-handler # invoke exit-hander template at end of the workflow templates: # primary workflow template - name: intentional-fail @@ -850,7 +880,9 @@ spec: ``` ## Timeouts -To limit the elapsed time for a workflow, you can set `activeDeadlineSeconds`. + +To limit the elapsed time for a workflow, you can set the variable `activeDeadlineSeconds`. + ```yaml # To enforce a timeout for a container template, specify a value for activeDeadlineSeconds. apiVersion: argoproj.io/v1alpha1 @@ -865,11 +897,13 @@ spec: image: alpine:latest command: [sh, -c] args: ["echo sleeping for 1m; sleep 60; echo done"] - activeDeadlineSeconds: 10 #terminate container template after 10 seconds + activeDeadlineSeconds: 10 # terminate container template after 10 seconds ``` ## Volumes + The following example dynamically creates a volume and then uses the volume in a two step workflow. 
+ ```yaml apiVersion: argoproj.io/v1alpha1 kind: Workflow @@ -877,14 +911,14 @@ metadata: generateName: volumes-pvc- spec: entrypoint: volumes-pvc-example - volumeClaimTemplates: #define volume, same syntax as k8s Pod spec + volumeClaimTemplates: # define volume, same syntax as k8s Pod spec - metadata: - name: workdir #name of volume claim + name: workdir # name of volume claim spec: accessModes: [ "ReadWriteOnce" ] resources: requests: - storage: 1Gi #Gi => 1024 * 1024 * 1024 + storage: 1Gi # Gi => 1024 * 1024 * 1024 templates: - name: volumes-pvc-example @@ -900,7 +934,7 @@ spec: command: [sh, -c] args: ["echo generating message in volume; cowsay hello world | tee /mnt/vol/hello_world.txt"] # Mount workdir volume at /mnt/vol before invoking docker/whalesay - volumeMounts: #same syntax as k8s Pod spec + volumeMounts: # same syntax as k8s Pod spec - name: workdir mountPath: /mnt/vol @@ -910,15 +944,16 @@ spec: command: [sh, -c] args: ["echo getting message from volume; find /mnt/vol; cat /mnt/vol/hello_world.txt"] # Mount workdir volume at /mnt/vol before invoking docker/whalesay - volumeMounts: #same syntax as k8s Pod spec + volumeMounts: # same syntax as k8s Pod spec - name: workdir mountPath: /mnt/vol ``` -Volumes are a very useful way to move large amounts of data from one step in a workflow to another. -Depending on the system, some volumes may be accessible concurrently from multiple steps. + +Volumes are a very useful way to move large amounts of data from one step in a workflow to another. Depending on the system, some volumes may be accessible concurrently from multiple steps. In some cases, you want to access an already existing volume rather than creating/destroying one dynamically. + ```yaml # Define Kubernetes PVC kind: PersistentVolumeClaim @@ -973,7 +1008,9 @@ spec: ``` ## Daemon Containers -Argo workflows can start containers that run in the background (aka. daemon containers) while the workflow itself continues execution. The daemons will be automatically destroyed when the workflow exits the template scope in which the daemon was invoked. Deamons containers are useful for starting up services to be tested or to be used in testing (aka. fixtures). We also find it very useful when running large simulations to spin up a database as a daemon for collecting and organizing the results. The big advantage of daemons compared with sidecars is that their existence can persist across multiple steps or even the entire workflow. + +Argo workflows can start containers that run in the background (also known as `daemon containers`) while the workflow itself continues execution. Note that the daemons will be *automatically destroyed* when the workflow exits the template scope in which the daemon was invoked. Daemon containers are useful for starting up services to be tested or to be used in testing (e.g., fixtures). We also find it very useful when running large simulations to spin up a database as a daemon for collecting and organizing the results. The big advantage of daemons compared with sidecars is that their existence can persist across multiple steps or even the entire workflow. 
+ ```yaml apiVersion: argoproj.io/v1alpha1 kind: Workflow @@ -985,35 +1022,35 @@ spec: - name: daemon-example steps: - - name: influx - template: influxdb #start an influxdb as a daemon (see the influxdb template spec below) + template: influxdb # start an influxdb as a daemon (see the influxdb template spec below) - - - name: init-database #initialize influxdb + - - name: init-database # initialize influxdb template: influxdb-client arguments: parameters: - name: cmd value: curl -XPOST 'http://{{steps.influx.ip}}:8086/query' --data-urlencode "q=CREATE DATABASE mydb" - - - name: producer-1 #add entries to influxdb + - - name: producer-1 # add entries to influxdb template: influxdb-client arguments: parameters: - name: cmd value: for i in $(seq 1 20); do curl -XPOST 'http://{{steps.influx.ip}}:8086/write?db=mydb' -d "cpu,host=server01,region=uswest load=$i" ; sleep .5 ; done - - name: producer-2 #add entries to influxdb + - name: producer-2 # add entries to influxdb template: influxdb-client arguments: parameters: - name: cmd value: for i in $(seq 1 20); do curl -XPOST 'http://{{steps.influx.ip}}:8086/write?db=mydb' -d "cpu,host=server02,region=uswest load=$((RANDOM % 100))" ; sleep .5 ; done - - name: producer-3 #add entries to influxdb + - name: producer-3 # add entries to influxdb template: influxdb-client arguments: parameters: - name: cmd value: curl -XPOST 'http://{{steps.influx.ip}}:8086/write?db=mydb' -d 'cpu,host=server03,region=useast load=15.4' - - - name: consumer #consume intries from influxdb + - - name: consumer # consume intries from influxdb template: influxdb-client arguments: parameters: @@ -1021,11 +1058,11 @@ spec: value: curl --silent -G http://{{steps.influx.ip}}:8086/query?pretty=true --data-urlencode "db=mydb" --data-urlencode "q=SELECT * FROM cpu" - name: influxdb - daemon: true #start influxdb as a daemon + daemon: true # start influxdb as a daemon container: image: influxdb:1.2 - restartPolicy: Always #restart container if it fails - readinessProbe: #wait for readinessProbe to succeed + restartPolicy: Always # restart container if it fails + readinessProbe: # wait for readinessProbe to succeed httpGet: path: /ping port: 8086 @@ -1047,8 +1084,9 @@ spec: DAG templates use the tasks prefix to refer to another task, for example `{{tasks.influx.ip}}`. ## Sidecars -A sidecar is another container that executes concurrently in the same pod as the "main" container and is useful -in creating multi-container pods. + +A sidecar is another container that executes concurrently in the same pod as the main container and is useful in creating multi-container pods. + ```yaml apiVersion: argoproj.io/v1alpha1 kind: Workflow @@ -1068,10 +1106,13 @@ spec: - name: nginx image: nginx:1.13 ``` -In the above example, we create a sidecar container that runs nginx as a simple web server. The order in which containers may come up is random. This is why the 'main' container polls the nginx container until it is ready to service requests. This is a good design pattern when designing multi-container systems. Always wait for any services you need to come up before running your main code. + +In the above example, we create a sidecar container that runs nginx as a simple web server. The order in which containers come up is random, so in this example the main container polls the nginx container until it is ready to service requests. This is a good design pattern when designing multi-container systems: always wait for any services you need to come up before running your main code. 
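+
+To make the polling pattern explicit, a minimal sketch of such a template might look like the following (the template name, image, and URL here are illustrative, not taken from the example above):
+
+```yaml
+  - name: main-with-nginx-sidecar
+    container:
+      image: alpine:latest
+      command: [sh, -c]
+      # keep polling the nginx sidecar until it responds, then do the real work
+      args: ["until wget -q -O /dev/null http://127.0.0.1; do echo waiting for nginx; sleep 2; done; echo nginx is up"]
+    sidecars:
+    - name: nginx
+      image: nginx:1.13
+```
+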
## Hardwired Artifacts -With Argo, you can use any container image that you like to generate any kind of artifact. In practice, however, we find certain types of artifacts are very common and provide a more convenient way to generate and use these artifacts. In particular, we have "hardwired" support for git, http and s3 artifacts. + +With Argo, you can use any container image that you like to generate any kind of artifact. In practice, however, we find certain types of artifacts are very common, so there is built-in support for git, http, and s3 artifacts. + ```yaml apiVersion: argoproj.io/v1alpha1 kind: Workflow @@ -1088,7 +1129,7 @@ spec: - name: argo-source path: /src git: - repo: https://github.com/argoproj/argo.git + repo: https://github.com/cyrusbiotechnology/argo.git revision: "master" # Download kubectl 1.8.0 and place it at /bin/kubectl - name: kubectl @@ -1115,10 +1156,10 @@ spec: args: ["ls -l /src /bin/kubectl /s3"] ``` - ## Kubernetes Resources In many cases, you will want to manage Kubernetes resources from Argo workflows. The resource template allows you to create, delete or updated any type of Kubernetes resource. + ```yaml # in a workflow. The resource template type accepts any k8s manifest # (including CRDs) and can perform any kubectl action against it (e.g. create, @@ -1131,8 +1172,8 @@ spec: entrypoint: pi-tmpl templates: - name: pi-tmpl - resource: #indicates that this is a resource template - action: create #can be any kubectl action (e.g. create, delete, apply, patch) + resource: # indicates that this is a resource template + action: create # can be any kubectl action (e.g. create, delete, apply, patch) # The successCondition and failureCondition are optional expressions. # If failureCondition is true, the step is considered failed. # If successCondition is true, the step is considered successful. @@ -1162,9 +1203,43 @@ spec: Resources created in this way are independent of the workflow. If you want the resource to be deleted when the workflow is deleted then you can use [Kubernetes garbage collection](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) with the workflow resource as an owner reference ([example](./k8s-owner-reference.yaml)). -## Docker-in-Docker (aka. DinD) Using Sidecars -An application of sidecars is to implement DinD (Docker-in-Docker). -DinD is useful when you want to run Docker commands from inside a container. For example, you may want to build and push a container image from inside your build container. In the following example, we use the docker:dind container to run a Docker daemon in a sidecar and give the main container access to the daemon. +**Note:** +When patching, the resource will accept another attribute, `mergeStrategy`, which can either be `strategic`, `merge`, or `json`. If this attribute is not supplied, it will default to `strategic`. Keep in mind that Custom Resources cannot be patched with `strategic`, so a different strategy must be chosen. 
For example, suppose you have the [CronTab CustomResourceDefinition](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#create-a-customresourcedefinition) defined, and the following instance of a CronTab:
+
+```yaml
+apiVersion: "stable.example.com/v1"
+kind: CronTab
+spec:
+  cronSpec: "* * * * */5"
+  image: my-awesome-cron-image
+```
+
+This CronTab can be modified using the following Argo Workflow:
+
+```yaml
+apiVersion: argoproj.io/v1alpha1
+kind: Workflow
+metadata:
+  generateName: k8s-patch-
+spec:
+  entrypoint: cront-tmpl
+  templates:
+  - name: cront-tmpl
+    resource:
+      action: patch
+      mergeStrategy: merge # Must be one of [strategic merge json]
+      manifest: |
+        apiVersion: "stable.example.com/v1"
+        kind: CronTab
+        spec:
+          cronSpec: "* * * * */10"
+          image: my-awesome-cron-image
+```
+
+## Docker-in-Docker Using Sidecars
+
+An application of sidecars is to implement Docker-in-Docker (DinD). DinD is useful when you want to run Docker commands from inside a container. For example, you may want to build and push a container image from inside your build container. In the following example, we use the docker:dind container to run a Docker daemon in a sidecar and give the main container access to the daemon.
+
 ```yaml
 apiVersion: argoproj.io/v1alpha1
 kind: Workflow
@@ -1179,13 +1254,13 @@ spec:
       command: [sh, -c]
       args: ["until docker ps; do sleep 3; done; docker run --rm debian:latest cat /etc/os-release"]
       env:
-      - name: DOCKER_HOST #the docker daemon can be access on the standard port on localhost
+      - name: DOCKER_HOST # the docker daemon can be accessed on the standard port on localhost
        value: 127.0.0.1
    sidecars:
    - name: dind
-      image: docker:17.10-dind #Docker already provides an image for running a Docker daemon
+      image: docker:17.10-dind # Docker already provides an image for running a Docker daemon
      securityContext:
-        privileged: true #the Docker daemon can only run in a privileged container
+        privileged: true # the Docker daemon can only run in a privileged container
      # mirrorVolumeMounts will mount the same volumes specified in the main container
      # to the sidecar (including artifacts), at the same mountPaths. This enables
      # dind daemon to (partially) see the same filesystem as the main container in
@@ -1193,7 +1268,49 @@ spec:
      mirrorVolumeMounts: true
 ```
 
-## Continuous integration example
+## Custom Template Variable Reference
+
+In this example, we can see how to use variable references from another template language (e.g., Jinja) in an Argo workflow template.
+Argo will validate and resolve only variables that start with an Argo-allowed prefix:
+***"item", "steps", "inputs", "outputs", "workflow", "tasks"***
+
+```yaml
+apiVersion: argoproj.io/v1alpha1
+kind: Workflow
+metadata:
+  generateName: custom-template-variable-
+spec:
+  entrypoint: hello-hello-hello
+
+  templates:
+  - name: hello-hello-hello
+    steps:
+    - - name: hello1
+        template: whalesay
+        arguments:
+          parameters: [{name: message, value: "hello1"}]
+    - - name: hello2a
+        template: whalesay
+        arguments:
+          parameters: [{name: message, value: "hello2a"}]
+      - name: hello2b
+        template: whalesay
+        arguments:
+          parameters: [{name: message, value: "hello2b"}]
+
+  - name: whalesay
+    inputs:
+      parameters:
+      - name: message
+    container:
+      image: docker/whalesay
+      command: [cowsay]
+      args: ["{{user.username}}"]
+
+```
+
+## Continuous Integration Example
+
 Continuous integration is a popular application for workflows. 
Currently, Argo does not provide event triggers for automatically kicking off your CI jobs, but we plan to do so in the near future. Until then, you can easily write a cron job that checks for new commits and kicks off the needed workflow, or use your existing Jenkins server to kick off the workflow. -A good example of a CI workflow spec is provided at https://github.com/argoproj/argo/tree/master/examples/influxdb-ci.yaml. Because it just uses the concepts that we've already covered and is somewhat long, we don't go into details here. +A good example of a CI workflow spec is provided at https://github.com/cyrusbiotechnology/argo/tree/master/examples/influxdb-ci.yaml. Because it just uses the concepts that we've already covered and is somewhat long, we don't go into details here. diff --git a/examples/artifact-disable-archive.yaml b/examples/artifact-disable-archive.yaml new file mode 100644 index 000000000000..444b01ac5b53 --- /dev/null +++ b/examples/artifact-disable-archive.yaml @@ -0,0 +1,51 @@ +# This example demonstrates the ability to disable the default behavior of archiving (tar.gz) +# when saving output artifacts. For directories, when archive is set to none, files in directory +# will be copied recursively in the case of S3. +apiVersion: argoproj.io/v1alpha1 +kind: Workflow +metadata: + generateName: artifact-disable-archive- +spec: + entrypoint: artifact-disable-archive + templates: + - name: artifact-disable-archive + steps: + - - name: generate-artifact + template: whalesay + - - name: consume-artifact + template: print-message + arguments: + artifacts: + - name: etc + from: "{{steps.generate-artifact.outputs.artifacts.etc}}" + - name: hello-txt + from: "{{steps.generate-artifact.outputs.artifacts.hello-txt}}" + + - name: whalesay + container: + image: docker/whalesay:latest + command: [sh, -c] + args: ["cowsay hello world | tee /tmp/hello_world.txt ; sleep 1"] + outputs: + artifacts: + - name: etc + path: /etc + archive: + none: {} + - name: hello-txt + path: /tmp/hello_world.txt + archive: + none: {} + + - name: print-message + inputs: + artifacts: + - name: etc + path: /tmp/etc + - name: hello-txt + path: /tmp/hello.txt + container: + image: alpine:latest + command: [sh, -c] + args: + - cat /tmp/hello.txt && cd /tmp/etc && find . diff --git a/examples/artifact-passing.yaml b/examples/artifact-passing.yaml index dd301b9ac116..90fdeacd3728 100644 --- a/examples/artifact-passing.yaml +++ b/examples/artifact-passing.yaml @@ -22,7 +22,7 @@ spec: container: image: docker/whalesay:latest command: [sh, -c] - args: ["cowsay hello world | tee /tmp/hello_world.txt"] + args: ["sleep 1; cowsay hello world | tee /tmp/hello_world.txt"] outputs: artifacts: - name: hello-art diff --git a/examples/artifact-path-placeholders.yaml b/examples/artifact-path-placeholders.yaml new file mode 100644 index 000000000000..3371b5e893c5 --- /dev/null +++ b/examples/artifact-path-placeholders.yaml @@ -0,0 +1,40 @@ +# This example demonstrates the how to refer to input and output artifact paths. +# Referring to the path instead of copy/pasting it prevents errors when paths change. 
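+# The command below refers to the declared locations through the placeholders
+# {{inputs.artifacts.text.path}}, {{outputs.artifacts.text.path}} and
+# {{outputs.parameters.actual-lines-count.path}}, which resolve to the corresponding
+# 'path' values declared in the template, so the literal paths never need to be repeated.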
+apiVersion: argoproj.io/v1alpha1 +kind: Workflow +metadata: + generateName: artifact-path-placeholders- +spec: + entrypoint: head-lines + arguments: + parameters: + - name: lines-count + value: 3 + artifacts: + - name: text + raw: + data: | + 1 + 2 + 3 + 4 + 5 + templates: + - name: head-lines + inputs: + parameters: + - name: lines-count + artifacts: + - name: text + path: /inputs/text/data + outputs: + parameters: + - name: actual-lines-count + valueFrom: + path: /outputs/actual-lines-count/data + artifacts: + - name: text + path: /outputs/text/data + container: + image: busybox + command: [sh, -c, 'head -n {{inputs.parameters.lines-count}} <"{{inputs.artifacts.text.path}}" | tee "{{outputs.artifacts.text.path}}" | wc -l > "{{outputs.parameters.actual-lines-count.path}}"'] diff --git a/examples/ci-output-artifact.yaml b/examples/ci-output-artifact.yaml index fababef1bb13..591fec3cea39 100644 --- a/examples/ci-output-artifact.yaml +++ b/examples/ci-output-artifact.yaml @@ -1,7 +1,7 @@ apiVersion: argoproj.io/v1alpha1 kind: Workflow metadata: - generateName: ci-example- + generateName: ci-output-artifact- spec: entrypoint: ci-example # a temporary volume, named workdir, will be used as a working @@ -75,7 +75,10 @@ spec: - name: release-artifact container: - image: debian:9.4 + image: alpine:3.8 + volumeMounts: + - name: workdir + mountPath: /go outputs: artifacts: - name: release diff --git a/examples/continue-on-fail.yaml b/examples/continue-on-fail.yaml new file mode 100644 index 000000000000..7681e99c597f --- /dev/null +++ b/examples/continue-on-fail.yaml @@ -0,0 +1,36 @@ +# Example on specifying parallelism on the outer workflow and limiting the number of its +# children workflowss to be run at the same time. +# +# If the parallelism of A is 1, the four steps of seq-step will run sequentially. 
+ +apiVersion: argoproj.io/v1alpha1 +kind: Workflow +metadata: + generateName: continue-on-fail- +spec: + entrypoint: workflow-ignore + templates: + - name: workflow-ignore + steps: + - - name: A + template: whalesay + - - name: B + template: whalesay + - name: C + template: intentional-fail + continueOn: + failed: true + - - name: D + template: whalesay + + - name: whalesay + container: + image: docker/whalesay:latest + command: [cowsay] + args: ["hello world"] + + - name: intentional-fail + container: + image: alpine:latest + command: [sh, -c] + args: ["echo intentional failure; exit 1"] diff --git a/examples/dag-continue-on-fail.yaml b/examples/dag-continue-on-fail.yaml new file mode 100644 index 000000000000..dc9600babb52 --- /dev/null +++ b/examples/dag-continue-on-fail.yaml @@ -0,0 +1,44 @@ +apiVersion: argoproj.io/v1alpha1 +kind: Workflow +metadata: + generateName: dag-contiue-on-fail- +spec: + entrypoint: workflow + templates: + - name: workflow + dag: + tasks: + - name: A + template: whalesay + - name: B + dependencies: [A] + template: intentional-fail + continueOn: + failed: true + - name: C + dependencies: [A] + template: whalesay + - name: D + dependencies: [B, C] + template: whalesay + - name: E + dependencies: [A] + template: intentional-fail + - name: F + dependencies: [A] + template: whalesay + - name: G + dependencies: [E, F] + template: whalesay + + - name: whalesay + container: + image: docker/whalesay:latest + command: [cowsay] + args: ["hello world"] + + - name: intentional-fail + container: + image: alpine:latest + command: [sh, -c] + args: ["echo intentional failure; exit 1"] \ No newline at end of file diff --git a/examples/dns-config.yaml b/examples/dns-config.yaml new file mode 100644 index 000000000000..35a621864827 --- /dev/null +++ b/examples/dns-config.yaml @@ -0,0 +1,22 @@ +apiVersion: argoproj.io/v1alpha1 +kind: Workflow # new type of k8s spec +metadata: + generateName: test-dns-config- # name of the workflow spec +spec: + entrypoint: whalesay # invoke the whalesay template + templates: + - name: whalesay # name of the template + container: + image: docker/whalesay + command: [cowsay] + args: ["hello world"] + resources: # limit the resources + limits: + memory: 32Mi + cpu: 100m + dnsConfig: + nameservers: + - 1.2.3.4 + options: + - name: ndots + value: "2" \ No newline at end of file diff --git a/examples/extended-errors.yaml b/examples/extended-errors.yaml new file mode 100644 index 000000000000..4fc948129376 --- /dev/null +++ b/examples/extended-errors.yaml @@ -0,0 +1,40 @@ +apiVersion: argoproj.io/v1alpha1 +kind: Workflow +metadata: + generateName: hello-world- +spec: + entrypoint: error-steps + templates: + - name: error-steps + steps: + - - name: warning-file + template: cowsay-file + - - name: error-stdout + template: cowsay-stdout + - name: cowsay-file + container: + image: docker/whalesay:latest + command: [sh, -c] + args: ["cowsay 'some message'> /tmp/output.txt"] + + warnings: + - name: WrongWord + source: /tmp/output.txt + patternMatched: "some.*" + message: "the word 'some' shouldn't be here" + + - name: cowsay-stdout + container: + image: docker/whalesay:latest + command: [cowsay] + args: ["what planet"] + errors: + - name: FailToSayHello + source: stdout + patternUnmatched: ".*hello.*" + message: "did't say hello" + warnings: + - name: NoPlanets + source: stdout + patternMatched: ".*planet.*" + message: "used the wrong word" \ No newline at end of file diff --git a/examples/global-outputs.yaml b/examples/global-outputs.yaml index 
e621b7c1fb5f..f2a270aa6141 100644 --- a/examples/global-outputs.yaml +++ b/examples/global-outputs.yaml @@ -19,7 +19,7 @@ spec: container: image: alpine:3.7 command: [sh, -c] - args: ["echo -n hello world > /tmp/hello_world.txt"] + args: ["sleep 1; echo -n hello world > /tmp/hello_world.txt"] outputs: parameters: # export a global parameter. The parameter will be programatically available in the completed diff --git a/examples/hdfs-artifact.yaml b/examples/hdfs-artifact.yaml new file mode 100644 index 000000000000..0031b756387f --- /dev/null +++ b/examples/hdfs-artifact.yaml @@ -0,0 +1,81 @@ +# This example demonstrates the use of hdfs as the store for artifacts. This example assumes the following: +# 1. you have hdfs running in the same namespace as where this workflow will be run and you have created a repo with the name "generic-local" +# 2. you have created a kubernetes secret for storing hdfs username/password. To create kubernetes secret required for this example, +# run the following command: +# $ kubectl create secret generic my-hdfs-credentials --from-literal=username= --from-literal=password= + +apiVersion: argoproj.io/v1alpha1 +kind: Workflow +metadata: + generateName: hdfs-artifact- +spec: + entrypoint: artifact-example + templates: + - name: artifact-example + steps: + - - name: generate-artifact + template: whalesay + - - name: consume-artifact + template: print-message + arguments: + artifacts: + - name: message + from: "{{steps.generate-artifact.outputs.artifacts.hello-art}}" + + - name: whalesay + container: + image: docker/whalesay:latest + command: [sh, -c] + args: ["cowsay hello world | tee /tmp/hello_world.txt"] + outputs: + artifacts: + - name: hello-art + path: /tmp/hello_world.txt + hdfs: + addresses: + - my-hdfs-namenode-0.my-hdfs-namenode.default.svc.cluster.local:8020 + - my-hdfs-namenode-1.my-hdfs-namenode.default.svc.cluster.local:8020 + path: "/tmp/argo/foo" + hdfsUser: root + force: true + # krbCCacheSecret: + # name: krb + # key: krb5cc_0 + # krbKeytabSecret: + # name: krb + # key: user1.keytab + # krbUsername: "user1" + # krbRealm: "MYCOMPANY.COM" + # krbConfigConfigMap: + # name: my-hdfs-krb5-config + # key: krb5.conf + # krbServicePrincipalName: hdfs/_HOST + + - name: print-message + inputs: + artifacts: + - name: message + path: /tmp/message + hdfs: + addresses: + - my-hdfs-namenode-0.my-hdfs-namenode.default.svc.cluster.local:8020 + - my-hdfs-namenode-1.my-hdfs-namenode.default.svc.cluster.local:8020 + path: "/tmp/argo/foo" + hdfsUser: root + force: true + # krbCCacheSecret: + # name: krb + # key: krb5cc_0 + # krbKeytabSecret: + # name: krb + # key: user1.keytab + # krbUsername: "user1" + # krbRealm: "MYCOMPANY.COM" + # krbConfigConfigMap: + # name: my-hdfs-krb5-config + # key: krb5.conf + # krbServicePrincipalName: hdfs/_HOST + container: + image: alpine:latest + command: [sh, -c] + args: ["cat /tmp/message"] diff --git a/examples/influxdb-ci.yaml b/examples/influxdb-ci.yaml index 121a6fd2d8a0..d26c765fdd78 100644 --- a/examples/influxdb-ci.yaml +++ b/examples/influxdb-ci.yaml @@ -194,6 +194,10 @@ spec: - name: influxd path: /app daemon: true + outputs: + artifacts: + - name: data + path: /var/lib/influxdb/data container: image: debian:9.4 readinessProbe: diff --git a/examples/init-container.yaml b/examples/init-container.yaml new file mode 100644 index 000000000000..a113fce55f18 --- /dev/null +++ b/examples/init-container.yaml @@ -0,0 +1,22 @@ +apiVersion: argoproj.io/v1alpha1 +kind: Workflow +metadata: + generateName: init-container- +spec: + 
entrypoint: init-container-example + templates: + - name: init-container-example + container: + image: alpine:latest + command: ["echo", "bye"] + volumeMounts: + - name: foo + mountPath: /foo + initContainers: + - name: hello + image: alpine:latest + command: ["echo", "hello"] + mirrorVolumeMounts: true + volumes: + - name: foo + emptyDir: diff --git a/examples/input-artifact-git.yaml b/examples/input-artifact-git.yaml index 8720db459829..c373f66a59b1 100644 --- a/examples/input-artifact-git.yaml +++ b/examples/input-artifact-git.yaml @@ -13,14 +13,14 @@ spec: - name: argo-source path: /src git: - repo: https://github.com/argoproj/argo.git + repo: https://github.com/cyrusbiotechnology/argo.git revision: "v2.1.1" # For private repositories, create a k8s secret containing the git credentials and # reference the secret keys in the secret selectors: usernameSecret, passwordSecret, # or sshPrivateKeySecret. # NOTE: when authenticating via sshPrivateKeySecret, the repo URL should supplied in its # SSH format (e.g. git@github.com:argoproj/argo.git). Similarly, when authenticating via - # basic auth, the URL should be in its HTTP form (e.g. https://github.com/argoproj/argo.git) + # basic auth, the URL should be in its HTTP form (e.g. https://github.com/cyrusbiotechnology/argo.git) # usernameSecret: # name: github-creds # key: username diff --git a/examples/output-parameter.yaml b/examples/output-parameter.yaml index c9ccf686955f..d15f30f466ce 100644 --- a/examples/output-parameter.yaml +++ b/examples/output-parameter.yaml @@ -32,7 +32,7 @@ spec: container: image: docker/whalesay:latest command: [sh, -c] - args: ["echo -n hello world > /tmp/hello_world.txt"] + args: ["sleep 1; echo -n hello world > /tmp/hello_world.txt"] outputs: parameters: - name: hello-param diff --git a/examples/parameter-aggregation-dag.yaml b/examples/parameter-aggregation-dag.yaml index 49bc3bc6a24f..3a534e153bf0 100644 --- a/examples/parameter-aggregation-dag.yaml +++ b/examples/parameter-aggregation-dag.yaml @@ -49,6 +49,7 @@ spec: command: [sh, -xc] args: - | + sleep 1 && echo {{inputs.parameters.num}} > /tmp/num && if [ $(({{inputs.parameters.num}}%2)) -eq 0 ]; then echo "even" > /tmp/even; diff --git a/examples/parameter-aggregation.yaml b/examples/parameter-aggregation.yaml index f7df1f7f053a..4baec52e8af1 100644 --- a/examples/parameter-aggregation.yaml +++ b/examples/parameter-aggregation.yaml @@ -46,6 +46,7 @@ spec: command: [sh, -xc] args: - | + sleep 1 && echo {{inputs.parameters.num}} > /tmp/num && if [ $(({{inputs.parameters.num}}%2)) -eq 0 ]; then echo "even" > /tmp/even; diff --git a/examples/sidecar-dind.yaml b/examples/sidecar-dind.yaml index 467bb101dade..7bf8b67998c6 100644 --- a/examples/sidecar-dind.yaml +++ b/examples/sidecar-dind.yaml @@ -19,7 +19,7 @@ spec: value: 127.0.0.1 sidecars: - name: dind - image: docker:17.10-dind + image: docker:18.09.4-dind securityContext: privileged: true # mirrorVolumeMounts will mount the same volumes specified in the main container diff --git a/gometalinter.json b/gometalinter.json index 408aa6a9c98a..42b62bf2758f 100644 --- a/gometalinter.json +++ b/gometalinter.json @@ -19,6 +19,7 @@ ], "Exclude": [ "pkg/client", - "vendor/" + "vendor/", + ".*warning.*fmt.Fprint" ] } diff --git a/hack/gen-openapi-spec/main.go b/hack/gen-openapi-spec/main.go index 8c728cbe36cb..1603c62ae565 100644 --- a/hack/gen-openapi-spec/main.go +++ b/hack/gen-openapi-spec/main.go @@ -7,7 +7,7 @@ import ( "os" "strings" - wfv1 
"github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" "github.com/go-openapi/spec" "k8s.io/kube-openapi/pkg/common" ) @@ -50,11 +50,11 @@ func main() { // swaggify converts the github package // e.g.: -// github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Workflow +// github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Workflow // to: // io.argoproj.workflow.v1alpha1.Workflow func swaggify(name string) string { - name = strings.Replace(name, "github.com/argoproj/argo/pkg/apis", "argoproj.io", -1) + name = strings.Replace(name, "github.com/cyrusbiotechnology/argo/pkg/apis", "argoproj.io", -1) parts := strings.Split(name, "/") hostParts := strings.Split(parts[0], ".") // reverses something like k8s.io to io.k8s diff --git a/hack/ssh_known_hosts b/hack/ssh_known_hosts new file mode 100644 index 000000000000..31a7bae3fce5 --- /dev/null +++ b/hack/ssh_known_hosts @@ -0,0 +1,8 @@ +# This file was automatically generated. DO NOT EDIT +bitbucket.org ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAubiN81eDcafrgMeLzaFPsw2kNvEcqTKl/VqLat/MaB33pZy0y3rJZtnqwR2qOOvbwKZYKiEO1O6VqNEBxKvJJelCq0dTXWT5pbO2gDXC6h6QDXCaHo6pOHGPUy+YBaGQRGuSusMEASYiWunYN0vCAI8QaXnWMXNMdFP3jHAJH0eDsoiGnLPBlBp4TNm6rYI74nMzgz3B9IikW4WVK+dc8KZJZWYjAuORU3jc1c/NPskD2ASinf8v3xnfXeukU0sJ5N6m5E8VLjObPEO+mN2t/FZTMZLiFqPWc/ALSqnMnnhwrNi2rbfg/rd/IpL8Le3pSBne8+seeFVBoGqzHM9yXw== +github.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ== +gitlab.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFSMqzJeV9rUzU4kWitGjeR4PWSa29SPqJ1fVkhtj3Hw9xjLVXVYrU9QlYWrOLXBpQ6KWjbjTDTdDkoohFzgbEY= +gitlab.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAfuCHKVTjquxvt6CM6tdG4SLp1Btn/nOeHHE5UOzRdf +gitlab.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCsj2bNKTBSpIYDEGk9KxsGh3mySTRgMtXL583qmBpzeQ+jqCMRgBqB98u3z++J1sKlXHWfM9dyhSevkMwSbhoR8XIq/U0tCNyokEi/ueaBMCvbcTHhO7FcwzY92WK4Yt0aGROY5qX2UKSeOvuP4D6TPqKF1onrSzH9bx9XUf2lEdWT/ia1NEKjunUqu1xOB/StKDHMoX4/OKyIzuS0q/T1zOATthvasJFoPrAjkohTyaDUz2LN5JoH839hViyEG82yB+MjcFV5MU3N1l1QL3cVUCh93xSaua1N85qivl+siMkPGbO5xR/En4iEY6K2XPASUEMaieWVNTRCtJ4S8H+9 +ssh.dev.azure.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Hr1oTWqNqOlzGJOfGJ4NakVyIzf1rXYd4d7wo6jBlkLvCA4odBlL0mDUyZ0/QUfTTqeu+tm22gOsv+VrVTMk6vwRU75gY/y9ut5Mb3bR5BV58dKXyq9A9UeB5Cakehn5Zgm6x1mKoVyf+FFn26iYqXJRgzIZZcZ5V6hrE0Qg39kZm4az48o0AUbf6Sp4SLdvnuMa2sVNwHBboS7EJkm57XQPVU3/QpyNLHbWDdzwtrlS+ez30S3AdYhLKEOxAG8weOnyrtLJAUen9mTkol8oII1edf7mWWbWVf0nBmly21+nZcmCTISQBtdcyPaEno7fFQMDD26/s0lfKob4Kw8H +vs-ssh.visualstudio.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Hr1oTWqNqOlzGJOfGJ4NakVyIzf1rXYd4d7wo6jBlkLvCA4odBlL0mDUyZ0/QUfTTqeu+tm22gOsv+VrVTMk6vwRU75gY/y9ut5Mb3bR5BV58dKXyq9A9UeB5Cakehn5Zgm6x1mKoVyf+FFn26iYqXJRgzIZZcZ5V6hrE0Qg39kZm4az48o0AUbf6Sp4SLdvnuMa2sVNwHBboS7EJkm57XQPVU3/QpyNLHbWDdzwtrlS+ez30S3AdYhLKEOxAG8weOnyrtLJAUen9mTkol8oII1edf7mWWbWVf0nBmly21+nZcmCTISQBtdcyPaEno7fFQMDD26/s0lfKob4Kw8H diff --git a/hack/update-codegen.sh b/hack/update-codegen.sh index 06b8b8a5682f..ecdbf027b4af 100755 --- a/hack/update-codegen.sh +++ b/hack/update-codegen.sh @@ -22,6 +22,6 @@ SCRIPT_ROOT=$(dirname ${BASH_SOURCE})/.. 
CODEGEN_PKG=${CODEGEN_PKG:-$(cd ${SCRIPT_ROOT}; ls -d -1 ./vendor/k8s.io/code-generator 2>/dev/null || echo ../code-generator)} ${CODEGEN_PKG}/generate-groups.sh "deepcopy,client,informer,lister" \ - github.com/argoproj/argo/pkg/client github.com/argoproj/argo/pkg/apis \ + github.com/cyrusbiotechnology/argo/pkg/client github.com/cyrusbiotechnology/argo/pkg/apis \ workflow:v1alpha1 \ --go-header-file ${SCRIPT_ROOT}/hack/custom-boilerplate.go.txt diff --git a/hack/update-manifests.sh b/hack/update-manifests.sh index b24787e3f489..e73c111ddc61 100755 --- a/hack/update-manifests.sh +++ b/hack/update-manifests.sh @@ -1,12 +1,21 @@ -#!/bin/sh +#!/bin/sh -x -e -IMAGE_NAMESPACE=${IMAGE_NAMESPACE:='argoproj'} -IMAGE_TAG=${IMAGE_TAG:='latest'} +SRCROOT="$( CDPATH='' cd -- "$(dirname "$0")/.." && pwd -P )" +AUTOGENMSG="# This is an auto-generated file. DO NOT EDIT" -autogen_warning="# This is an auto-generated file. DO NOT EDIT" +IMAGE_NAMESPACE="${IMAGE_NAMESPACE:-argoproj}" +IMAGE_TAG="${IMAGE_TAG:-latest}" -echo $autogen_warning > manifests/install.yaml -kustomize build manifests/cluster-install >> manifests/install.yaml +cd ${SRCROOT}/manifests/base && kustomize edit set image \ + argoproj/workflow-controller=${IMAGE_NAMESPACE}/workflow-controller:${IMAGE_TAG} \ + argoproj/argoui=${IMAGE_NAMESPACE}/argoui:${IMAGE_TAG} -echo $autogen_warning > manifests/namespace-install.yaml -kustomize build manifests/namespace-install >> manifests/namespace-install.yaml +echo "${AUTOGENMSG}" > "${SRCROOT}/manifests/install.yaml" +kustomize build "${SRCROOT}/manifests/cluster-install" >> "${SRCROOT}/manifests/install.yaml" +sed -i.bak "s@- .*/argoexec:.*@- ${IMAGE_NAMESPACE}/argoexec:${IMAGE_TAG}@" "${SRCROOT}/manifests/install.yaml" +rm -f "${SRCROOT}/manifests/install.yaml.bak" + +echo "${AUTOGENMSG}" > "${SRCROOT}/manifests/namespace-install.yaml" +kustomize build "${SRCROOT}/manifests/namespace-install" >> "${SRCROOT}/manifests/namespace-install.yaml" +sed -i.bak "s@- .*/argoexec:.*@- ${IMAGE_NAMESPACE}/argoexec:${IMAGE_TAG}@" "${SRCROOT}/manifests/namespace-install.yaml" +rm -f "${SRCROOT}/manifests/namespace-install.yaml.bak" diff --git a/hack/update-openapigen.sh b/hack/update-openapigen.sh index 2244ba3bffc5..3e16e7f479df 100755 --- a/hack/update-openapigen.sh +++ b/hack/update-openapigen.sh @@ -10,7 +10,7 @@ VERSION="v1alpha1" go run ${CODEGEN_PKG}/cmd/openapi-gen/main.go \ --go-header-file ${PROJECT_ROOT}/hack/custom-boilerplate.go.txt \ - --input-dirs github.com/argoproj/argo/pkg/apis/workflow/${VERSION} \ - --output-package github.com/argoproj/argo/pkg/apis/workflow/${VERSION} \ + --input-dirs github.com/cyrusbiotechnology/argo/pkg/apis/workflow/${VERSION} \ + --output-package github.com/cyrusbiotechnology/argo/pkg/apis/workflow/${VERSION} \ $@ diff --git a/hack/update-ssh-known-hosts.sh b/hack/update-ssh-known-hosts.sh new file mode 100755 index 000000000000..aa74c6489add --- /dev/null +++ b/hack/update-ssh-known-hosts.sh @@ -0,0 +1,24 @@ +#!/bin/bash + +set -e + +KNOWN_HOSTS_FILE=$(dirname "$0")/ssh_known_hosts +HEADER="# This file was automatically generated. 
DO NOT EDIT" +echo "$HEADER" > $KNOWN_HOSTS_FILE +ssh-keyscan github.com gitlab.com bitbucket.org ssh.dev.azure.com vs-ssh.visualstudio.com | sort -u >> $KNOWN_HOSTS_FILE +chmod 0644 $KNOWN_HOSTS_FILE + +# Public SSH keys can be verified at the following URLs: +# - github.com: https://help.github.com/articles/github-s-ssh-key-fingerprints/ +# - gitlab.com: https://docs.gitlab.com/ee/user/gitlab_com/#ssh-host-keys-fingerprints +# - bitbucket.org: https://confluence.atlassian.com/bitbucket/ssh-keys-935365775.html +# - ssh.dev.azure.com, vs-ssh.visualstudio.com: https://docs.microsoft.com/en-us/azure/devops/repos/git/use-ssh-keys-to-authenticate?view=azure-devops +diff - <(ssh-keygen -l -f $KNOWN_HOSTS_FILE | sort -k 3) < + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/pkg/apis/workflow/v1alpha1/openapi_generated.go b/pkg/apis/workflow/v1alpha1/openapi_generated.go index d7401fb91e99..48e4013a7fe9 100644 --- a/pkg/apis/workflow/v1alpha1/openapi_generated.go +++ b/pkg/apis/workflow/v1alpha1/openapi_generated.go @@ -13,38 +13,46 @@ import ( func GetOpenAPIDefinitions(ref common.ReferenceCallback) map[string]common.OpenAPIDefinition { return map[string]common.OpenAPIDefinition{ - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.ArchiveStrategy": schema_pkg_apis_workflow_v1alpha1_ArchiveStrategy(ref), - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Arguments": schema_pkg_apis_workflow_v1alpha1_Arguments(ref), - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Artifact": schema_pkg_apis_workflow_v1alpha1_Artifact(ref), - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.ArtifactLocation": schema_pkg_apis_workflow_v1alpha1_ArtifactLocation(ref), - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.ArtifactoryArtifact": schema_pkg_apis_workflow_v1alpha1_ArtifactoryArtifact(ref), - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.ArtifactoryAuth": schema_pkg_apis_workflow_v1alpha1_ArtifactoryAuth(ref), - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.DAGTask": schema_pkg_apis_workflow_v1alpha1_DAGTask(ref), - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.DAGTemplate": schema_pkg_apis_workflow_v1alpha1_DAGTemplate(ref), - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.GitArtifact": schema_pkg_apis_workflow_v1alpha1_GitArtifact(ref), - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.HTTPArtifact": schema_pkg_apis_workflow_v1alpha1_HTTPArtifact(ref), - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Inputs": schema_pkg_apis_workflow_v1alpha1_Inputs(ref), - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Item": schema_pkg_apis_workflow_v1alpha1_Item(ref), - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Metadata": schema_pkg_apis_workflow_v1alpha1_Metadata(ref), - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.NoneStrategy": schema_pkg_apis_workflow_v1alpha1_NoneStrategy(ref), - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Outputs": schema_pkg_apis_workflow_v1alpha1_Outputs(ref), - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Parameter": schema_pkg_apis_workflow_v1alpha1_Parameter(ref), - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.RawArtifact": schema_pkg_apis_workflow_v1alpha1_RawArtifact(ref), - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.ResourceTemplate": schema_pkg_apis_workflow_v1alpha1_ResourceTemplate(ref), 
- "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.RetryStrategy": schema_pkg_apis_workflow_v1alpha1_RetryStrategy(ref), - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.S3Artifact": schema_pkg_apis_workflow_v1alpha1_S3Artifact(ref), - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.S3Bucket": schema_pkg_apis_workflow_v1alpha1_S3Bucket(ref), - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.ScriptTemplate": schema_pkg_apis_workflow_v1alpha1_ScriptTemplate(ref), - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Sequence": schema_pkg_apis_workflow_v1alpha1_Sequence(ref), - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Sidecar": schema_pkg_apis_workflow_v1alpha1_Sidecar(ref), - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.SuspendTemplate": schema_pkg_apis_workflow_v1alpha1_SuspendTemplate(ref), - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.TarStrategy": schema_pkg_apis_workflow_v1alpha1_TarStrategy(ref), - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Template": schema_pkg_apis_workflow_v1alpha1_Template(ref), - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.ValueFrom": schema_pkg_apis_workflow_v1alpha1_ValueFrom(ref), - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Workflow": schema_pkg_apis_workflow_v1alpha1_Workflow(ref), - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.WorkflowList": schema_pkg_apis_workflow_v1alpha1_WorkflowList(ref), - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.WorkflowSpec": schema_pkg_apis_workflow_v1alpha1_WorkflowSpec(ref), - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.WorkflowStep": schema_pkg_apis_workflow_v1alpha1_WorkflowStep(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.ArchiveStrategy": schema_pkg_apis_workflow_v1alpha1_ArchiveStrategy(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Arguments": schema_pkg_apis_workflow_v1alpha1_Arguments(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Artifact": schema_pkg_apis_workflow_v1alpha1_Artifact(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.ArtifactLocation": schema_pkg_apis_workflow_v1alpha1_ArtifactLocation(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.ArtifactoryArtifact": schema_pkg_apis_workflow_v1alpha1_ArtifactoryArtifact(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.ArtifactoryAuth": schema_pkg_apis_workflow_v1alpha1_ArtifactoryAuth(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.ContinueOn": schema_pkg_apis_workflow_v1alpha1_ContinueOn(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.DAGTask": schema_pkg_apis_workflow_v1alpha1_DAGTask(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.DAGTemplate": schema_pkg_apis_workflow_v1alpha1_DAGTemplate(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.ExceptionCondition": schema_pkg_apis_workflow_v1alpha1_ExceptionCondition(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.ExceptionResult": schema_pkg_apis_workflow_v1alpha1_ExceptionResult(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.GCSArtifact": schema_pkg_apis_workflow_v1alpha1_GCSArtifact(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.GCSBucket": schema_pkg_apis_workflow_v1alpha1_GCSBucket(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.GitArtifact": schema_pkg_apis_workflow_v1alpha1_GitArtifact(ref), 
+ "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.HDFSArtifact": schema_pkg_apis_workflow_v1alpha1_HDFSArtifact(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.HDFSConfig": schema_pkg_apis_workflow_v1alpha1_HDFSConfig(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.HDFSKrbConfig": schema_pkg_apis_workflow_v1alpha1_HDFSKrbConfig(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.HTTPArtifact": schema_pkg_apis_workflow_v1alpha1_HTTPArtifact(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Inputs": schema_pkg_apis_workflow_v1alpha1_Inputs(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Item": schema_pkg_apis_workflow_v1alpha1_Item(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Metadata": schema_pkg_apis_workflow_v1alpha1_Metadata(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.NoneStrategy": schema_pkg_apis_workflow_v1alpha1_NoneStrategy(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Outputs": schema_pkg_apis_workflow_v1alpha1_Outputs(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Parameter": schema_pkg_apis_workflow_v1alpha1_Parameter(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.RawArtifact": schema_pkg_apis_workflow_v1alpha1_RawArtifact(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.ResourceTemplate": schema_pkg_apis_workflow_v1alpha1_ResourceTemplate(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.RetryStrategy": schema_pkg_apis_workflow_v1alpha1_RetryStrategy(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.S3Artifact": schema_pkg_apis_workflow_v1alpha1_S3Artifact(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.S3Bucket": schema_pkg_apis_workflow_v1alpha1_S3Bucket(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.ScriptTemplate": schema_pkg_apis_workflow_v1alpha1_ScriptTemplate(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Sequence": schema_pkg_apis_workflow_v1alpha1_Sequence(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.SuspendTemplate": schema_pkg_apis_workflow_v1alpha1_SuspendTemplate(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.TarStrategy": schema_pkg_apis_workflow_v1alpha1_TarStrategy(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Template": schema_pkg_apis_workflow_v1alpha1_Template(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.UserContainer": schema_pkg_apis_workflow_v1alpha1_UserContainer(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.ValueFrom": schema_pkg_apis_workflow_v1alpha1_ValueFrom(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Workflow": schema_pkg_apis_workflow_v1alpha1_Workflow(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.WorkflowList": schema_pkg_apis_workflow_v1alpha1_WorkflowList(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.WorkflowSpec": schema_pkg_apis_workflow_v1alpha1_WorkflowSpec(ref), + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.WorkflowStep": schema_pkg_apis_workflow_v1alpha1_WorkflowStep(ref), } } @@ -56,19 +64,19 @@ func schema_pkg_apis_workflow_v1alpha1_ArchiveStrategy(ref common.ReferenceCallb Properties: map[string]spec.Schema{ "tar": { SchemaProps: 
spec.SchemaProps{ - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.TarStrategy"), + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.TarStrategy"), }, }, "none": { SchemaProps: spec.SchemaProps{ - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.NoneStrategy"), + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.NoneStrategy"), }, }, }, }, }, Dependencies: []string{ - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.NoneStrategy", "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.TarStrategy"}, + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.NoneStrategy", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.TarStrategy"}, } } @@ -85,7 +93,7 @@ func schema_pkg_apis_workflow_v1alpha1_Arguments(ref common.ReferenceCallback) c Items: &spec.SchemaOrArray{ Schema: &spec.Schema{ SchemaProps: spec.SchemaProps{ - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Parameter"), + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Parameter"), }, }, }, @@ -98,7 +106,7 @@ func schema_pkg_apis_workflow_v1alpha1_Arguments(ref common.ReferenceCallback) c Items: &spec.SchemaOrArray{ Schema: &spec.Schema{ SchemaProps: spec.SchemaProps{ - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Artifact"), + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Artifact"), }, }, }, @@ -108,7 +116,7 @@ func schema_pkg_apis_workflow_v1alpha1_Arguments(ref common.ReferenceCallback) c }, }, Dependencies: []string{ - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Artifact", "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Parameter"}, + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Artifact", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Parameter"}, } } @@ -156,31 +164,43 @@ func schema_pkg_apis_workflow_v1alpha1_Artifact(ref common.ReferenceCallback) co "s3": { SchemaProps: spec.SchemaProps{ Description: "S3 contains S3 artifact location details", - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.S3Artifact"), + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.S3Artifact"), }, }, "git": { SchemaProps: spec.SchemaProps{ Description: "Git contains git artifact location details", - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.GitArtifact"), + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.GitArtifact"), }, }, "http": { SchemaProps: spec.SchemaProps{ Description: "HTTP contains HTTP artifact location details", - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.HTTPArtifact"), + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.HTTPArtifact"), }, }, "artifactory": { SchemaProps: spec.SchemaProps{ Description: "Artifactory contains artifactory artifact location details", - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.ArtifactoryArtifact"), + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.ArtifactoryArtifact"), + }, + }, + "hdfs": { + SchemaProps: spec.SchemaProps{ + Description: "HDFS contains HDFS artifact location details", + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.HDFSArtifact"), }, }, "raw": { SchemaProps: spec.SchemaProps{ Description: "Raw contains raw artifact location details", - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.RawArtifact"), + Ref: 
ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.RawArtifact"), + }, + }, + "gcs": { + SchemaProps: spec.SchemaProps{ + Description: "GCS contains GCS artifact location details", + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.GCSArtifact"), }, }, "globalName": { @@ -193,7 +213,14 @@ func schema_pkg_apis_workflow_v1alpha1_Artifact(ref common.ReferenceCallback) co "archive": { SchemaProps: spec.SchemaProps{ Description: "Archive controls how the artifact will be saved to the artifact repository.", - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.ArchiveStrategy"), + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.ArchiveStrategy"), + }, + }, + "optional": { + SchemaProps: spec.SchemaProps{ + Description: "Make Artifacts optional, if Artifacts doesn't generate or exist", + Type: []string{"boolean"}, + Format: "", }, }, }, @@ -201,7 +228,7 @@ func schema_pkg_apis_workflow_v1alpha1_Artifact(ref common.ReferenceCallback) co }, }, Dependencies: []string{ - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.ArchiveStrategy", "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.ArtifactoryArtifact", "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.GitArtifact", "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.HTTPArtifact", "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.RawArtifact", "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.S3Artifact"}, + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.ArchiveStrategy", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.ArtifactoryArtifact", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.GCSArtifact", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.GitArtifact", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.HDFSArtifact", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.HTTPArtifact", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.RawArtifact", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.S3Artifact"}, } } @@ -221,38 +248,50 @@ func schema_pkg_apis_workflow_v1alpha1_ArtifactLocation(ref common.ReferenceCall "s3": { SchemaProps: spec.SchemaProps{ Description: "S3 contains S3 artifact location details", - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.S3Artifact"), + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.S3Artifact"), }, }, "git": { SchemaProps: spec.SchemaProps{ Description: "Git contains git artifact location details", - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.GitArtifact"), + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.GitArtifact"), }, }, "http": { SchemaProps: spec.SchemaProps{ Description: "HTTP contains HTTP artifact location details", - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.HTTPArtifact"), + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.HTTPArtifact"), }, }, "artifactory": { SchemaProps: spec.SchemaProps{ Description: "Artifactory contains artifactory artifact location details", - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.ArtifactoryArtifact"), + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.ArtifactoryArtifact"), + }, + }, + "hdfs": { + SchemaProps: spec.SchemaProps{ + Description: "HDFS contains HDFS artifact location details", + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.HDFSArtifact"), }, }, "raw": { 
SchemaProps: spec.SchemaProps{ Description: "Raw contains raw artifact location details", - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.RawArtifact"), + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.RawArtifact"), + }, + }, + "gcs": { + SchemaProps: spec.SchemaProps{ + Description: "GCS contains GCS artifact location details", + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.GCSArtifact"), }, }, }, }, }, Dependencies: []string{ - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.ArtifactoryArtifact", "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.GitArtifact", "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.HTTPArtifact", "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.RawArtifact", "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.S3Artifact"}, + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.ArtifactoryArtifact", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.GCSArtifact", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.GitArtifact", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.HDFSArtifact", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.HTTPArtifact", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.RawArtifact", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.S3Artifact"}, } } @@ -316,6 +355,31 @@ func schema_pkg_apis_workflow_v1alpha1_ArtifactoryAuth(ref common.ReferenceCallb } } +func schema_pkg_apis_workflow_v1alpha1_ContinueOn(ref common.ReferenceCallback) common.OpenAPIDefinition { + return common.OpenAPIDefinition{ + Schema: spec.Schema{ + SchemaProps: spec.SchemaProps{ + Description: "ContinueOn defines if a workflow should continue even if a task or step fails/errors. 
It can be specified if the workflow should continue when the pod errors, fails or both.", + Properties: map[string]spec.Schema{ + "error": { + SchemaProps: spec.SchemaProps{ + Type: []string{"boolean"}, + Format: "", + }, + }, + "failed": { + SchemaProps: spec.SchemaProps{ + Type: []string{"boolean"}, + Format: "", + }, + }, + }, + }, + }, + Dependencies: []string{}, + } +} + func schema_pkg_apis_workflow_v1alpha1_DAGTask(ref common.ReferenceCallback) common.OpenAPIDefinition { return common.OpenAPIDefinition{ Schema: spec.Schema{ @@ -339,7 +403,7 @@ func schema_pkg_apis_workflow_v1alpha1_DAGTask(ref common.ReferenceCallback) com "arguments": { SchemaProps: spec.SchemaProps{ Description: "Arguments are the parameter and artifact arguments to the template", - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Arguments"), + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Arguments"), }, }, "dependencies": { @@ -363,7 +427,7 @@ func schema_pkg_apis_workflow_v1alpha1_DAGTask(ref common.ReferenceCallback) com Items: &spec.SchemaOrArray{ Schema: &spec.Schema{ SchemaProps: spec.SchemaProps{ - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Item"), + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Item"), }, }, }, @@ -379,7 +443,7 @@ func schema_pkg_apis_workflow_v1alpha1_DAGTask(ref common.ReferenceCallback) com "withSequence": { SchemaProps: spec.SchemaProps{ Description: "WithSequence expands a task into a numeric sequence", - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Sequence"), + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Sequence"), }, }, "when": { @@ -389,12 +453,18 @@ func schema_pkg_apis_workflow_v1alpha1_DAGTask(ref common.ReferenceCallback) com Format: "", }, }, + "continueOn": { + SchemaProps: spec.SchemaProps{ + Description: "ContinueOn makes argo to proceed with the following step even if this step fails. 
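// Hypothetical sketch, not part of this patch: the ContinueOn schema and the
// "continueOn" property added to DAGTask above describe two optional booleans,
// "error" and "failed". The local mirror struct below only illustrates the JSON
// shape those properties imply; it is not the real v1alpha1 Go type.
package main

import (
	"encoding/json"
	"fmt"
)

// continueOn mirrors the OpenAPI properties "error" and "failed" shown above.
type continueOn struct {
	Error  bool `json:"error,omitempty"`
	Failed bool `json:"failed,omitempty"`
}

func main() {
	// A task that tolerates failed (but not errored) pods would serialize as:
	b, _ := json.Marshal(continueOn{Failed: true})
	fmt.Println(string(b)) // {"failed":true}
}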
Errors and Failed states can be specified", + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.ContinueOn"), + }, + }, }, Required: []string{"name", "template"}, }, }, Dependencies: []string{ - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Arguments", "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Item", "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Sequence"}, + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Arguments", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.ContinueOn", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Item", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Sequence"}, } } @@ -418,65 +488,429 @@ func schema_pkg_apis_workflow_v1alpha1_DAGTemplate(ref common.ReferenceCallback) Items: &spec.SchemaOrArray{ Schema: &spec.Schema{ SchemaProps: spec.SchemaProps{ - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.DAGTask"), + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.DAGTask"), }, }, }, }, }, }, - Required: []string{"tasks"}, + Required: []string{"tasks"}, + }, + }, + Dependencies: []string{ + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.DAGTask"}, + } +} + +func schema_pkg_apis_workflow_v1alpha1_ExceptionCondition(ref common.ReferenceCallback) common.OpenAPIDefinition { + return common.OpenAPIDefinition{ + Schema: spec.Schema{ + SchemaProps: spec.SchemaProps{ + Description: "ExceptionCondition is a container for defining an error or warning rule", + Properties: map[string]spec.Schema{ + "name": { + SchemaProps: spec.SchemaProps{ + Type: []string{"string"}, + Format: "", + }, + }, + "patternMatched": { + SchemaProps: spec.SchemaProps{ + Type: []string{"string"}, + Format: "", + }, + }, + "patternUnmatched": { + SchemaProps: spec.SchemaProps{ + Type: []string{"string"}, + Format: "", + }, + }, + "source": { + SchemaProps: spec.SchemaProps{ + Type: []string{"string"}, + Format: "", + }, + }, + "message": { + SchemaProps: spec.SchemaProps{ + Type: []string{"string"}, + Format: "", + }, + }, + }, + Required: []string{"name"}, + }, + }, + Dependencies: []string{}, + } +} + +func schema_pkg_apis_workflow_v1alpha1_ExceptionResult(ref common.ReferenceCallback) common.OpenAPIDefinition { + return common.OpenAPIDefinition{ + Schema: spec.Schema{ + SchemaProps: spec.SchemaProps{ + Description: "ExceptionResult contains the results on an extended error or warning condition evaluation", + Properties: map[string]spec.Schema{ + "name": { + SchemaProps: spec.SchemaProps{ + Type: []string{"string"}, + Format: "", + }, + }, + "message": { + SchemaProps: spec.SchemaProps{ + Type: []string{"string"}, + Format: "", + }, + }, + "podName": { + SchemaProps: spec.SchemaProps{ + Type: []string{"string"}, + Format: "", + }, + }, + "stepName": { + SchemaProps: spec.SchemaProps{ + Type: []string{"string"}, + Format: "", + }, + }, + }, + Required: []string{"name", "message", "podName", "stepName"}, + }, + }, + Dependencies: []string{}, + } +} + +func schema_pkg_apis_workflow_v1alpha1_GCSArtifact(ref common.ReferenceCallback) common.OpenAPIDefinition { + return common.OpenAPIDefinition{ + Schema: spec.Schema{ + SchemaProps: spec.SchemaProps{ + Description: "GCSArtifact is the location of a GCS artifact", + Properties: map[string]spec.Schema{ + "bucket": { + SchemaProps: spec.SchemaProps{ + Type: []string{"string"}, + Format: "", + }, + }, + "credentialsSecret": { + SchemaProps: spec.SchemaProps{ + Ref: 
ref("k8s.io/api/core/v1.SecretKeySelector"), + }, + }, + "key": { + SchemaProps: spec.SchemaProps{ + Type: []string{"string"}, + Format: "", + }, + }, + }, + Required: []string{"bucket", "credentialsSecret", "key"}, + }, + }, + Dependencies: []string{ + "k8s.io/api/core/v1.SecretKeySelector"}, + } +} + +func schema_pkg_apis_workflow_v1alpha1_GCSBucket(ref common.ReferenceCallback) common.OpenAPIDefinition { + return common.OpenAPIDefinition{ + Schema: spec.Schema{ + SchemaProps: spec.SchemaProps{ + Description: "GCSBucket contains the access information required for acting with a GCS bucket", + Properties: map[string]spec.Schema{ + "bucket": { + SchemaProps: spec.SchemaProps{ + Type: []string{"string"}, + Format: "", + }, + }, + "credentialsSecret": { + SchemaProps: spec.SchemaProps{ + Ref: ref("k8s.io/api/core/v1.SecretKeySelector"), + }, + }, + }, + Required: []string{"bucket", "credentialsSecret"}, + }, + }, + Dependencies: []string{ + "k8s.io/api/core/v1.SecretKeySelector"}, + } +} + +func schema_pkg_apis_workflow_v1alpha1_GitArtifact(ref common.ReferenceCallback) common.OpenAPIDefinition { + return common.OpenAPIDefinition{ + Schema: spec.Schema{ + SchemaProps: spec.SchemaProps{ + Description: "GitArtifact is the location of an git artifact", + Properties: map[string]spec.Schema{ + "repo": { + SchemaProps: spec.SchemaProps{ + Description: "Repo is the git repository", + Type: []string{"string"}, + Format: "", + }, + }, + "revision": { + SchemaProps: spec.SchemaProps{ + Description: "Revision is the git commit, tag, branch to checkout", + Type: []string{"string"}, + Format: "", + }, + }, + "usernameSecret": { + SchemaProps: spec.SchemaProps{ + Description: "UsernameSecret is the secret selector to the repository username", + Ref: ref("k8s.io/api/core/v1.SecretKeySelector"), + }, + }, + "passwordSecret": { + SchemaProps: spec.SchemaProps{ + Description: "PasswordSecret is the secret selector to the repository password", + Ref: ref("k8s.io/api/core/v1.SecretKeySelector"), + }, + }, + "sshPrivateKeySecret": { + SchemaProps: spec.SchemaProps{ + Description: "SSHPrivateKeySecret is the secret selector to the repository ssh private key", + Ref: ref("k8s.io/api/core/v1.SecretKeySelector"), + }, + }, + "insecureIgnoreHostKey": { + SchemaProps: spec.SchemaProps{ + Description: "InsecureIgnoreHostKey disables SSH strict host key checking during git clone", + Type: []string{"boolean"}, + Format: "", + }, + }, + }, + Required: []string{"repo"}, + }, + }, + Dependencies: []string{ + "k8s.io/api/core/v1.SecretKeySelector"}, + } +} + +func schema_pkg_apis_workflow_v1alpha1_HDFSArtifact(ref common.ReferenceCallback) common.OpenAPIDefinition { + return common.OpenAPIDefinition{ + Schema: spec.Schema{ + SchemaProps: spec.SchemaProps{ + Description: "HDFSArtifact is the location of an HDFS artifact", + Properties: map[string]spec.Schema{ + "krbCCacheSecret": { + SchemaProps: spec.SchemaProps{ + Description: "KrbCCacheSecret is the secret selector for Kerberos ccache Either ccache or keytab can be set to use Kerberos.", + Ref: ref("k8s.io/api/core/v1.SecretKeySelector"), + }, + }, + "krbKeytabSecret": { + SchemaProps: spec.SchemaProps{ + Description: "KrbKeytabSecret is the secret selector for Kerberos keytab Either ccache or keytab can be set to use Kerberos.", + Ref: ref("k8s.io/api/core/v1.SecretKeySelector"), + }, + }, + "krbUsername": { + SchemaProps: spec.SchemaProps{ + Description: "KrbUsername is the Kerberos username used with Kerberos keytab It must be set if keytab is used.", + Type: 
[]string{"string"}, + Format: "", + }, + }, + "krbRealm": { + SchemaProps: spec.SchemaProps{ + Description: "KrbRealm is the Kerberos realm used with Kerberos keytab It must be set if keytab is used.", + Type: []string{"string"}, + Format: "", + }, + }, + "krbConfigConfigMap": { + SchemaProps: spec.SchemaProps{ + Description: "KrbConfig is the configmap selector for Kerberos config as string It must be set if either ccache or keytab is used.", + Ref: ref("k8s.io/api/core/v1.ConfigMapKeySelector"), + }, + }, + "krbServicePrincipalName": { + SchemaProps: spec.SchemaProps{ + Description: "KrbServicePrincipalName is the principal name of Kerberos service It must be set if either ccache or keytab is used.", + Type: []string{"string"}, + Format: "", + }, + }, + "addresses": { + SchemaProps: spec.SchemaProps{ + Description: "Addresses is accessible addresses of HDFS name nodes", + Type: []string{"array"}, + Items: &spec.SchemaOrArray{ + Schema: &spec.Schema{ + SchemaProps: spec.SchemaProps{ + Type: []string{"string"}, + Format: "", + }, + }, + }, + }, + }, + "hdfsUser": { + SchemaProps: spec.SchemaProps{ + Description: "HDFSUser is the user to access HDFS file system. It is ignored if either ccache or keytab is used.", + Type: []string{"string"}, + Format: "", + }, + }, + "path": { + SchemaProps: spec.SchemaProps{ + Description: "Path is a file path in HDFS", + Type: []string{"string"}, + Format: "", + }, + }, + "force": { + SchemaProps: spec.SchemaProps{ + Description: "Force copies a file forcibly even if it exists (default: false)", + Type: []string{"boolean"}, + Format: "", + }, + }, + }, + Required: []string{"addresses", "path"}, + }, + }, + Dependencies: []string{ + "k8s.io/api/core/v1.ConfigMapKeySelector", "k8s.io/api/core/v1.SecretKeySelector"}, + } +} + +func schema_pkg_apis_workflow_v1alpha1_HDFSConfig(ref common.ReferenceCallback) common.OpenAPIDefinition { + return common.OpenAPIDefinition{ + Schema: spec.Schema{ + SchemaProps: spec.SchemaProps{ + Description: "HDFSConfig is configurations for HDFS", + Properties: map[string]spec.Schema{ + "krbCCacheSecret": { + SchemaProps: spec.SchemaProps{ + Description: "KrbCCacheSecret is the secret selector for Kerberos ccache Either ccache or keytab can be set to use Kerberos.", + Ref: ref("k8s.io/api/core/v1.SecretKeySelector"), + }, + }, + "krbKeytabSecret": { + SchemaProps: spec.SchemaProps{ + Description: "KrbKeytabSecret is the secret selector for Kerberos keytab Either ccache or keytab can be set to use Kerberos.", + Ref: ref("k8s.io/api/core/v1.SecretKeySelector"), + }, + }, + "krbUsername": { + SchemaProps: spec.SchemaProps{ + Description: "KrbUsername is the Kerberos username used with Kerberos keytab It must be set if keytab is used.", + Type: []string{"string"}, + Format: "", + }, + }, + "krbRealm": { + SchemaProps: spec.SchemaProps{ + Description: "KrbRealm is the Kerberos realm used with Kerberos keytab It must be set if keytab is used.", + Type: []string{"string"}, + Format: "", + }, + }, + "krbConfigConfigMap": { + SchemaProps: spec.SchemaProps{ + Description: "KrbConfig is the configmap selector for Kerberos config as string It must be set if either ccache or keytab is used.", + Ref: ref("k8s.io/api/core/v1.ConfigMapKeySelector"), + }, + }, + "krbServicePrincipalName": { + SchemaProps: spec.SchemaProps{ + Description: "KrbServicePrincipalName is the principal name of Kerberos service It must be set if either ccache or keytab is used.", + Type: []string{"string"}, + Format: "", + }, + }, + "addresses": { + SchemaProps: 
spec.SchemaProps{ + Description: "Addresses is accessible addresses of HDFS name nodes", + Type: []string{"array"}, + Items: &spec.SchemaOrArray{ + Schema: &spec.Schema{ + SchemaProps: spec.SchemaProps{ + Type: []string{"string"}, + Format: "", + }, + }, + }, + }, + }, + "hdfsUser": { + SchemaProps: spec.SchemaProps{ + Description: "HDFSUser is the user to access HDFS file system. It is ignored if either ccache or keytab is used.", + Type: []string{"string"}, + Format: "", + }, + }, + }, + Required: []string{"addresses"}, }, }, Dependencies: []string{ - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.DAGTask"}, + "k8s.io/api/core/v1.ConfigMapKeySelector", "k8s.io/api/core/v1.SecretKeySelector"}, } } -func schema_pkg_apis_workflow_v1alpha1_GitArtifact(ref common.ReferenceCallback) common.OpenAPIDefinition { +func schema_pkg_apis_workflow_v1alpha1_HDFSKrbConfig(ref common.ReferenceCallback) common.OpenAPIDefinition { return common.OpenAPIDefinition{ Schema: spec.Schema{ SchemaProps: spec.SchemaProps{ - Description: "GitArtifact is the location of an git artifact", + Description: "HDFSKrbConfig is auth configurations for Kerberos", Properties: map[string]spec.Schema{ - "repo": { + "krbCCacheSecret": { SchemaProps: spec.SchemaProps{ - Description: "Repo is the git repository", - Type: []string{"string"}, - Format: "", + Description: "KrbCCacheSecret is the secret selector for Kerberos ccache Either ccache or keytab can be set to use Kerberos.", + Ref: ref("k8s.io/api/core/v1.SecretKeySelector"), }, }, - "revision": { + "krbKeytabSecret": { SchemaProps: spec.SchemaProps{ - Description: "Revision is the git commit, tag, branch to checkout", + Description: "KrbKeytabSecret is the secret selector for Kerberos keytab Either ccache or keytab can be set to use Kerberos.", + Ref: ref("k8s.io/api/core/v1.SecretKeySelector"), + }, + }, + "krbUsername": { + SchemaProps: spec.SchemaProps{ + Description: "KrbUsername is the Kerberos username used with Kerberos keytab It must be set if keytab is used.", Type: []string{"string"}, Format: "", }, }, - "usernameSecret": { + "krbRealm": { SchemaProps: spec.SchemaProps{ - Description: "UsernameSecret is the secret selector to the repository username", - Ref: ref("k8s.io/api/core/v1.SecretKeySelector"), + Description: "KrbRealm is the Kerberos realm used with Kerberos keytab It must be set if keytab is used.", + Type: []string{"string"}, + Format: "", }, }, - "passwordSecret": { + "krbConfigConfigMap": { SchemaProps: spec.SchemaProps{ - Description: "PasswordSecret is the secret selector to the repository password", - Ref: ref("k8s.io/api/core/v1.SecretKeySelector"), + Description: "KrbConfig is the configmap selector for Kerberos config as string It must be set if either ccache or keytab is used.", + Ref: ref("k8s.io/api/core/v1.ConfigMapKeySelector"), }, }, - "sshPrivateKeySecret": { + "krbServicePrincipalName": { SchemaProps: spec.SchemaProps{ - Description: "SSHPrivateKeySecret is the secret selector to the repository ssh private key", - Ref: ref("k8s.io/api/core/v1.SecretKeySelector"), + Description: "KrbServicePrincipalName is the principal name of Kerberos service It must be set if either ccache or keytab is used.", + Type: []string{"string"}, + Format: "", }, }, }, - Required: []string{"repo"}, }, }, Dependencies: []string{ - "k8s.io/api/core/v1.SecretKeySelector"}, + "k8s.io/api/core/v1.ConfigMapKeySelector", "k8s.io/api/core/v1.SecretKeySelector"}, } } @@ -514,7 +948,7 @@ func schema_pkg_apis_workflow_v1alpha1_Inputs(ref common.ReferenceCallback) 
comm Items: &spec.SchemaOrArray{ Schema: &spec.Schema{ SchemaProps: spec.SchemaProps{ - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Parameter"), + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Parameter"), }, }, }, @@ -527,7 +961,7 @@ func schema_pkg_apis_workflow_v1alpha1_Inputs(ref common.ReferenceCallback) comm Items: &spec.SchemaOrArray{ Schema: &spec.Schema{ SchemaProps: spec.SchemaProps{ - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Artifact"), + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Artifact"), }, }, }, @@ -537,7 +971,7 @@ func schema_pkg_apis_workflow_v1alpha1_Inputs(ref common.ReferenceCallback) comm }, }, Dependencies: []string{ - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Artifact", "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Parameter"}, + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Artifact", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Parameter"}, } } @@ -617,7 +1051,7 @@ func schema_pkg_apis_workflow_v1alpha1_Outputs(ref common.ReferenceCallback) com Items: &spec.SchemaOrArray{ Schema: &spec.Schema{ SchemaProps: spec.SchemaProps{ - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Parameter"), + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Parameter"), }, }, }, @@ -630,7 +1064,7 @@ func schema_pkg_apis_workflow_v1alpha1_Outputs(ref common.ReferenceCallback) com Items: &spec.SchemaOrArray{ Schema: &spec.Schema{ SchemaProps: spec.SchemaProps{ - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Artifact"), + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Artifact"), }, }, }, @@ -647,7 +1081,7 @@ func schema_pkg_apis_workflow_v1alpha1_Outputs(ref common.ReferenceCallback) com }, }, Dependencies: []string{ - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Artifact", "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Parameter"}, + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Artifact", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Parameter"}, } } @@ -681,7 +1115,7 @@ func schema_pkg_apis_workflow_v1alpha1_Parameter(ref common.ReferenceCallback) c "valueFrom": { SchemaProps: spec.SchemaProps{ Description: "ValueFrom is the source for the output parameter's value", - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.ValueFrom"), + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.ValueFrom"), }, }, "globalName": { @@ -696,7 +1130,7 @@ func schema_pkg_apis_workflow_v1alpha1_Parameter(ref common.ReferenceCallback) c }, }, Dependencies: []string{ - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.ValueFrom"}, + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.ValueFrom"}, } } @@ -734,6 +1168,13 @@ func schema_pkg_apis_workflow_v1alpha1_ResourceTemplate(ref common.ReferenceCall Format: "", }, }, + "mergeStrategy": { + SchemaProps: spec.SchemaProps{ + Description: "MergeStrategy is the strategy used to merge a patch. It defaults to \"strategic\" Must be one of: strategic, merge, json", + Type: []string{"string"}, + Format: "", + }, + }, "manifest": { SchemaProps: spec.SchemaProps{ Description: "Manifest contains the kubernetes manifest", @@ -1008,7 +1449,7 @@ func schema_pkg_apis_workflow_v1alpha1_ScriptTemplate(ref common.ReferenceCallba }, "resources": { SchemaProps: spec.SchemaProps{ - Description: "Compute Resources required by this container. Cannot be updated. 
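// Hypothetical sketch, not part of this patch: the "mergeStrategy" field added to
// ResourceTemplate above accepts one of "strategic", "merge", or "json", defaulting
// to "strategic" when empty. The small helper below only restates that constraint;
// it is not the controller's real validation code.
package main

import "fmt"

// validMergeStrategy mirrors the allowed values listed in the schema description.
func validMergeStrategy(s string) bool {
	switch s {
	case "", "strategic", "merge", "json": // empty means the default, "strategic"
		return true
	}
	return false
}

func main() {
	for _, s := range []string{"", "merge", "jsonpatch"} {
		fmt.Printf("%q valid: %v\n", s, validMergeStrategy(s))
	}
}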
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/", + Description: "Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources", Ref: ref("k8s.io/api/core/v1.ResourceRequirements"), }, }, @@ -1140,44 +1581,308 @@ func schema_pkg_apis_workflow_v1alpha1_Sequence(ref common.ReferenceCallback) co Properties: map[string]spec.Schema{ "count": { SchemaProps: spec.SchemaProps{ - Description: "Count is number of elements in the sequence (default: 0). Not to be used with end", + Description: "Count is number of elements in the sequence (default: 0). Not to be used with end", + Type: []string{"string"}, + Format: "", + }, + }, + "start": { + SchemaProps: spec.SchemaProps{ + Description: "Number at which to start the sequence (default: 0)", + Type: []string{"string"}, + Format: "", + }, + }, + "end": { + SchemaProps: spec.SchemaProps{ + Description: "Number at which to end the sequence (default: 0). Not to be used with Count", + Type: []string{"string"}, + Format: "", + }, + }, + "format": { + SchemaProps: spec.SchemaProps{ + Description: "Format is a printf format string to format the value in the sequence", + Type: []string{"string"}, + Format: "", + }, + }, + }, + }, + }, + Dependencies: []string{}, + } +} + +func schema_pkg_apis_workflow_v1alpha1_SuspendTemplate(ref common.ReferenceCallback) common.OpenAPIDefinition { + return common.OpenAPIDefinition{ + Schema: spec.Schema{ + SchemaProps: spec.SchemaProps{ + Description: "SuspendTemplate is a template subtype to suspend a workflow at a predetermined point in time", + Properties: map[string]spec.Schema{}, + }, + }, + Dependencies: []string{}, + } +} + +func schema_pkg_apis_workflow_v1alpha1_TarStrategy(ref common.ReferenceCallback) common.OpenAPIDefinition { + return common.OpenAPIDefinition{ + Schema: spec.Schema{ + SchemaProps: spec.SchemaProps{ + Description: "TarStrategy will tar and gzip the file or directory when saving", + Properties: map[string]spec.Schema{}, + }, + }, + Dependencies: []string{}, + } +} + +func schema_pkg_apis_workflow_v1alpha1_Template(ref common.ReferenceCallback) common.OpenAPIDefinition { + return common.OpenAPIDefinition{ + Schema: spec.Schema{ + SchemaProps: spec.SchemaProps{ + Description: "Template is a reusable and composable unit of execution in a workflow", + Properties: map[string]spec.Schema{ + "name": { + SchemaProps: spec.SchemaProps{ + Description: "Name is the name of the template", + Type: []string{"string"}, + Format: "", + }, + }, + "inputs": { + SchemaProps: spec.SchemaProps{ + Description: "Inputs describe what inputs parameters and artifacts are supplied to this template", + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Inputs"), + }, + }, + "outputs": { + SchemaProps: spec.SchemaProps{ + Description: "Outputs describe the parameters and artifacts that this template produces", + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Outputs"), + }, + }, + "nodeSelector": { + SchemaProps: spec.SchemaProps{ + Description: "NodeSelector is a selector to schedule this step of the workflow to be run on the selected node(s). 
Overrides the selector set at the workflow level.", + Type: []string{"object"}, + AdditionalProperties: &spec.SchemaOrBool{ + Schema: &spec.Schema{ + SchemaProps: spec.SchemaProps{ + Type: []string{"string"}, + Format: "", + }, + }, + }, + }, + }, + "affinity": { + SchemaProps: spec.SchemaProps{ + Description: "Affinity sets the pod's scheduling constraints Overrides the affinity set at the workflow level (if any)", + Ref: ref("k8s.io/api/core/v1.Affinity"), + }, + }, + "metadata": { + SchemaProps: spec.SchemaProps{ + Description: "Metdata sets the pods's metadata, i.e. annotations and labels", + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Metadata"), + }, + }, + "daemon": { + SchemaProps: spec.SchemaProps{ + Description: "Deamon will allow a workflow to proceed to the next step so long as the container reaches readiness", + Type: []string{"boolean"}, + Format: "", + }, + }, + "steps": { + SchemaProps: spec.SchemaProps{ + Description: "Steps define a series of sequential/parallel workflow steps", + Type: []string{"array"}, + Items: &spec.SchemaOrArray{ + Schema: &spec.Schema{ + SchemaProps: spec.SchemaProps{ + Type: []string{"array"}, + Items: &spec.SchemaOrArray{ + Schema: &spec.Schema{ + SchemaProps: spec.SchemaProps{ + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.WorkflowStep"), + }, + }, + }, + }, + }, + }, + }, + }, + "container": { + SchemaProps: spec.SchemaProps{ + Description: "Container is the main container image to run in the pod", + Ref: ref("k8s.io/api/core/v1.Container"), + }, + }, + "script": { + SchemaProps: spec.SchemaProps{ + Description: "Script runs a portion of code against an interpreter", + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.ScriptTemplate"), + }, + }, + "resource": { + SchemaProps: spec.SchemaProps{ + Description: "Resource template subtype which can run k8s resources", + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.ResourceTemplate"), + }, + }, + "dag": { + SchemaProps: spec.SchemaProps{ + Description: "DAG template subtype which runs a DAG", + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.DAGTemplate"), + }, + }, + "suspend": { + SchemaProps: spec.SchemaProps{ + Description: "Suspend template subtype which can suspend a workflow when reaching the step", + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.SuspendTemplate"), + }, + }, + "volumes": { + SchemaProps: spec.SchemaProps{ + Description: "Volumes is a list of volumes that can be mounted by containers in a template.", + Type: []string{"array"}, + Items: &spec.SchemaOrArray{ + Schema: &spec.Schema{ + SchemaProps: spec.SchemaProps{ + Ref: ref("k8s.io/api/core/v1.Volume"), + }, + }, + }, + }, + }, + "initContainers": { + SchemaProps: spec.SchemaProps{ + Description: "InitContainers is a list of containers which run before the main container.", + Type: []string{"array"}, + Items: &spec.SchemaOrArray{ + Schema: &spec.Schema{ + SchemaProps: spec.SchemaProps{ + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.UserContainer"), + }, + }, + }, + }, + }, + "sidecars": { + SchemaProps: spec.SchemaProps{ + Description: "Sidecars is a list of containers which run alongside the main container Sidecars are automatically killed when the main container completes", + Type: []string{"array"}, + Items: &spec.SchemaOrArray{ + Schema: &spec.Schema{ + SchemaProps: spec.SchemaProps{ + Ref: 
ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.UserContainer"), + }, + }, + }, + }, + }, + "archiveLocation": { + SchemaProps: spec.SchemaProps{ + Description: "Location in which all files related to the step will be stored (logs, artifacts, etc...). Can be overridden by individual items in Outputs. If omitted, will use the default artifact repository location configured in the controller, appended with the / in the key.", + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.ArtifactLocation"), + }, + }, + "activeDeadlineSeconds": { + SchemaProps: spec.SchemaProps{ + Description: "Optional duration in seconds relative to the StartTime that the pod may be active on a node before the system actively tries to terminate the pod; value must be positive integer This field is only applicable to container and script templates.", + Type: []string{"integer"}, + Format: "int64", + }, + }, + "retryStrategy": { + SchemaProps: spec.SchemaProps{ + Description: "RetryStrategy describes how to retry a template when it fails", + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.RetryStrategy"), + }, + }, + "parallelism": { + SchemaProps: spec.SchemaProps{ + Description: "Parallelism limits the max total parallel pods that can execute at the same time within the boundaries of this template invocation. If additional steps/dag templates are invoked, the pods created by those templates will not be counted towards this total.", + Type: []string{"integer"}, + Format: "int64", + }, + }, + "tolerations": { + SchemaProps: spec.SchemaProps{ + Description: "Tolerations to apply to workflow pods.", + Type: []string{"array"}, + Items: &spec.SchemaOrArray{ + Schema: &spec.Schema{ + SchemaProps: spec.SchemaProps{ + Ref: ref("k8s.io/api/core/v1.Toleration"), + }, + }, + }, + }, + }, + "schedulerName": { + SchemaProps: spec.SchemaProps{ + Description: "If specified, the pod will be dispatched by specified scheduler. Or it will be dispatched by workflow scope scheduler if specified. If neither specified, the pod will be dispatched by default scheduler.", Type: []string{"string"}, Format: "", }, }, - "start": { + "priorityClassName": { SchemaProps: spec.SchemaProps{ - Description: "Number at which to start the sequence (default: 0)", + Description: "PriorityClassName to apply to workflow pods.", Type: []string{"string"}, Format: "", }, }, - "end": { + "priority": { SchemaProps: spec.SchemaProps{ - Description: "Number at which to end the sequence (default: 0). 
Not to be used with Count", - Type: []string{"string"}, - Format: "", + Description: "Priority to apply to workflow pods.", + Type: []string{"integer"}, + Format: "int32", }, }, - "format": { + "errors": { SchemaProps: spec.SchemaProps{ - Description: "Format is a printf format string to format the value in the sequence", - Type: []string{"string"}, - Format: "", + Type: []string{"array"}, + Items: &spec.SchemaOrArray{ + Schema: &spec.Schema{ + SchemaProps: spec.SchemaProps{ + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.ExceptionCondition"), + }, + }, + }, + }, + }, + "warnings": { + SchemaProps: spec.SchemaProps{ + Type: []string{"array"}, + Items: &spec.SchemaOrArray{ + Schema: &spec.Schema{ + SchemaProps: spec.SchemaProps{ + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.ExceptionCondition"), + }, + }, + }, }, }, }, + Required: []string{"name"}, }, }, - Dependencies: []string{}, + Dependencies: []string{ + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.ArtifactLocation", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.DAGTemplate", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.ExceptionCondition", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Inputs", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Metadata", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Outputs", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.ResourceTemplate", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.RetryStrategy", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.ScriptTemplate", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.SuspendTemplate", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.UserContainer", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.WorkflowStep", "k8s.io/api/core/v1.Affinity", "k8s.io/api/core/v1.Container", "k8s.io/api/core/v1.Toleration", "k8s.io/api/core/v1.Volume"}, } } -func schema_pkg_apis_workflow_v1alpha1_Sidecar(ref common.ReferenceCallback) common.OpenAPIDefinition { +func schema_pkg_apis_workflow_v1alpha1_UserContainer(ref common.ReferenceCallback) common.OpenAPIDefinition { return common.OpenAPIDefinition{ Schema: spec.Schema{ SchemaProps: spec.SchemaProps{ - Description: "Sidecar is a container which runs alongside the main container", + Description: "UserContainer is a container specified by a user.", Properties: map[string]spec.Schema{ "name": { SchemaProps: spec.SchemaProps{ @@ -1281,7 +1986,7 @@ func schema_pkg_apis_workflow_v1alpha1_Sidecar(ref common.ReferenceCallback) com }, "resources": { SchemaProps: spec.SchemaProps{ - Description: "Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/", + Description: "Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources", Ref: ref("k8s.io/api/core/v1.ResourceRequirements"), }, }, @@ -1391,7 +2096,7 @@ func schema_pkg_apis_workflow_v1alpha1_Sidecar(ref common.ReferenceCallback) com }, "mirrorVolumeMounts": { SchemaProps: spec.SchemaProps{ - Description: "MirrorVolumeMounts will mount the same volumes specified in the main container to the sidecar (including artifacts), at the same mountPaths. 
This enables dind daemon to partially see the same filesystem as the main container in order to use features such as docker volume binding", + Description: "MirrorVolumeMounts will mount the same volumes specified in the main container to the container (including artifacts), at the same mountPaths. This enables dind daemon to partially see the same filesystem as the main container in order to use features such as docker volume binding", Type: []string{"boolean"}, Format: "", }, @@ -1405,199 +2110,6 @@ func schema_pkg_apis_workflow_v1alpha1_Sidecar(ref common.ReferenceCallback) com } } -func schema_pkg_apis_workflow_v1alpha1_SuspendTemplate(ref common.ReferenceCallback) common.OpenAPIDefinition { - return common.OpenAPIDefinition{ - Schema: spec.Schema{ - SchemaProps: spec.SchemaProps{ - Description: "SuspendTemplate is a template subtype to suspend a workflow at a predetermined point in time", - Properties: map[string]spec.Schema{}, - }, - }, - Dependencies: []string{}, - } -} - -func schema_pkg_apis_workflow_v1alpha1_TarStrategy(ref common.ReferenceCallback) common.OpenAPIDefinition { - return common.OpenAPIDefinition{ - Schema: spec.Schema{ - SchemaProps: spec.SchemaProps{ - Description: "TarStrategy will tar and gzip the file or directory when saving", - Properties: map[string]spec.Schema{}, - }, - }, - Dependencies: []string{}, - } -} - -func schema_pkg_apis_workflow_v1alpha1_Template(ref common.ReferenceCallback) common.OpenAPIDefinition { - return common.OpenAPIDefinition{ - Schema: spec.Schema{ - SchemaProps: spec.SchemaProps{ - Description: "Template is a reusable and composable unit of execution in a workflow", - Properties: map[string]spec.Schema{ - "name": { - SchemaProps: spec.SchemaProps{ - Description: "Name is the name of the template", - Type: []string{"string"}, - Format: "", - }, - }, - "inputs": { - SchemaProps: spec.SchemaProps{ - Description: "Inputs describe what inputs parameters and artifacts are supplied to this template", - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Inputs"), - }, - }, - "outputs": { - SchemaProps: spec.SchemaProps{ - Description: "Outputs describe the parameters and artifacts that this template produces", - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Outputs"), - }, - }, - "nodeSelector": { - SchemaProps: spec.SchemaProps{ - Description: "NodeSelector is a selector to schedule this step of the workflow to be run on the selected node(s). Overrides the selector set at the workflow level.", - Type: []string{"object"}, - AdditionalProperties: &spec.SchemaOrBool{ - Schema: &spec.Schema{ - SchemaProps: spec.SchemaProps{ - Type: []string{"string"}, - Format: "", - }, - }, - }, - }, - }, - "affinity": { - SchemaProps: spec.SchemaProps{ - Description: "Affinity sets the pod's scheduling constraints Overrides the affinity set at the workflow level (if any)", - Ref: ref("k8s.io/api/core/v1.Affinity"), - }, - }, - "metadata": { - SchemaProps: spec.SchemaProps{ - Description: "Metdata sets the pods's metadata, i.e. 
annotations and labels", - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Metadata"), - }, - }, - "daemon": { - SchemaProps: spec.SchemaProps{ - Description: "Deamon will allow a workflow to proceed to the next step so long as the container reaches readiness", - Type: []string{"boolean"}, - Format: "", - }, - }, - "steps": { - SchemaProps: spec.SchemaProps{ - Description: "Steps define a series of sequential/parallel workflow steps", - Type: []string{"array"}, - Items: &spec.SchemaOrArray{ - Schema: &spec.Schema{ - SchemaProps: spec.SchemaProps{ - Type: []string{"array"}, - Items: &spec.SchemaOrArray{ - Schema: &spec.Schema{ - SchemaProps: spec.SchemaProps{ - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.WorkflowStep"), - }, - }, - }, - }, - }, - }, - }, - }, - "container": { - SchemaProps: spec.SchemaProps{ - Description: "Container is the main container image to run in the pod", - Ref: ref("k8s.io/api/core/v1.Container"), - }, - }, - "script": { - SchemaProps: spec.SchemaProps{ - Description: "Script runs a portion of code against an interpreter", - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.ScriptTemplate"), - }, - }, - "resource": { - SchemaProps: spec.SchemaProps{ - Description: "Resource template subtype which can run k8s resources", - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.ResourceTemplate"), - }, - }, - "dag": { - SchemaProps: spec.SchemaProps{ - Description: "DAG template subtype which runs a DAG", - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.DAGTemplate"), - }, - }, - "suspend": { - SchemaProps: spec.SchemaProps{ - Description: "Suspend template subtype which can suspend a workflow when reaching the step", - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.SuspendTemplate"), - }, - }, - "sidecars": { - SchemaProps: spec.SchemaProps{ - Description: "Sidecars is a list of containers which run alongside the main container Sidecars are automatically killed when the main container completes", - Type: []string{"array"}, - Items: &spec.SchemaOrArray{ - Schema: &spec.Schema{ - SchemaProps: spec.SchemaProps{ - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Sidecar"), - }, - }, - }, - }, - }, - "archiveLocation": { - SchemaProps: spec.SchemaProps{ - Description: "Location in which all files related to the step will be stored (logs, artifacts, etc...). Can be overridden by individual items in Outputs. If omitted, will use the default artifact repository location configured in the controller, appended with the / in the key.", - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.ArtifactLocation"), - }, - }, - "activeDeadlineSeconds": { - SchemaProps: spec.SchemaProps{ - Description: "Optional duration in seconds relative to the StartTime that the pod may be active on a node before the system actively tries to terminate the pod; value must be positive integer This field is only applicable to container and script templates.", - Type: []string{"integer"}, - Format: "int64", - }, - }, - "retryStrategy": { - SchemaProps: spec.SchemaProps{ - Description: "RetryStrategy describes how to retry a template when it fails", - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.RetryStrategy"), - }, - }, - "parallelism": { - SchemaProps: spec.SchemaProps{ - Description: "Parallelism limits the max total parallel pods that can execute at the same time within the boundaries of this template invocation. 
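// Hypothetical sketch, not part of this patch: the "parallelism" field described
// above caps how many pods a single template invocation may run at once. A buffered
// channel used as a semaphore illustrates the concept; the real controller enforces
// the limit very differently.
package main

import (
	"fmt"
	"sync"
)

func main() {
	const parallelism = 2
	sem := make(chan struct{}, parallelism) // at most `parallelism` pods "running"
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			sem <- struct{}{}      // acquire a slot
			fmt.Println("pod", id) // stand-in for launching a pod
			<-sem                  // release the slot
		}(i)
	}
	wg.Wait()
}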
If additional steps/dag templates are invoked, the pods created by those templates will not be counted towards this total.", - Type: []string{"integer"}, - Format: "int64", - }, - }, - "tolerations": { - SchemaProps: spec.SchemaProps{ - Description: "Tolerations to apply to workflow pods.", - Type: []string{"array"}, - Items: &spec.SchemaOrArray{ - Schema: &spec.Schema{ - SchemaProps: spec.SchemaProps{ - Ref: ref("k8s.io/api/core/v1.Toleration"), - }, - }, - }, - }, - }, - }, - Required: []string{"name"}, - }, - }, - Dependencies: []string{ - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.ArtifactLocation", "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.DAGTemplate", "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Inputs", "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Metadata", "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Outputs", "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.ResourceTemplate", "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.RetryStrategy", "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.ScriptTemplate", "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Sidecar", "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.SuspendTemplate", "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.WorkflowStep", "k8s.io/api/core/v1.Affinity", "k8s.io/api/core/v1.Container", "k8s.io/api/core/v1.Toleration"}, - } -} - func schema_pkg_apis_workflow_v1alpha1_ValueFrom(ref common.ReferenceCallback) common.OpenAPIDefinition { return common.OpenAPIDefinition{ Schema: spec.Schema{ @@ -1666,12 +2178,12 @@ func schema_pkg_apis_workflow_v1alpha1_Workflow(ref common.ReferenceCallback) co }, "spec": { SchemaProps: spec.SchemaProps{ - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.WorkflowSpec"), + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.WorkflowSpec"), }, }, "status": { SchemaProps: spec.SchemaProps{ - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.WorkflowStatus"), + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.WorkflowStatus"), }, }, }, @@ -1679,7 +2191,7 @@ func schema_pkg_apis_workflow_v1alpha1_Workflow(ref common.ReferenceCallback) co }, }, Dependencies: []string{ - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.WorkflowSpec", "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.WorkflowStatus", "k8s.io/apimachinery/pkg/apis/meta/v1.ObjectMeta"}, + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.WorkflowSpec", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.WorkflowStatus", "k8s.io/apimachinery/pkg/apis/meta/v1.ObjectMeta"}, } } @@ -1714,7 +2226,7 @@ func schema_pkg_apis_workflow_v1alpha1_WorkflowList(ref common.ReferenceCallback Items: &spec.SchemaOrArray{ Schema: &spec.Schema{ SchemaProps: spec.SchemaProps{ - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Workflow"), + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Workflow"), }, }, }, @@ -1725,7 +2237,7 @@ func schema_pkg_apis_workflow_v1alpha1_WorkflowList(ref common.ReferenceCallback }, }, Dependencies: []string{ - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Workflow", "k8s.io/apimachinery/pkg/apis/meta/v1.ListMeta"}, + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Workflow", "k8s.io/apimachinery/pkg/apis/meta/v1.ListMeta"}, } } @@ -1742,7 +2254,7 @@ func schema_pkg_apis_workflow_v1alpha1_WorkflowSpec(ref common.ReferenceCallback Items: &spec.SchemaOrArray{ Schema: &spec.Schema{ SchemaProps: 
spec.SchemaProps{ - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Template"), + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Template"), }, }, }, @@ -1758,7 +2270,7 @@ func schema_pkg_apis_workflow_v1alpha1_WorkflowSpec(ref common.ReferenceCallback "arguments": { SchemaProps: spec.SchemaProps{ Description: "Arguments contain the parameters and artifacts sent to the workflow entrypoint Parameters are referencable globally using the 'workflow' variable prefix. e.g. {{workflow.parameters.myparam}}", - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Arguments"), + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Arguments"), }, }, "serviceAccountName": { @@ -1854,6 +2366,26 @@ func schema_pkg_apis_workflow_v1alpha1_WorkflowSpec(ref common.ReferenceCallback }, }, }, + "hostNetwork": { + SchemaProps: spec.SchemaProps{ + Description: "Host networking requested for this workflow pod. Default to false.", + Type: []string{"boolean"}, + Format: "", + }, + }, + "dnsPolicy": { + SchemaProps: spec.SchemaProps{ + Description: "Set DNS policy for the pod. Defaults to \"ClusterFirst\". Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'. DNS parameters given in DNSConfig will be merged with the policy selected with DNSPolicy. To have DNS options set along with hostNetwork, you have to specify DNS policy explicitly to 'ClusterFirstWithHostNet'.", + Type: []string{"string"}, + Format: "", + }, + }, + "dnsConfig": { + SchemaProps: spec.SchemaProps{ + Description: "PodDNSConfig defines the DNS parameters of a pod in addition to those generated from DNSPolicy.", + Ref: ref("k8s.io/api/core/v1.PodDNSConfig"), + }, + }, "onExit": { SchemaProps: spec.SchemaProps{ Description: "OnExit is a template reference which is invoked at the end of the workflow, irrespective of the success, failure, or error of the primary workflow.", @@ -1882,12 +2414,33 @@ func schema_pkg_apis_workflow_v1alpha1_WorkflowSpec(ref common.ReferenceCallback Format: "int32", }, }, + "schedulerName": { + SchemaProps: spec.SchemaProps{ + Description: "Set scheduler name for all pods. Will be overridden if container/script template's scheduler name is set. 
Default scheduler will be used if neither specified.", + Type: []string{"string"}, + Format: "", + }, + }, + "podPriorityClassName": { + SchemaProps: spec.SchemaProps{ + Description: "PriorityClassName to apply to workflow pods.", + Type: []string{"string"}, + Format: "", + }, + }, + "podPriority": { + SchemaProps: spec.SchemaProps{ + Description: "Priority to apply to workflow pods.", + Type: []string{"integer"}, + Format: "int32", + }, + }, }, Required: []string{"templates", "entrypoint"}, }, }, Dependencies: []string{ - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Arguments", "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Template", "k8s.io/api/core/v1.Affinity", "k8s.io/api/core/v1.LocalObjectReference", "k8s.io/api/core/v1.PersistentVolumeClaim", "k8s.io/api/core/v1.Toleration", "k8s.io/api/core/v1.Volume"}, + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Arguments", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Template", "k8s.io/api/core/v1.Affinity", "k8s.io/api/core/v1.LocalObjectReference", "k8s.io/api/core/v1.PersistentVolumeClaim", "k8s.io/api/core/v1.PodDNSConfig", "k8s.io/api/core/v1.Toleration", "k8s.io/api/core/v1.Volume"}, } } @@ -1914,7 +2467,7 @@ func schema_pkg_apis_workflow_v1alpha1_WorkflowStep(ref common.ReferenceCallback "arguments": { SchemaProps: spec.SchemaProps{ Description: "Arguments hold arguments to the template", - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Arguments"), + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Arguments"), }, }, "withItems": { @@ -1924,7 +2477,7 @@ func schema_pkg_apis_workflow_v1alpha1_WorkflowStep(ref common.ReferenceCallback Items: &spec.SchemaOrArray{ Schema: &spec.Schema{ SchemaProps: spec.SchemaProps{ - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Item"), + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Item"), }, }, }, @@ -1940,7 +2493,7 @@ func schema_pkg_apis_workflow_v1alpha1_WorkflowStep(ref common.ReferenceCallback "withSequence": { SchemaProps: spec.SchemaProps{ Description: "WithSequence expands a step into a numeric sequence", - Ref: ref("github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Sequence"), + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Sequence"), }, }, "when": { @@ -1950,10 +2503,16 @@ func schema_pkg_apis_workflow_v1alpha1_WorkflowStep(ref common.ReferenceCallback Format: "", }, }, + "continueOn": { + SchemaProps: spec.SchemaProps{ + Description: "ContinueOn makes argo to proceed with the following step even if this step fails. 
Errors and Failed states can be specified", + Ref: ref("github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.ContinueOn"), + }, + }, }, }, }, Dependencies: []string{ - "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Arguments", "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Item", "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1.Sequence"}, + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Arguments", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.ContinueOn", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Item", "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1.Sequence"}, } } diff --git a/pkg/apis/workflow/v1alpha1/register.go b/pkg/apis/workflow/v1alpha1/register.go index 7ee48cdfc525..e3928a351448 100644 --- a/pkg/apis/workflow/v1alpha1/register.go +++ b/pkg/apis/workflow/v1alpha1/register.go @@ -1,7 +1,7 @@ package v1alpha1 import ( - "github.com/argoproj/argo/pkg/apis/workflow" + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/runtime" diff --git a/pkg/apis/workflow/v1alpha1/types.go b/pkg/apis/workflow/v1alpha1/types.go index fe2c99073341..3d0ff8cffed4 100644 --- a/pkg/apis/workflow/v1alpha1/types.go +++ b/pkg/apis/workflow/v1alpha1/types.go @@ -4,6 +4,7 @@ import ( "encoding/json" "fmt" "hash/fnv" + "strings" apiv1 "k8s.io/api/core/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" @@ -117,6 +118,21 @@ type WorkflowSpec struct { // More info: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod ImagePullSecrets []apiv1.LocalObjectReference `json:"imagePullSecrets,omitempty"` + // Host networking requested for this workflow pod. Default to false. + HostNetwork *bool `json:"hostNetwork,omitempty"` + + // Set DNS policy for the pod. + // Defaults to "ClusterFirst". + // Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'. + // DNS parameters given in DNSConfig will be merged with the policy selected with DNSPolicy. + // To have DNS options set along with hostNetwork, you have to specify DNS policy + // explicitly to 'ClusterFirstWithHostNet'. + DNSPolicy *apiv1.DNSPolicy `json:"dnsPolicy,omitempty"` + + // PodDNSConfig defines the DNS parameters of a pod in addition to + // those generated from DNSPolicy. + DNSConfig *apiv1.PodDNSConfig `json:"dnsConfig,omitempty"` + // OnExit is a template reference which is invoked at the end of the // workflow, irrespective of the success, failure, or error of the // primary workflow. @@ -133,8 +149,21 @@ type WorkflowSpec struct { // allowed to run before the controller terminates the workflow. A value of zero is used to // terminate a Running workflow ActiveDeadlineSeconds *int64 `json:"activeDeadlineSeconds,omitempty"` + // Priority is used if controller is configured to process limited number of workflows in parallel. Workflows with higher priority are processed first. Priority *int32 `json:"priority,omitempty"` + + // Set scheduler name for all pods. + // Will be overridden if container/script template's scheduler name is set. + // Default scheduler will be used if neither specified. + // +optional + SchedulerName string `json:"schedulerName,omitempty"` + + // PriorityClassName to apply to workflow pods. + PodPriorityClassName string `json:"podPriorityClassName,omitempty"` + + // Priority to apply to workflow pods. 
+ PodPriority *int32 `json:"podPriority,omitempty"` } // Template is a reusable and composable unit of execution in a workflow @@ -180,9 +209,15 @@ type Template struct { // Suspend template subtype which can suspend a workflow when reaching the step Suspend *SuspendTemplate `json:"suspend,omitempty"` + // Volumes is a list of volumes that can be mounted by containers in a template. + Volumes []apiv1.Volume `json:"volumes,omitempty"` + + // InitContainers is a list of containers which run before the main container. + InitContainers []UserContainer `json:"initContainers,omitempty"` + // Sidecars is a list of containers which run alongside the main container // Sidecars are automatically killed when the main container completes - Sidecars []Sidecar `json:"sidecars,omitempty"` + Sidecars []UserContainer `json:"sidecars,omitempty"` // Location in which all files related to the step will be stored (logs, artifacts, etc...). // Can be overridden by individual items in Outputs. If omitted, will use the default @@ -205,6 +240,21 @@ type Template struct { // Tolerations to apply to workflow pods. Tolerations []apiv1.Toleration `json:"tolerations,omitempty"` + + // If specified, the pod will be dispatched by specified scheduler. + // Or it will be dispatched by workflow scope scheduler if specified. + // If neither specified, the pod will be dispatched by default scheduler. + // +optional + SchedulerName string `json:"schedulerName,omitempty"` + + // PriorityClassName to apply to workflow pods. + PriorityClassName string `json:"priorityClassName,omitempty"` + + // Priority to apply to workflow pods. + Priority *int32 `json:"priority,omitempty"` + + Errors []ExceptionCondition `json:"errors,omitempty"` + Warnings []ExceptionCondition `json:"warnings,omitempty"` } // Inputs are the mechanism for passing parameters, artifacts, volumes from one template to another @@ -282,6 +332,9 @@ type Artifact struct { // Archive controls how the artifact will be saved to the artifact repository. Archive *ArchiveStrategy `json:"archive,omitempty"` + + // Make Artifacts optional, if Artifacts doesn't generate or exist + Optional bool `json:"optional,omitempty"` } // ArchiveStrategy describes how to archive files/directory when saving artifacts @@ -318,8 +371,14 @@ type ArtifactLocation struct { // Artifactory contains artifactory artifact location details Artifactory *ArtifactoryArtifact `json:"artifactory,omitempty"` + // HDFS contains HDFS artifact location details + HDFS *HDFSArtifact `json:"hdfs,omitempty"` + // Raw contains raw artifact location details Raw *RawArtifact `json:"raw,omitempty"` + + // GCS contains GCS artifact location details + GCS *GCSArtifact `json:"gcs,omitempty"` } // Outputs hold parameters, artifacts, and results from a step @@ -357,6 +416,10 @@ type WorkflowStep struct { // When is an expression in which the step should conditionally execute When string `json:"when,omitempty"` + + // ContinueOn makes argo to proceed with the following step even if this step fails. + // Errors and Failed states can be specified + ContinueOn *ContinueOn `json:"continueOn,omitempty"` } // Item expands a single workflow step into multiple parallel steps @@ -420,12 +483,12 @@ type Arguments struct { Artifacts []Artifact `json:"artifacts,omitempty"` } -// Sidecar is a container which runs alongside the main container -type Sidecar struct { +// UserContainer is a container specified by a user. 
+type UserContainer struct { apiv1.Container `json:",inline"` // MirrorVolumeMounts will mount the same volumes specified in the main container - // to the sidecar (including artifacts), at the same mountPaths. This enables + // to the container (including artifacts), at the same mountPaths. This enables // dind daemon to partially see the same filesystem as the main container in // order to use features such as docker volume binding MirrorVolumeMounts *bool `json:"mirrorVolumeMounts,omitempty"` @@ -446,6 +509,9 @@ type WorkflowStatus struct { // A human readable message indicating details about why the workflow is in this condition. Message string `json:"message,omitempty"` + // Compressed and base64 decoded Nodes map + CompressedNodes string `json:"compressedNodes,omitempty"` + // Nodes is a mapping between a node ID and the node's status. Nodes map[string]NodeStatus `json:"nodes,omitempty"` @@ -455,6 +521,9 @@ type WorkflowStatus struct { // Outputs captures output values and artifact locations produced by the workflow via global outputs Outputs *Outputs `json:"outputs,omitempty"` + + Errors []ExceptionResult `json:"errors,omitempty"` + Warnings []ExceptionResult `json:"warnings,omitempty"` } // RetryStrategy provides controls on how to retry a workflow step @@ -546,7 +615,7 @@ func (ws *WorkflowStatus) Completed() bool { // Remove returns whether or not the node has completed execution func (n NodeStatus) Completed() bool { - return isCompletedPhase(n.Phase) + return isCompletedPhase(n.Phase) || n.IsDaemoned() && n.Phase != NodePending } // IsDaemoned returns whether or not the node is deamoned @@ -559,7 +628,7 @@ func (n NodeStatus) IsDaemoned() bool { // Successful returns whether or not this node completed successfully func (n NodeStatus) Successful() bool { - return n.Phase == NodeSucceeded || n.Phase == NodeSkipped + return n.Phase == NodeSucceeded || n.Phase == NodeSkipped || n.IsDaemoned() && n.Phase != NodePending } // CanRetry returns whether the node should be retried or not. 
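// Editor's sketch (not part of this patch): the Completed()/Successful() changes above mean a
// daemoned node that has moved past NodePending is treated as completed and successful even
// while its pod keeps running, so dependent steps can start. It assumes NodeStatus carries the
// Daemoned *bool field consulted by IsDaemoned(); treat that field name as an assumption here.
package main

import (
	"fmt"

	wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1"
)

func main() {
	daemoned := true
	node := wfv1.NodeStatus{Phase: wfv1.NodeRunning, Daemoned: &daemoned}
	// Before this change both calls returned false for a running daemoned node.
	fmt.Println(node.Completed())  // true once the node is daemoned and past NodePending
	fmt.Println(node.Successful()) // true, so downstream steps are unblocked
}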
@@ -605,6 +674,30 @@ func (s *S3Artifact) String() string { return fmt.Sprintf("%s://%s/%s/%s", protocol, s.Endpoint, s.Bucket, s.Key) } +func (s *S3Artifact) HasLocation() bool { + return s != nil && s.Bucket != "" +} + +// GCSBucket contains the access information required for acting with a GCS bucket +type GCSBucket struct { + Bucket string `json:"bucket"` + CredentialsSecret apiv1.SecretKeySelector `json:"credentialsSecret"` +} + +// GCSArtifact is the location of a GCS artifact +type GCSArtifact struct { + GCSBucket `json:",inline"` + Key string `json:"key"` +} + +func (s *GCSArtifact) String() string { + return fmt.Sprintf("gs://%s/%s", s.Bucket, s.Key) +} + +func (s *GCSArtifact) HasLocation() bool { + return s != nil && s.Bucket != "" +} + // GitArtifact is the location of an git artifact type GitArtifact struct { // Repo is the git repository @@ -618,8 +711,16 @@ type GitArtifact struct { // PasswordSecret is the secret selector to the repository password PasswordSecret *apiv1.SecretKeySelector `json:"passwordSecret,omitempty"` + // SSHPrivateKeySecret is the secret selector to the repository ssh private key SSHPrivateKeySecret *apiv1.SecretKeySelector `json:"sshPrivateKeySecret,omitempty"` + + // InsecureIgnoreHostKey disables SSH strict host key checking during git clone + InsecureIgnoreHostKey bool `json:"insecureIgnoreHostKey,omitempty"` +} + +func (g *GitArtifact) HasLocation() bool { + return g != nil && g.Repo != "" } // ArtifactoryAuth describes the secret selectors required for authenticating to artifactory @@ -642,18 +743,96 @@ func (a *ArtifactoryArtifact) String() string { return a.URL } +func (a *ArtifactoryArtifact) HasLocation() bool { + return a != nil && a.URL != "" +} + +// HDFSArtifact is the location of an HDFS artifact +type HDFSArtifact struct { + HDFSConfig `json:",inline"` + + // Path is a file path in HDFS + Path string `json:"path"` + + // Force copies a file forcibly even if it exists (default: false) + Force bool `json:"force,omitempty"` +} + +func (h *HDFSArtifact) HasLocation() bool { + return h != nil && len(h.Addresses) > 0 +} + +// HDFSConfig is configurations for HDFS +type HDFSConfig struct { + HDFSKrbConfig `json:",inline"` + + // Addresses is accessible addresses of HDFS name nodes + Addresses []string `json:"addresses"` + + // HDFSUser is the user to access HDFS file system. + // It is ignored if either ccache or keytab is used. + HDFSUser string `json:"hdfsUser,omitempty"` +} + +// HDFSKrbConfig is auth configurations for Kerberos +type HDFSKrbConfig struct { + // KrbCCacheSecret is the secret selector for Kerberos ccache + // Either ccache or keytab can be set to use Kerberos. + KrbCCacheSecret *apiv1.SecretKeySelector `json:"krbCCacheSecret,omitempty"` + + // KrbKeytabSecret is the secret selector for Kerberos keytab + // Either ccache or keytab can be set to use Kerberos. + KrbKeytabSecret *apiv1.SecretKeySelector `json:"krbKeytabSecret,omitempty"` + + // KrbUsername is the Kerberos username used with Kerberos keytab + // It must be set if keytab is used. + KrbUsername string `json:"krbUsername,omitempty"` + + // KrbRealm is the Kerberos realm used with Kerberos keytab + // It must be set if keytab is used. + KrbRealm string `json:"krbRealm,omitempty"` + + // KrbConfig is the configmap selector for Kerberos config as string + // It must be set if either ccache or keytab is used. 
+ KrbConfigConfigMap *apiv1.ConfigMapKeySelector `json:"krbConfigConfigMap,omitempty"` + + // KrbServicePrincipalName is the principal name of Kerberos service + // It must be set if either ccache or keytab is used. + KrbServicePrincipalName string `json:"krbServicePrincipalName,omitempty"` +} + +func (a *HDFSArtifact) String() string { + var cred string + if a.HDFSUser != "" { + cred = fmt.Sprintf("HDFS user %s", a.HDFSUser) + } else if a.KrbCCacheSecret != nil { + cred = fmt.Sprintf("ccache %v", a.KrbCCacheSecret.Name) + } else if a.KrbKeytabSecret != nil { + cred = fmt.Sprintf("keytab %v (%s/%s)", a.KrbKeytabSecret.Name, a.KrbUsername, a.KrbRealm) + } + return fmt.Sprintf("hdfs://%s/%s with %s", strings.Join(a.Addresses, ", "), a.Path, cred) +} + // RawArtifact allows raw string content to be placed as an artifact in a container type RawArtifact struct { // Data is the string contents of the artifact Data string `json:"data"` } +func (r *RawArtifact) HasLocation() bool { + return r != nil +} + // HTTPArtifact allows an file served on HTTP to be placed as an input artifact in a container type HTTPArtifact struct { // URL of the artifact URL string `json:"url"` } +func (h *HTTPArtifact) HasLocation() bool { + return h != nil && h.URL != "" +} + // ScriptTemplate is a template subtype to enable scripting through code steps type ScriptTemplate struct { apiv1.Container `json:",inline"` @@ -668,6 +847,10 @@ type ResourceTemplate struct { // Must be one of: get, create, apply, delete, replace Action string `json:"action"` + // MergeStrategy is the strategy used to merge a patch. It defaults to "strategic" + // Must be one of: strategic, merge, json + MergeStrategy string `json:"mergeStrategy,omitempty"` + // Manifest contains the kubernetes manifest Manifest string `json:"manifest"` @@ -680,6 +863,23 @@ type ResourceTemplate struct { FailureCondition string `json:"failureCondition,omitempty"` } +// ExceptionCondition is a container for defining an error or warning rule +type ExceptionCondition struct { + Name string `json:"name"` + PatternMatched string `json:"patternMatched,omitempty"` + PatternUnmatched string `json:"patternUnmatched,omitempty"` + Source string `json:"source,omitempty"` + Message string `json:"message,omitempty"` +} + +// ExceptionResult contains the results on an extended error or warning condition evaluation +type ExceptionResult struct { + Name string `json:"name"` + Message string `json:"message"` + PodName string `json:"podName"` + StepName string `json:"stepName"` +} + // GetType returns the type of this template func (tmpl *Template) GetType() TemplateType { if tmpl.Container != nil { @@ -756,6 +956,10 @@ type DAGTask struct { // When is an expression in which the task should conditionally execute When string `json:"when,omitempty"` + + // ContinueOn makes argo to proceed with the following step even if this step fails. 
+ // Errors and Failed states can be specified + ContinueOn *ContinueOn `json:"continueOn,omitempty"` } // SuspendTemplate is a template subtype to suspend a workflow at a predetermined point in time @@ -829,7 +1033,13 @@ func (args *Arguments) GetParameterByName(name string) *Parameter { // HasLocation whether or not an artifact has a location defined func (a *Artifact) HasLocation() bool { - return a.S3 != nil || a.Git != nil || a.HTTP != nil || a.Artifactory != nil || a.Raw != nil + return a.S3.HasLocation() || + a.Git.HasLocation() || + a.HTTP.HasLocation() || + a.Artifactory.HasLocation() || + a.Raw.HasLocation() || + a.HDFS.HasLocation() || + a.GCS.HasLocation() } // GetTemplate retrieves a defined template by its name @@ -851,3 +1061,35 @@ func (wf *Workflow) NodeID(name string) string { _, _ = h.Write([]byte(name)) return fmt.Sprintf("%s-%v", wf.ObjectMeta.Name, h.Sum32()) } + +// ContinueOn defines if a workflow should continue even if a task or step fails/errors. +// It can be specified if the workflow should continue when the pod errors, fails or both. +type ContinueOn struct { + // +optional + Error bool `json:"error,omitempty"` + // +optional + Failed bool `json:"failed,omitempty"` +} + +func continues(c *ContinueOn, phase NodePhase) bool { + if c == nil { + return false + } + if c.Error == true && phase == NodeError { + return true + } + if c.Failed == true && phase == NodeFailed { + return true + } + return false +} + +// ContinuesOn returns whether the DAG should be proceeded if the task fails or errors. +func (t *DAGTask) ContinuesOn(phase NodePhase) bool { + return continues(t.ContinueOn, phase) +} + +// ContinuesOn returns whether the StepGroup should be proceeded if the task fails or errors. +func (s *WorkflowStep) ContinuesOn(phase NodePhase) bool { + return continues(s.ContinueOn, phase) +} diff --git a/pkg/apis/workflow/v1alpha1/zz_generated.deepcopy.go b/pkg/apis/workflow/v1alpha1/zz_generated.deepcopy.go index e8e1f77e4cca..9f9c336ae482 100644 --- a/pkg/apis/workflow/v1alpha1/zz_generated.deepcopy.go +++ b/pkg/apis/workflow/v1alpha1/zz_generated.deepcopy.go @@ -120,11 +120,21 @@ func (in *ArtifactLocation) DeepCopyInto(out *ArtifactLocation) { *out = new(ArtifactoryArtifact) (*in).DeepCopyInto(*out) } + if in.HDFS != nil { + in, out := &in.HDFS, &out.HDFS + *out = new(HDFSArtifact) + (*in).DeepCopyInto(*out) + } if in.Raw != nil { in, out := &in.Raw, &out.Raw *out = new(RawArtifact) **out = **in } + if in.GCS != nil { + in, out := &in.GCS, &out.GCS + *out = new(GCSArtifact) + (*in).DeepCopyInto(*out) + } return } @@ -181,6 +191,22 @@ func (in *ArtifactoryAuth) DeepCopy() *ArtifactoryAuth { return out } +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ContinueOn) DeepCopyInto(out *ContinueOn) { + *out = *in + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ContinueOn. +func (in *ContinueOn) DeepCopy() *ContinueOn { + if in == nil { + return nil + } + out := new(ContinueOn) + in.DeepCopyInto(out) + return out +} + // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
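// Editor's sketch (not part of this patch): how the ContinueOn type and the ContinuesOn helpers
// added to types.go above are meant to be used. A step that sets continueOn.failed lets the
// workflow proceed past a failed node, while an errored node still stops it unless
// continueOn.error is set as well. The step and template names are placeholders.
package main

import (
	"fmt"

	wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1"
)

func main() {
	step := wfv1.WorkflowStep{
		Name:       "flaky-step",
		Template:   "whalesay",
		ContinueOn: &wfv1.ContinueOn{Failed: true},
	}
	fmt.Println(step.ContinuesOn(wfv1.NodeFailed)) // true: keep going despite the failure
	fmt.Println(step.ContinuesOn(wfv1.NodeError))  // false: errors still fail the workflow
}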
func (in *DAGTask) DeepCopyInto(out *DAGTask) { *out = *in @@ -202,6 +228,11 @@ func (in *DAGTask) DeepCopyInto(out *DAGTask) { *out = new(Sequence) **out = **in } + if in.ContinueOn != nil { + in, out := &in.ContinueOn, &out.ContinueOn + *out = new(ContinueOn) + **out = **in + } return } @@ -238,6 +269,72 @@ func (in *DAGTemplate) DeepCopy() *DAGTemplate { return out } +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ExceptionCondition) DeepCopyInto(out *ExceptionCondition) { + *out = *in + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ExceptionCondition. +func (in *ExceptionCondition) DeepCopy() *ExceptionCondition { + if in == nil { + return nil + } + out := new(ExceptionCondition) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ExceptionResult) DeepCopyInto(out *ExceptionResult) { + *out = *in + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ExceptionResult. +func (in *ExceptionResult) DeepCopy() *ExceptionResult { + if in == nil { + return nil + } + out := new(ExceptionResult) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *GCSArtifact) DeepCopyInto(out *GCSArtifact) { + *out = *in + in.GCSBucket.DeepCopyInto(&out.GCSBucket) + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GCSArtifact. +func (in *GCSArtifact) DeepCopy() *GCSArtifact { + if in == nil { + return nil + } + out := new(GCSArtifact) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *GCSBucket) DeepCopyInto(out *GCSBucket) { + *out = *in + in.CredentialsSecret.DeepCopyInto(&out.CredentialsSecret) + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GCSBucket. +func (in *GCSBucket) DeepCopy() *GCSBucket { + if in == nil { + return nil + } + out := new(GCSBucket) + in.DeepCopyInto(out) + return out +} + // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in *GitArtifact) DeepCopyInto(out *GitArtifact) { *out = *in @@ -269,6 +366,76 @@ func (in *GitArtifact) DeepCopy() *GitArtifact { return out } +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *HDFSArtifact) DeepCopyInto(out *HDFSArtifact) { + *out = *in + in.HDFSConfig.DeepCopyInto(&out.HDFSConfig) + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HDFSArtifact. +func (in *HDFSArtifact) DeepCopy() *HDFSArtifact { + if in == nil { + return nil + } + out := new(HDFSArtifact) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *HDFSConfig) DeepCopyInto(out *HDFSConfig) { + *out = *in + in.HDFSKrbConfig.DeepCopyInto(&out.HDFSKrbConfig) + if in.Addresses != nil { + in, out := &in.Addresses, &out.Addresses + *out = make([]string, len(*in)) + copy(*out, *in) + } + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HDFSConfig. +func (in *HDFSConfig) DeepCopy() *HDFSConfig { + if in == nil { + return nil + } + out := new(HDFSConfig) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *HDFSKrbConfig) DeepCopyInto(out *HDFSKrbConfig) { + *out = *in + if in.KrbCCacheSecret != nil { + in, out := &in.KrbCCacheSecret, &out.KrbCCacheSecret + *out = new(v1.SecretKeySelector) + (*in).DeepCopyInto(*out) + } + if in.KrbKeytabSecret != nil { + in, out := &in.KrbKeytabSecret, &out.KrbKeytabSecret + *out = new(v1.SecretKeySelector) + (*in).DeepCopyInto(*out) + } + if in.KrbConfigConfigMap != nil { + in, out := &in.KrbConfigConfigMap, &out.KrbConfigConfigMap + *out = new(v1.ConfigMapKeySelector) + (*in).DeepCopyInto(*out) + } + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HDFSKrbConfig. +func (in *HDFSKrbConfig) DeepCopy() *HDFSKrbConfig { + if in == nil { + return nil + } + out := new(HDFSKrbConfig) + in.DeepCopyInto(out) + return out +} + // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in *HTTPArtifact) DeepCopyInto(out *HTTPArtifact) { *out = *in @@ -606,28 +773,6 @@ func (in *Sequence) DeepCopy() *Sequence { return out } -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *Sidecar) DeepCopyInto(out *Sidecar) { - *out = *in - in.Container.DeepCopyInto(&out.Container) - if in.MirrorVolumeMounts != nil { - in, out := &in.MirrorVolumeMounts, &out.MirrorVolumeMounts - *out = new(bool) - **out = **in - } - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Sidecar. -func (in *Sidecar) DeepCopy() *Sidecar { - if in == nil { - return nil - } - out := new(Sidecar) - in.DeepCopyInto(out) - return out -} - // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
func (in *SuspendTemplate) DeepCopyInto(out *SuspendTemplate) { *out = *in @@ -721,9 +866,23 @@ func (in *Template) DeepCopyInto(out *Template) { *out = new(SuspendTemplate) **out = **in } + if in.Volumes != nil { + in, out := &in.Volumes, &out.Volumes + *out = make([]v1.Volume, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.InitContainers != nil { + in, out := &in.InitContainers, &out.InitContainers + *out = make([]UserContainer, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } if in.Sidecars != nil { in, out := &in.Sidecars, &out.Sidecars - *out = make([]Sidecar, len(*in)) + *out = make([]UserContainer, len(*in)) for i := range *in { (*in)[i].DeepCopyInto(&(*out)[i]) } @@ -755,6 +914,21 @@ func (in *Template) DeepCopyInto(out *Template) { (*in)[i].DeepCopyInto(&(*out)[i]) } } + if in.Priority != nil { + in, out := &in.Priority, &out.Priority + *out = new(int32) + **out = **in + } + if in.Errors != nil { + in, out := &in.Errors, &out.Errors + *out = make([]ExceptionCondition, len(*in)) + copy(*out, *in) + } + if in.Warnings != nil { + in, out := &in.Warnings, &out.Warnings + *out = make([]ExceptionCondition, len(*in)) + copy(*out, *in) + } return } @@ -768,6 +942,28 @@ func (in *Template) DeepCopy() *Template { return out } +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *UserContainer) DeepCopyInto(out *UserContainer) { + *out = *in + in.Container.DeepCopyInto(&out.Container) + if in.MirrorVolumeMounts != nil { + in, out := &in.MirrorVolumeMounts, &out.MirrorVolumeMounts + *out = new(bool) + **out = **in + } + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new UserContainer. +func (in *UserContainer) DeepCopy() *UserContainer { + if in == nil { + return nil + } + out := new(UserContainer) + in.DeepCopyInto(out) + return out +} + // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
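// Editor's sketch (not part of this patch): the Sidecar type is replaced by UserContainer in this
// patch, and a template's initContainers and sidecars now share it. UserContainer embeds a
// core/v1 Container and adds MirrorVolumeMounts; the names and images below are placeholders.
package main

import (
	"fmt"

	apiv1 "k8s.io/api/core/v1"

	wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1"
)

func main() {
	mirror := true
	tmpl := wfv1.Template{
		Name: "build",
		InitContainers: []wfv1.UserContainer{
			{Container: apiv1.Container{Name: "init", Image: "alpine:3.7"}},
		},
		Sidecars: []wfv1.UserContainer{
			{Container: apiv1.Container{Name: "dind", Image: "docker:18.09-dind"}, MirrorVolumeMounts: &mirror},
		},
	}
	fmt.Printf("%d init container(s), %d sidecar(s)\n", len(tmpl.InitContainers), len(tmpl.Sidecars))
}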
func (in *ValueFrom) DeepCopyInto(out *ValueFrom) { *out = *in @@ -904,6 +1100,21 @@ func (in *WorkflowSpec) DeepCopyInto(out *WorkflowSpec) { *out = make([]v1.LocalObjectReference, len(*in)) copy(*out, *in) } + if in.HostNetwork != nil { + in, out := &in.HostNetwork, &out.HostNetwork + *out = new(bool) + **out = **in + } + if in.DNSPolicy != nil { + in, out := &in.DNSPolicy, &out.DNSPolicy + *out = new(v1.DNSPolicy) + **out = **in + } + if in.DNSConfig != nil { + in, out := &in.DNSConfig, &out.DNSConfig + *out = new(v1.PodDNSConfig) + (*in).DeepCopyInto(*out) + } if in.TTLSecondsAfterFinished != nil { in, out := &in.TTLSecondsAfterFinished, &out.TTLSecondsAfterFinished *out = new(int32) @@ -919,6 +1130,11 @@ func (in *WorkflowSpec) DeepCopyInto(out *WorkflowSpec) { *out = new(int32) **out = **in } + if in.PodPriority != nil { + in, out := &in.PodPriority, &out.PodPriority + *out = new(int32) + **out = **in + } return } @@ -956,6 +1172,16 @@ func (in *WorkflowStatus) DeepCopyInto(out *WorkflowStatus) { *out = new(Outputs) (*in).DeepCopyInto(*out) } + if in.Errors != nil { + in, out := &in.Errors, &out.Errors + *out = make([]ExceptionResult, len(*in)) + copy(*out, *in) + } + if in.Warnings != nil { + in, out := &in.Warnings, &out.Warnings + *out = make([]ExceptionResult, len(*in)) + copy(*out, *in) + } return } @@ -985,6 +1211,11 @@ func (in *WorkflowStep) DeepCopyInto(out *WorkflowStep) { *out = new(Sequence) **out = **in } + if in.ContinueOn != nil { + in, out := &in.ContinueOn, &out.ContinueOn + *out = new(ContinueOn) + **out = **in + } return } diff --git a/pkg/client/clientset/versioned/clientset.go b/pkg/client/clientset/versioned/clientset.go index 3a54ecc84fce..0ec90bd0b226 100644 --- a/pkg/client/clientset/versioned/clientset.go +++ b/pkg/client/clientset/versioned/clientset.go @@ -3,7 +3,8 @@ package versioned import ( - argoprojv1alpha1 "github.com/argoproj/argo/pkg/client/clientset/versioned/typed/workflow/v1alpha1" + argoprojv1alpha1 "github.com/cyrusbiotechnology/argo/pkg/client/clientset/versioned/typed/workflow/v1alpha1" + glog "github.com/golang/glog" discovery "k8s.io/client-go/discovery" rest "k8s.io/client-go/rest" flowcontrol "k8s.io/client-go/util/flowcontrol" @@ -57,6 +58,7 @@ func NewForConfig(c *rest.Config) (*Clientset, error) { cs.DiscoveryClient, err = discovery.NewDiscoveryClientForConfig(&configShallowCopy) if err != nil { + glog.Errorf("failed to create the DiscoveryClient: %v", err) return nil, err } return &cs, nil diff --git a/pkg/client/clientset/versioned/fake/clientset_generated.go b/pkg/client/clientset/versioned/fake/clientset_generated.go index 4c0de7d99d6b..dbe358014b09 100644 --- a/pkg/client/clientset/versioned/fake/clientset_generated.go +++ b/pkg/client/clientset/versioned/fake/clientset_generated.go @@ -3,9 +3,9 @@ package fake import ( - clientset "github.com/argoproj/argo/pkg/client/clientset/versioned" - argoprojv1alpha1 "github.com/argoproj/argo/pkg/client/clientset/versioned/typed/workflow/v1alpha1" - fakeargoprojv1alpha1 "github.com/argoproj/argo/pkg/client/clientset/versioned/typed/workflow/v1alpha1/fake" + clientset "github.com/cyrusbiotechnology/argo/pkg/client/clientset/versioned" + argoprojv1alpha1 "github.com/cyrusbiotechnology/argo/pkg/client/clientset/versioned/typed/workflow/v1alpha1" + fakeargoprojv1alpha1 "github.com/cyrusbiotechnology/argo/pkg/client/clientset/versioned/typed/workflow/v1alpha1/fake" "k8s.io/apimachinery/pkg/runtime" "k8s.io/apimachinery/pkg/watch" "k8s.io/client-go/discovery" diff --git 
a/pkg/client/clientset/versioned/fake/register.go b/pkg/client/clientset/versioned/fake/register.go index f2677c800686..946f09d8edb0 100644 --- a/pkg/client/clientset/versioned/fake/register.go +++ b/pkg/client/clientset/versioned/fake/register.go @@ -3,7 +3,7 @@ package fake import ( - argoprojv1alpha1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" + argoprojv1alpha1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" v1 "k8s.io/apimachinery/pkg/apis/meta/v1" runtime "k8s.io/apimachinery/pkg/runtime" schema "k8s.io/apimachinery/pkg/runtime/schema" diff --git a/pkg/client/clientset/versioned/scheme/register.go b/pkg/client/clientset/versioned/scheme/register.go index 0b62f7b19539..fd3a0f3abd21 100644 --- a/pkg/client/clientset/versioned/scheme/register.go +++ b/pkg/client/clientset/versioned/scheme/register.go @@ -3,7 +3,7 @@ package scheme import ( - argoprojv1alpha1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" + argoprojv1alpha1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" v1 "k8s.io/apimachinery/pkg/apis/meta/v1" runtime "k8s.io/apimachinery/pkg/runtime" schema "k8s.io/apimachinery/pkg/runtime/schema" diff --git a/pkg/client/clientset/versioned/typed/workflow/v1alpha1/fake/fake_workflow.go b/pkg/client/clientset/versioned/typed/workflow/v1alpha1/fake/fake_workflow.go index f403ed543a9b..46437d43e727 100644 --- a/pkg/client/clientset/versioned/typed/workflow/v1alpha1/fake/fake_workflow.go +++ b/pkg/client/clientset/versioned/typed/workflow/v1alpha1/fake/fake_workflow.go @@ -3,7 +3,7 @@ package fake import ( - v1alpha1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" + v1alpha1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" v1 "k8s.io/apimachinery/pkg/apis/meta/v1" labels "k8s.io/apimachinery/pkg/labels" schema "k8s.io/apimachinery/pkg/runtime/schema" diff --git a/pkg/client/clientset/versioned/typed/workflow/v1alpha1/fake/fake_workflow_client.go b/pkg/client/clientset/versioned/typed/workflow/v1alpha1/fake/fake_workflow_client.go index 5c3db6508c01..3cb6c5be8f3e 100644 --- a/pkg/client/clientset/versioned/typed/workflow/v1alpha1/fake/fake_workflow_client.go +++ b/pkg/client/clientset/versioned/typed/workflow/v1alpha1/fake/fake_workflow_client.go @@ -3,7 +3,7 @@ package fake import ( - v1alpha1 "github.com/argoproj/argo/pkg/client/clientset/versioned/typed/workflow/v1alpha1" + v1alpha1 "github.com/cyrusbiotechnology/argo/pkg/client/clientset/versioned/typed/workflow/v1alpha1" rest "k8s.io/client-go/rest" testing "k8s.io/client-go/testing" ) diff --git a/pkg/client/clientset/versioned/typed/workflow/v1alpha1/workflow.go b/pkg/client/clientset/versioned/typed/workflow/v1alpha1/workflow.go index 8cd0b2d77102..503516044edf 100644 --- a/pkg/client/clientset/versioned/typed/workflow/v1alpha1/workflow.go +++ b/pkg/client/clientset/versioned/typed/workflow/v1alpha1/workflow.go @@ -3,8 +3,8 @@ package v1alpha1 import ( - v1alpha1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" - scheme "github.com/argoproj/argo/pkg/client/clientset/versioned/scheme" + v1alpha1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + scheme "github.com/cyrusbiotechnology/argo/pkg/client/clientset/versioned/scheme" v1 "k8s.io/apimachinery/pkg/apis/meta/v1" types "k8s.io/apimachinery/pkg/types" watch "k8s.io/apimachinery/pkg/watch" diff --git a/pkg/client/clientset/versioned/typed/workflow/v1alpha1/workflow_client.go b/pkg/client/clientset/versioned/typed/workflow/v1alpha1/workflow_client.go index 0250af28aa13..c0399ac245d2 100644 --- 
a/pkg/client/clientset/versioned/typed/workflow/v1alpha1/workflow_client.go +++ b/pkg/client/clientset/versioned/typed/workflow/v1alpha1/workflow_client.go @@ -3,8 +3,8 @@ package v1alpha1 import ( - v1alpha1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" - "github.com/argoproj/argo/pkg/client/clientset/versioned/scheme" + v1alpha1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/pkg/client/clientset/versioned/scheme" serializer "k8s.io/apimachinery/pkg/runtime/serializer" rest "k8s.io/client-go/rest" ) diff --git a/pkg/client/informers/externalversions/factory.go b/pkg/client/informers/externalversions/factory.go index 5209b380583f..f1445110aef4 100644 --- a/pkg/client/informers/externalversions/factory.go +++ b/pkg/client/informers/externalversions/factory.go @@ -7,9 +7,9 @@ import ( sync "sync" time "time" - versioned "github.com/argoproj/argo/pkg/client/clientset/versioned" - internalinterfaces "github.com/argoproj/argo/pkg/client/informers/externalversions/internalinterfaces" - workflow "github.com/argoproj/argo/pkg/client/informers/externalversions/workflow" + versioned "github.com/cyrusbiotechnology/argo/pkg/client/clientset/versioned" + internalinterfaces "github.com/cyrusbiotechnology/argo/pkg/client/informers/externalversions/internalinterfaces" + workflow "github.com/cyrusbiotechnology/argo/pkg/client/informers/externalversions/workflow" v1 "k8s.io/apimachinery/pkg/apis/meta/v1" runtime "k8s.io/apimachinery/pkg/runtime" schema "k8s.io/apimachinery/pkg/runtime/schema" diff --git a/pkg/client/informers/externalversions/generic.go b/pkg/client/informers/externalversions/generic.go index 91f10b389128..75892ece0f00 100644 --- a/pkg/client/informers/externalversions/generic.go +++ b/pkg/client/informers/externalversions/generic.go @@ -5,7 +5,7 @@ package externalversions import ( "fmt" - v1alpha1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" + v1alpha1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" schema "k8s.io/apimachinery/pkg/runtime/schema" cache "k8s.io/client-go/tools/cache" ) diff --git a/pkg/client/informers/externalversions/internalinterfaces/factory_interfaces.go b/pkg/client/informers/externalversions/internalinterfaces/factory_interfaces.go index 6a21272fa0d8..51adb18fa376 100644 --- a/pkg/client/informers/externalversions/internalinterfaces/factory_interfaces.go +++ b/pkg/client/informers/externalversions/internalinterfaces/factory_interfaces.go @@ -5,7 +5,7 @@ package internalinterfaces import ( time "time" - versioned "github.com/argoproj/argo/pkg/client/clientset/versioned" + versioned "github.com/cyrusbiotechnology/argo/pkg/client/clientset/versioned" v1 "k8s.io/apimachinery/pkg/apis/meta/v1" runtime "k8s.io/apimachinery/pkg/runtime" cache "k8s.io/client-go/tools/cache" diff --git a/pkg/client/informers/externalversions/workflow/interface.go b/pkg/client/informers/externalversions/workflow/interface.go index d49d769f4b22..067f390c95ae 100644 --- a/pkg/client/informers/externalversions/workflow/interface.go +++ b/pkg/client/informers/externalversions/workflow/interface.go @@ -3,8 +3,8 @@ package argoproj import ( - internalinterfaces "github.com/argoproj/argo/pkg/client/informers/externalversions/internalinterfaces" - v1alpha1 "github.com/argoproj/argo/pkg/client/informers/externalversions/workflow/v1alpha1" + internalinterfaces "github.com/cyrusbiotechnology/argo/pkg/client/informers/externalversions/internalinterfaces" + v1alpha1 
"github.com/cyrusbiotechnology/argo/pkg/client/informers/externalversions/workflow/v1alpha1" ) // Interface provides access to each of this group's versions. diff --git a/pkg/client/informers/externalversions/workflow/v1alpha1/interface.go b/pkg/client/informers/externalversions/workflow/v1alpha1/interface.go index ea59def400ca..b78328ce9591 100644 --- a/pkg/client/informers/externalversions/workflow/v1alpha1/interface.go +++ b/pkg/client/informers/externalversions/workflow/v1alpha1/interface.go @@ -3,7 +3,7 @@ package v1alpha1 import ( - internalinterfaces "github.com/argoproj/argo/pkg/client/informers/externalversions/internalinterfaces" + internalinterfaces "github.com/cyrusbiotechnology/argo/pkg/client/informers/externalversions/internalinterfaces" ) // Interface provides access to all the informers in this group version. diff --git a/pkg/client/informers/externalversions/workflow/v1alpha1/workflow.go b/pkg/client/informers/externalversions/workflow/v1alpha1/workflow.go index b86cce4c66a3..49ea797b9eda 100644 --- a/pkg/client/informers/externalversions/workflow/v1alpha1/workflow.go +++ b/pkg/client/informers/externalversions/workflow/v1alpha1/workflow.go @@ -5,10 +5,10 @@ package v1alpha1 import ( time "time" - workflowv1alpha1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" - versioned "github.com/argoproj/argo/pkg/client/clientset/versioned" - internalinterfaces "github.com/argoproj/argo/pkg/client/informers/externalversions/internalinterfaces" - v1alpha1 "github.com/argoproj/argo/pkg/client/listers/workflow/v1alpha1" + workflowv1alpha1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + versioned "github.com/cyrusbiotechnology/argo/pkg/client/clientset/versioned" + internalinterfaces "github.com/cyrusbiotechnology/argo/pkg/client/informers/externalversions/internalinterfaces" + v1alpha1 "github.com/cyrusbiotechnology/argo/pkg/client/listers/workflow/v1alpha1" v1 "k8s.io/apimachinery/pkg/apis/meta/v1" runtime "k8s.io/apimachinery/pkg/runtime" watch "k8s.io/apimachinery/pkg/watch" diff --git a/pkg/client/listers/workflow/v1alpha1/workflow.go b/pkg/client/listers/workflow/v1alpha1/workflow.go index f54d24dde7e6..98f66150e8ac 100644 --- a/pkg/client/listers/workflow/v1alpha1/workflow.go +++ b/pkg/client/listers/workflow/v1alpha1/workflow.go @@ -3,7 +3,7 @@ package v1alpha1 import ( - v1alpha1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" + v1alpha1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" "k8s.io/apimachinery/pkg/api/errors" "k8s.io/apimachinery/pkg/labels" "k8s.io/client-go/tools/cache" diff --git a/test/e2e/expectedfailures/disallow-unknown.json b/test/e2e/expectedfailures/disallow-unknown.json deleted file mode 100644 index 659d97d24dec..000000000000 --- a/test/e2e/expectedfailures/disallow-unknown.json +++ /dev/null @@ -1,25 +0,0 @@ -{ - "apiVersion": "argoproj.io/v1alpha1", - "kind": "Workflow", - "metadata": { - "generateName": "hello-world-" - }, - "spec": { - "entrypoint": "whalesay", - "templates": [ - { - "name": "whalesay", - "container": { - "image": "docker/whalesay:latest", - "command": [ - "cowsay" - ], - "args": [ - "hello world" - ], - "someExtraField": "foo" - } - } - ] - } -} diff --git a/test/e2e/expectedfailures/failed-retries.yaml b/test/e2e/expectedfailures/failed-retries.yaml new file mode 100644 index 000000000000..2930d2677f88 --- /dev/null +++ b/test/e2e/expectedfailures/failed-retries.yaml @@ -0,0 +1,30 @@ +apiVersion: argoproj.io/v1alpha1 +kind: Workflow +metadata: + generateName: failed-retries- +spec: + 
entrypoint: failed-retries + + templates: + - name: failed-retries + steps: + - - name: fail + template: fail + - name: delayed-fail + template: delayed-fail + + - name: fail + retryStrategy: + limit: 1 + container: + image: alpine:latest + command: [sh, -c] + args: ["exit 1"] + + - name: delayed-fail + retryStrategy: + limit: 1 + container: + image: alpine:latest + command: [sh, -c] + args: ["sleep 1; exit 1"] diff --git a/test/e2e/expectedfailures/input-artifact-not-optional.yaml b/test/e2e/expectedfailures/input-artifact-not-optional.yaml new file mode 100644 index 000000000000..e1a3615c71ad --- /dev/null +++ b/test/e2e/expectedfailures/input-artifact-not-optional.yaml @@ -0,0 +1,22 @@ +# This example demonstrates the input artifacts not optionals +# from one step to the next. +apiVersion: argoproj.io/v1alpha1 +kind: Workflow +metadata: + generateName: input-artifact-not-optional- +spec: + entrypoint: http-artifact-example + templates: + - name: http-artifact-example + inputs: + artifacts: + - name: kubectl + path: /bin/kubectl + mode: 0755 + optional: false + http: + url: "" + container: + image: debian:9.4 + command: [sh, -c] + args: ["echo NoKubectl"] diff --git a/test/e2e/expectedfailures/output-artifact-not-optional.yaml b/test/e2e/expectedfailures/output-artifact-not-optional.yaml new file mode 100644 index 000000000000..d6fe97da86b6 --- /dev/null +++ b/test/e2e/expectedfailures/output-artifact-not-optional.yaml @@ -0,0 +1,24 @@ +# This example demonstrates the output artifacts not optionals +# from one step to the next. +apiVersion: argoproj.io/v1alpha1 +kind: Workflow +metadata: + generateName: output-artifact-not-optional- +spec: + entrypoint: artifact-example + templates: + - name: artifact-example + steps: + - - name: generate-artifact + template: whalesay + + - name: whalesay + container: + image: docker/whalesay:latest + command: [sh, -c] + args: ["cowsay hello world | tee /tmp/hello_world12.txt"] + outputs: + artifacts: + - name: hello-art + optional: false + path: /tmp/hello_world.txt diff --git a/test/e2e/expectedfailures/pns/pns-output-artifacts.yaml b/test/e2e/expectedfailures/pns/pns-output-artifacts.yaml new file mode 100644 index 000000000000..9680ef096507 --- /dev/null +++ b/test/e2e/expectedfailures/pns/pns-output-artifacts.yaml @@ -0,0 +1,39 @@ +# Workflow specifically designed for testing process namespace sharing with output artifacts +apiVersion: argoproj.io/v1alpha1 +kind: Workflow +metadata: + generateName: pns-output-artifacts- +spec: + entrypoint: pns-output-artifacts + templates: + - name: pns-output-artifacts + archiveLocation: + archiveLogs: true + container: + image: debian:9.2 + command: [sh, -c] + args: [" + echo hello world > /mnt/workdir/foo && + echo stdout && + echo '' && + echo stderr >&2 && + sleep 1 + "] + volumeMounts: + - name: workdir + mountPath: /mnt/workdir + outputs: + artifacts: + - name: etc + path: /etc + - name: mnt + path: /mnt + - name: workdir + path: /mnt/workdir + sidecars: + - name: nginx + image: nginx:latest + + volumes: + - name: workdir + emptyDir: {} diff --git a/test/e2e/expectedfailures/pns/pns-quick-exit-output-art.yaml b/test/e2e/expectedfailures/pns/pns-quick-exit-output-art.yaml new file mode 100644 index 000000000000..286a82846e26 --- /dev/null +++ b/test/e2e/expectedfailures/pns/pns-quick-exit-output-art.yaml @@ -0,0 +1,30 @@ +# Workflow specifically designed for testing process namespace sharing with output artifacts +# This fails because the main container exits before the wait sidecar is able to establish the 
file +# handle of the main container's root filesystem. +apiVersion: argoproj.io/v1alpha1 +kind: Workflow +metadata: + generateName: pns-quick-exit-output-art- +spec: + entrypoint: pns-quick-exit-output-art + templates: + - name: pns-quick-exit-output-art + archiveLocation: + archiveLogs: true + container: + image: debian:9.2 + command: [sh, -x, -c] + args: [" + touch /mnt/workdir/foo + "] + volumeMounts: + - name: workdir + mountPath: /mnt/workdir + outputs: + artifacts: + - name: mnt + path: /mnt + + volumes: + - name: workdir + emptyDir: {} diff --git a/test/e2e/functional/artifact-disable-archive.yaml b/test/e2e/functional/artifact-disable-archive.yaml deleted file mode 100644 index f1f41e1bad39..000000000000 --- a/test/e2e/functional/artifact-disable-archive.yaml +++ /dev/null @@ -1,49 +0,0 @@ -# This tests the disabling of archive, and ability to recursively copy a directory -apiVersion: argoproj.io/v1alpha1 -kind: Workflow -metadata: - generateName: artifact-disable-archive- -spec: - entrypoint: artifact-example - templates: - - name: artifact-example - steps: - - - name: generate-artifact - template: whalesay - - - name: consume-artifact - template: print-message - arguments: - artifacts: - - name: etc - from: "{{steps.generate-artifact.outputs.artifacts.etc}}" - - name: hello-txt - from: "{{steps.generate-artifact.outputs.artifacts.hello-txt}}" - - - name: whalesay - container: - image: docker/whalesay:latest - command: [sh, -c] - args: ["cowsay hello world | tee /tmp/hello_world.txt"] - outputs: - artifacts: - - name: etc - path: /etc - archive: - none: {} - - name: hello-txt - path: /tmp/hello_world.txt - archive: - none: {} - - - name: print-message - inputs: - artifacts: - - name: etc - path: /tmp/etc - - name: hello-txt - path: /tmp/hello.txt - container: - image: alpine:latest - command: [sh, -c] - args: - - cat /tmp/hello.txt && cd /tmp/etc && find . diff --git a/test/e2e/functional/artifact-disable-archive.yaml b/test/e2e/functional/artifact-disable-archive.yaml new file mode 120000 index 000000000000..109a8c619867 --- /dev/null +++ b/test/e2e/functional/artifact-disable-archive.yaml @@ -0,0 +1 @@ +../../../examples/artifact-disable-archive.yaml \ No newline at end of file diff --git a/test/e2e/functional/continue-on-fail.yaml b/test/e2e/functional/continue-on-fail.yaml new file mode 120000 index 000000000000..3bb5bfc75322 --- /dev/null +++ b/test/e2e/functional/continue-on-fail.yaml @@ -0,0 +1 @@ +../../../examples/continue-on-fail.yaml \ No newline at end of file diff --git a/test/e2e/functional/custom_template_variable.yaml b/test/e2e/functional/custom_template_variable.yaml new file mode 100644 index 000000000000..f9ee8fca8df2 --- /dev/null +++ b/test/e2e/functional/custom_template_variable.yaml @@ -0,0 +1,32 @@ +# This template demonstrates the customer variable suppport. 
+apiVersion: argoproj.io/v1alpha1 +kind: Workflow +metadata: + generateName: custom-template-variable- +spec: + entrypoint: hello-hello-hello + + templates: + - name: hello-hello-hello + steps: + - - name: hello1 + template: whalesay + arguments: + parameters: [{name: message, value: "hello1"}] + - - name: hello2a + template: whalesay + arguments: + parameters: [{name: message, value: "hello2a"}] + - name: hello2b + template: whalesay + arguments: + parameters: [{name: message, value: "hello2b"}] + + - name: whalesay + inputs: + parameters: + - name: message + container: + image: docker/whalesay + command: [cowsay] + args: ["{{custom.variable}}"] diff --git a/test/e2e/functional/dag-argument-passing.yaml b/test/e2e/functional/dag-argument-passing.yaml index c1e51a6bb61c..24f5c7aa8c8b 100644 --- a/test/e2e/functional/dag-argument-passing.yaml +++ b/test/e2e/functional/dag-argument-passing.yaml @@ -2,7 +2,7 @@ apiVersion: argoproj.io/v1alpha1 kind: Workflow metadata: - generateName: dag-arg-passing- + generateName: dag-argument-passing- spec: entrypoint: dag-arg-passing templates: @@ -16,7 +16,7 @@ spec: container: image: alpine:3.7 command: [sh, -c, -x] - args: ['echo "{{inputs.parameters.message}}"; cat /tmp/passthrough'] + args: ['sleep 1; echo "{{inputs.parameters.message}}"; cat /tmp/passthrough'] outputs: parameters: - name: hosts diff --git a/test/e2e/functional/git-clone-test.yaml b/test/e2e/functional/git-clone-test.yaml index 044259bf0fcb..056a6c44e8c6 100644 --- a/test/e2e/functional/git-clone-test.yaml +++ b/test/e2e/functional/git-clone-test.yaml @@ -30,7 +30,7 @@ spec: - name: argo-source path: /src git: - repo: https://github.com/argoproj/argo.git + repo: https://github.com/cyrusbiotechnology/argo.git revision: "{{inputs.parameters.revision}}" container: image: golang:1.8 diff --git a/test/e2e/functional/global-outputs-dag.yaml b/test/e2e/functional/global-outputs-dag.yaml index fa7eeb449847..cea147513f7b 100644 --- a/test/e2e/functional/global-outputs-dag.yaml +++ b/test/e2e/functional/global-outputs-dag.yaml @@ -21,7 +21,7 @@ spec: container: image: alpine:3.7 command: [sh, -c] - args: ["echo -n hello world > /tmp/hello_world.txt"] + args: ["sleep 1; echo -n hello world > /tmp/hello_world.txt"] outputs: parameters: # export a global parameter. The parameter will be programatically available in the completed diff --git a/test/e2e/functional/global-outputs-variable.yaml b/test/e2e/functional/global-outputs-variable.yaml index eed27afd1cc0..ca2222e6f61a 100644 --- a/test/e2e/functional/global-outputs-variable.yaml +++ b/test/e2e/functional/global-outputs-variable.yaml @@ -23,7 +23,7 @@ spec: container: image: alpine:3.7 command: [sh, -c] - args: ["echo -n hello world > /tmp/hello_world.txt"] + args: ["sleep 1; echo -n hello world > /tmp/hello_world.txt"] outputs: parameters: - name: hello-param diff --git a/test/e2e/functional/init-container.yaml b/test/e2e/functional/init-container.yaml new file mode 120000 index 000000000000..fe78772b05ed --- /dev/null +++ b/test/e2e/functional/init-container.yaml @@ -0,0 +1 @@ +../../../examples/init-container.yaml \ No newline at end of file diff --git a/test/e2e/functional/input-artifact-optional.yaml b/test/e2e/functional/input-artifact-optional.yaml new file mode 100644 index 000000000000..9b7a8a051b19 --- /dev/null +++ b/test/e2e/functional/input-artifact-optional.yaml @@ -0,0 +1,22 @@ +# This example demonstrates the input artifacts optionals +# from one step to the next. 
+apiVersion: argoproj.io/v1alpha1 +kind: Workflow +metadata: + generateName: input-artifact-optional- +spec: + entrypoint: http-artifact-example + templates: + - name: http-artifact-example + inputs: + artifacts: + - name: kubectl + path: /bin/kubectl + mode: 0755 + optional: true + http: + url: "" + container: + image: debian:9.4 + command: [sh, -c] + args: ["echo NoKubectl"] diff --git a/test/e2e/functional/dag-outputs.yaml b/test/e2e/functional/nested-dag-outputs.yaml similarity index 99% rename from test/e2e/functional/dag-outputs.yaml rename to test/e2e/functional/nested-dag-outputs.yaml index 89ecc41130cc..8cc92c5003da 100644 --- a/test/e2e/functional/dag-outputs.yaml +++ b/test/e2e/functional/nested-dag-outputs.yaml @@ -38,6 +38,7 @@ spec: image: docker/whalesay:latest command: [sh, -c] args: [" + sleep 1; cowsay hello world | tee /tmp/my-output-artifact.txt && echo 'my-output-parameter' > /tmp/my-output-parameter.txt "] diff --git a/test/e2e/functional/output-artifact-optional.yaml b/test/e2e/functional/output-artifact-optional.yaml new file mode 100644 index 000000000000..803289d6ca85 --- /dev/null +++ b/test/e2e/functional/output-artifact-optional.yaml @@ -0,0 +1,24 @@ +# This example demonstrates the output artifacts optionals +# from one step to the next. +apiVersion: argoproj.io/v1alpha1 +kind: Workflow +metadata: + generateName: output-artifact-optional- +spec: + entrypoint: artifact-example + templates: + - name: artifact-example + steps: + - - name: generate-artifact + template: whalesay + + - name: whalesay + container: + image: docker/whalesay:latest + command: [sh, -c] + args: ["sleep 1; cowsay hello world | tee /tmp/hello_world12.txt"] + outputs: + artifacts: + - name: hello-art + optional: true + path: /tmp/hello_world.txt diff --git a/test/e2e/functional/output-input-artifact-optional.yaml b/test/e2e/functional/output-input-artifact-optional.yaml new file mode 100644 index 000000000000..f1519df74d4e --- /dev/null +++ b/test/e2e/functional/output-input-artifact-optional.yaml @@ -0,0 +1,40 @@ +# This example demonstrates the output and input artifacts are optionals +# from one step to the next. 
+apiVersion: argoproj.io/v1alpha1 +kind: Workflow +metadata: + generateName: output-input-artifact-optional- +spec: + entrypoint: artifact-example + templates: + - name: artifact-example + steps: + - - name: generate-artifact + template: whalesay + - - name: consume-artifact + template: print-message + arguments: + artifacts: + - name: message + from: "{{steps.generate-artifact.outputs.artifacts.hello-art}}" + - name: whalesay + container: + image: docker/whalesay:latest + command: [sh, -c] + args: ["sleep 1; cowsay hello world | tee /tmp/hello_world123.txt"] + outputs: + artifacts: + - name: hello-art + optional: true + path: /tmp/hello_world.txt + + - name: print-message + inputs: + artifacts: + - name: message + path: /tmp/message + optional: true + container: + image: alpine:latest + command: [sh, -c] + args: ["echo /tmp/message"] diff --git a/test/e2e/functional/output-param-different-uid.yaml b/test/e2e/functional/output-param-different-uid.yaml new file mode 100644 index 000000000000..dbb7942fc945 --- /dev/null +++ b/test/e2e/functional/output-param-different-uid.yaml @@ -0,0 +1,27 @@ +# Tests PNS ability to capture output artifact when user id is different +apiVersion: argoproj.io/v1alpha1 +kind: Workflow +metadata: + generateName: pns-output-parameter-different-user- +spec: + entrypoint: multi-whalesay + templates: + - name: multi-whalesay + steps: + - - name: whalesay + template: whalesay + withSequence: + count: "10" + + - name: whalesay + container: + image: docker/whalesay:latest + command: [sh, -c] + args: ["sleep 1; cowsay hello world | tee /tmp/hello_world.txt"] + securityContext: + runAsUser: 1234 + outputs: + parameters: + - name: hello-art + valueFrom: + path: /tmp/hello_world.txt \ No newline at end of file diff --git a/test/e2e/functional/pns-output-params.yaml b/test/e2e/functional/pns-output-params.yaml new file mode 100644 index 000000000000..fe0001d38322 --- /dev/null +++ b/test/e2e/functional/pns-output-params.yaml @@ -0,0 +1,71 @@ +# Workflow specifically designed for testing process namespace sharing with output parameters +# This exercises the copy out regular files from volume mounted paths, or base image layer paths, +# including overlaps between the two. 
+apiVersion: argoproj.io/v1alpha1 +kind: Workflow +metadata: + generateName: pns-outputs-params- +spec: + entrypoint: output-parameter + templates: + - name: output-parameter + steps: + - - name: generate-parameter + template: whalesay + - - name: consume-parameter + template: print-message + arguments: + parameters: + - { name: A, value: "{{steps.generate-parameter.outputs.parameters.A}}" } + - { name: B, value: "{{steps.generate-parameter.outputs.parameters.B}}" } + - { name: C, value: "{{steps.generate-parameter.outputs.parameters.C}}" } + - { name: D, value: "{{steps.generate-parameter.outputs.parameters.D}}" } + + - name: whalesay + container: + image: docker/whalesay:latest + command: [sh, -x, -c] + args: [" + sleep 1; + echo -n A > /tmp/A && + echo -n B > /mnt/outer/inner/B && + echo -n C > /tmp/C && + echo -n D > /mnt/outer/D + "] + volumeMounts: + - name: outer + mountPath: /mnt/outer + - name: inner + mountPath: /mnt/outer/inner + outputs: + parameters: + - name: A + valueFrom: + path: /tmp/A + - name: B + valueFrom: + path: /mnt/outer/inner/B + - name: C + valueFrom: + path: /tmp/C + - name: D + valueFrom: + path: /mnt/outer/D + + - name: print-message + inputs: + parameters: + - name: A + - name: B + - name: C + - name: D + container: + image: docker/whalesay:latest + command: [cowsay] + args: ["{{inputs.parameters.A}} {{inputs.parameters.B}} {{inputs.parameters.C}} {{inputs.parameters.D}}"] + + volumes: + - name: outer + emptyDir: {} + - name: inner + emptyDir: {} diff --git a/test/e2e/functional/retry-with-artifacts.yaml b/test/e2e/functional/retry-with-artifacts.yaml index 7aa5dcd37421..4a509d568504 100644 --- a/test/e2e/functional/retry-with-artifacts.yaml +++ b/test/e2e/functional/retry-with-artifacts.yaml @@ -23,7 +23,7 @@ spec: container: image: docker/whalesay:latest command: [sh, -c] - args: ["cowsay hello world | tee /tmp/hello_world.txt"] + args: ["sleep 1; cowsay hello world | tee /tmp/hello_world.txt"] outputs: artifacts: - name: hello-art diff --git a/test/e2e/lintfail/disallow-unknown.yaml b/test/e2e/lintfail/disallow-unknown.yaml new file mode 100644 index 000000000000..4d7c349cbf7c --- /dev/null +++ b/test/e2e/lintfail/disallow-unknown.yaml @@ -0,0 +1,15 @@ +apiVersion: argoproj.io/v1alpha1 +kind: Workflow +metadata: + generateName: disallow-unknown- +spec: + entrypoint: whalesay + templates: + - name: whalesay + container: + image: docker/whalesay:latest + command: + - cowsay + args: + - hello world + someExtraField: foo diff --git a/test/e2e/expectedfailures/invalid-spec.yaml b/test/e2e/lintfail/invalid-spec.yaml similarity index 100% rename from test/e2e/expectedfailures/invalid-spec.yaml rename to test/e2e/lintfail/invalid-spec.yaml diff --git a/test/e2e/expectedfailures/maformed-spec.yaml b/test/e2e/lintfail/malformed-spec.yaml similarity index 100% rename from test/e2e/expectedfailures/maformed-spec.yaml rename to test/e2e/lintfail/malformed-spec.yaml diff --git a/test/e2e/ui/ui-dag-with-params.yaml b/test/e2e/ui/ui-dag-with-params.yaml index a954c0a8bb94..9756cda593e3 100644 --- a/test/e2e/ui/ui-dag-with-params.yaml +++ b/test/e2e/ui/ui-dag-with-params.yaml @@ -3,24 +3,53 @@ kind: Workflow metadata: generateName: ui-dag-with-params- spec: - entrypoint: diamond + entrypoint: pipeline + templates: - - name: diamond - dag: - tasks: - - name: A - template: nested-diamond - arguments: - parameters: [{name: message, value: A}] - - name: nested-diamond + - name: echo inputs: parameters: - name: message + container: + image: alpine:latest + command: [echo, 
"{{inputs.parameters.message}}"] + + - name: subpipeline-a dag: tasks: - - name: A + - name: A1 template: echo - - name: echo - container: - image: alpine:3.7 - command: [echo, "hello"] + arguments: + parameters: [{name: message, value: "Hello World!"}] + - name: A2 + template: echo + arguments: + parameters: [{name: message, value: "Hello World!"}] + + - name: subpipeline-b + dag: + tasks: + - name: B1 + template: echo + arguments: + parameters: [{name: message, value: "Hello World!"}] + - name: B2 + template: echo + dependencies: [B1] + arguments: + parameters: [{name: message, value: "Hello World!"}] + withItems: + - 0 + - 1 + + - name: pipeline + dag: + tasks: + - name: A + template: subpipeline-a + withItems: + - 0 + - 1 + - name: B + dependencies: [A] + template: subpipeline-b diff --git a/test/e2e/ui/ui-nested-steps.yaml b/test/e2e/ui/ui-nested-steps.yaml index aeb03da41e1f..c091c6827a24 100644 --- a/test/e2e/ui/ui-nested-steps.yaml +++ b/test/e2e/ui/ui-nested-steps.yaml @@ -5,6 +5,9 @@ metadata: generateName: ui-nested-steps- spec: entrypoint: ui-nested-steps + volumes: + - name: workdir + emptyDir: {} templates: - name: ui-nested-steps steps: @@ -24,14 +27,17 @@ spec: - name: locate-faces container: image: alpine:latest - command: ["sh", "-c"] + command: [sh, -c] args: - - echo '[1, 2, 3]' > /result.json + - echo '[1, 2, 3]' > /workdir/result.json + volumeMounts: + - name: workdir + mountPath: /workdir outputs: parameters: - name: imagemagick-commands valueFrom: - path: /result.json + path: /workdir/result.json - name: handle-individual-faces steps: diff --git a/test/e2e/wait_test.go b/test/e2e/wait_test.go index e375957e7cd0..661cbeffeea2 100644 --- a/test/e2e/wait_test.go +++ b/test/e2e/wait_test.go @@ -7,7 +7,7 @@ import ( "os" "testing" - "github.com/argoproj/argo/cmd/argo/commands" + "github.com/cyrusbiotechnology/argo/cmd/argo/commands" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/suite" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" diff --git a/test/e2e/workflow_test.go b/test/e2e/workflow_test.go index 4f3292a70f84..56a6a7f5419a 100644 --- a/test/e2e/workflow_test.go +++ b/test/e2e/workflow_test.go @@ -8,7 +8,7 @@ import ( "testing" "time" - "github.com/argoproj/argo/cmd/argo/commands" + "github.com/cyrusbiotechnology/argo/cmd/argo/commands" "github.com/stretchr/testify/suite" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" ) diff --git a/test/test.go b/test/test.go index 7796bdfc0e9d..e4a56fd8345e 100644 --- a/test/test.go +++ b/test/test.go @@ -5,7 +5,7 @@ import ( "path/filepath" "runtime" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" "github.com/ghodss/yaml" "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" ) diff --git a/ui/README.md b/ui/README.md index 1a4cc30425f9..18e7d185fb3f 100644 --- a/ui/README.md +++ b/ui/README.md @@ -1,3 +1,3 @@ # Argo UI -Moved to https://github.com/argoproj/argo-ui +Moved to https://github.com/cyrusbiotechnology/argo-ui diff --git a/util/archive/archive.go b/util/archive/archive.go new file mode 100644 index 000000000000..e10820777fed --- /dev/null +++ b/util/archive/archive.go @@ -0,0 +1,131 @@ +package archive + +import ( + "archive/tar" + "compress/gzip" + "io" + "os" + "path/filepath" + + "github.com/cyrusbiotechnology/argo/errors" + "github.com/cyrusbiotechnology/argo/util" + log "github.com/sirupsen/logrus" +) + +type flusher interface { + Flush() error +} + +// TarGzToWriter tar.gz's the source path to the 
supplied writer +func TarGzToWriter(sourcePath string, w io.Writer) error { + sourcePath, err := filepath.Abs(sourcePath) + if err != nil { + return errors.InternalErrorf("getting absolute path: %v", err) + } + log.Infof("Taring %s", sourcePath) + sourceFi, err := os.Stat(sourcePath) + if err != nil { + if os.IsNotExist(err) { + return errors.New(errors.CodeNotFound, err.Error()) + } + return errors.InternalWrapError(err) + } + if !sourceFi.Mode().IsRegular() && !sourceFi.IsDir() { + return errors.InternalErrorf("%s is not a regular file or directory", sourcePath) + } + if flush, ok := w.(flusher); ok { + defer func() { _ = flush.Flush() }() + } + gzw := gzip.NewWriter(w) + defer util.Close(gzw) + tw := tar.NewWriter(gzw) + defer util.Close(tw) + + if sourceFi.IsDir() { + return tarDir(sourcePath, tw) + } + return tarFile(sourcePath, tw) +} + +func tarDir(sourcePath string, tw *tar.Writer) error { + baseName := filepath.Base(sourcePath) + return filepath.Walk(sourcePath, func(fpath string, info os.FileInfo, err error) error { + if err != nil { + return errors.InternalWrapError(err) + } + // build the name to be used in the archive + nameInArchive, err := filepath.Rel(sourcePath, fpath) + if err != nil { + return errors.InternalWrapError(err) + } + nameInArchive = filepath.Join(baseName, nameInArchive) + log.Infof("writing %s", nameInArchive) + + var header *tar.Header + if (info.Mode() & os.ModeSymlink) != 0 { + linkTarget, err := os.Readlink(fpath) + if err != nil { + return errors.InternalWrapError(err) + } + header, err = tar.FileInfoHeader(info, filepath.ToSlash(linkTarget)) + if err != nil { + return errors.InternalWrapError(err) + } + } else { + header, err = tar.FileInfoHeader(info, info.Name()) + if err != nil { + return errors.InternalWrapError(err) + } + } + header.Name = nameInArchive + + err = tw.WriteHeader(header) + if err != nil { + return errors.InternalWrapError(err) + } + if !info.Mode().IsRegular() { + return nil + } + f, err := os.Open(fpath) + if err != nil { + return errors.InternalWrapError(err) + } + + // copy file data into tar writer + _, err = io.Copy(tw, f) + closeErr := f.Close() + if err != nil { + return err + } + if closeErr != nil { + return closeErr + } + return nil + }) +} + +func tarFile(sourcePath string, tw *tar.Writer) error { + f, err := os.Open(sourcePath) + if err != nil { + return errors.InternalWrapError(err) + } + defer util.Close(f) + info, err := f.Stat() + if err != nil { + return errors.InternalWrapError(err) + } + header, err := tar.FileInfoHeader(info, f.Name()) + if err != nil { + return errors.InternalWrapError(err) + } + header.Name = filepath.Base(sourcePath) + err = tw.WriteHeader(header) + if err != nil { + return errors.InternalWrapError(err) + } + _, err = io.Copy(tw, f) + if err != nil { + return err + } + return nil +} diff --git a/util/archive/archive_test.go b/util/archive/archive_test.go new file mode 100644 index 000000000000..2b4766b01fe0 --- /dev/null +++ b/util/archive/archive_test.go @@ -0,0 +1,60 @@ +package archive + +import ( + "bufio" + "crypto/rand" + "encoding/hex" + "os" + "path/filepath" + "testing" + + log "github.com/sirupsen/logrus" + "github.com/stretchr/testify/assert" +) + +func tempFile(dir, prefix, suffix string) (*os.File, error) { + if dir == "" { + dir = os.TempDir() + } else { + os.MkdirAll(dir, 0700) + } + randBytes := make([]byte, 16) + rand.Read(randBytes) + filePath := filepath.Join(dir, prefix+hex.EncodeToString(randBytes)+suffix) + return os.Create(filePath) +} + +func TestTarDirectory(t 
*testing.T) { + f, err := tempFile(os.TempDir()+"/argo-test", "dir-", ".tgz") + assert.Nil(t, err) + log.Infof("Taring to %s", f.Name()) + w := bufio.NewWriter(f) + + err = TarGzToWriter("../../test/e2e", w) + assert.Nil(t, err) + + err = f.Close() + assert.Nil(t, err) +} + +func TestTarFile(t *testing.T) { + data, err := tempFile(os.TempDir()+"/argo-test", "file-", "") + assert.Nil(t, err) + _, err = data.WriteString("hello world") + assert.Nil(t, err) + data.Close() + + dataTarPath := data.Name() + ".tgz" + f, err := os.Create(dataTarPath) + assert.Nil(t, err) + log.Infof("Taring to %s", f.Name()) + w := bufio.NewWriter(f) + + err = TarGzToWriter(data.Name(), w) + assert.Nil(t, err) + err = os.Remove(data.Name()) + assert.Nil(t, err) + + err = f.Close() + assert.Nil(t, err) +} diff --git a/util/cmd/cmd.go b/util/cmd/cmd.go index b2adc8b108c6..94d72e696cce 100644 --- a/util/cmd/cmd.go +++ b/util/cmd/cmd.go @@ -9,7 +9,7 @@ import ( "os/user" "strings" - "github.com/argoproj/argo" + "github.com/cyrusbiotechnology/argo" log "github.com/sirupsen/logrus" "github.com/spf13/cobra" ) diff --git a/util/file/fileutil.go b/util/file/fileutil.go new file mode 100644 index 000000000000..37f6a56179c2 --- /dev/null +++ b/util/file/fileutil.go @@ -0,0 +1,87 @@ +package file + +import ( + "archive/tar" + "bytes" + "compress/gzip" + "encoding/base64" + "io" + "io/ioutil" + "strings" + + log "github.com/sirupsen/logrus" +) + +type TarReader interface { + Next() (*tar.Header, error) +} + +// ExistsInTar return true if file or directory exists in tar +func ExistsInTar(sourcePath string, tarReader TarReader) bool { + sourcePath = strings.Trim(sourcePath, "/") + for { + hdr, err := tarReader.Next() + if err == io.EOF { + break + } + if err != nil { + return false + } + if hdr.FileInfo().IsDir() && strings.Contains(sourcePath, strings.Trim(hdr.Name, "/")) { + return true + } + if strings.Contains(sourcePath, hdr.Name) && hdr.Size > 0 { + return true + } + } + return false +} + +//Close the file +func close(f io.Closer) { + err := f.Close() + if err != nil { + log.Warnf("Failed to close the file/writer/reader. 
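// Illustrative sketch, not part of this patch: the new archive and file helpers can be
// combined to tar.gz a directory and then check for a path inside the resulting archive.
// It assumes it is run from the repository root; the temp output path is an assumption.
package main

import (
	"archive/tar"
	"compress/gzip"
	"fmt"
	"os"

	"github.com/cyrusbiotechnology/argo/util/archive"
	"github.com/cyrusbiotechnology/argo/util/file"
)

func main() {
	out, err := os.Create("/tmp/e2e-fixtures.tgz")
	if err != nil {
		panic(err)
	}
	// TarGzToWriter accepts either a regular file or a directory.
	if err := archive.TarGzToWriter("test/e2e", out); err != nil {
		panic(err)
	}
	if err := out.Close(); err != nil {
		panic(err)
	}

	// Re-open the archive and look for a known fixture with file.ExistsInTar.
	in, err := os.Open("/tmp/e2e-fixtures.tgz")
	if err != nil {
		panic(err)
	}
	defer in.Close()
	gzr, err := gzip.NewReader(in)
	if err != nil {
		panic(err)
	}
	defer gzr.Close()
	// Should print true: the archive contains the entry e2e/functional/git-clone-test.yaml.
	fmt.Println(file.ExistsInTar("/tmp/e2e/functional/git-clone-test.yaml", tar.NewReader(gzr)))
}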
%v", err) + } +} + +// CompressEncodeString will return the compressed string with base64 encoded +func CompressEncodeString(content string) string { + return base64.StdEncoding.EncodeToString(CompressContent([]byte(content))) +} + +// DecodeDecompressString will return decode and decompress the +func DecodeDecompressString(content string) (string, error) { + + buf, err := base64.StdEncoding.DecodeString(content) + if err != nil { + return "", err + } + dBuf, err := DecompressContent(buf) + if err != nil { + return "", err + } + return string(dBuf), nil +} + +// CompressContent will compress the byte array using zip writer +func CompressContent(content []byte) []byte { + var buf bytes.Buffer + zipWriter := gzip.NewWriter(&buf) + + _, err := zipWriter.Write(content) + if err != nil { + log.Warnf("Error in compressing: %v", err) + } + close(zipWriter) + return buf.Bytes() +} + +// DecompressContent will return the uncompressed content +func DecompressContent(content []byte) ([]byte, error) { + + buf := bytes.NewReader(content) + gZipReader, _ := gzip.NewReader(buf) + defer close(gZipReader) + return ioutil.ReadAll(gZipReader) +} diff --git a/util/file/fileutil_test.go b/util/file/fileutil_test.go new file mode 100644 index 000000000000..f5b1fd1bae82 --- /dev/null +++ b/util/file/fileutil_test.go @@ -0,0 +1,121 @@ +package file_test + +import ( + "archive/tar" + "bytes" + "github.com/cyrusbiotechnology/argo/util/file" + "github.com/stretchr/testify/assert" + "os" + "testing" +) + +// TestResubmitWorkflowWithOnExit ensures we do not carry over the onExit node even if successful +func TestCompressContentString(t *testing.T) { + content := "{\"pod-limits-rrdm8-591645159\":{\"id\":\"pod-limits-rrdm8-591645159\",\"name\":\"pod-limits-rrdm8[0]." + + "run-pod(0:0)\",\"displayName\":\"run-pod(0:0)\",\"type\":\"Pod\",\"templateName\":\"run-pod\",\"phase\":" + + "\"Succeeded\",\"boundaryID\":\"pod-limits-rrdm8\",\"startedAt\":\"2019-03-07T19:14:50Z\",\"finishedAt\":" + + "\"2019-03-07T19:14:55Z\"}}" + + compString := file.CompressEncodeString(content) + + resultString, _ := file.DecodeDecompressString(compString) + + assert.Equal(t, content, resultString) +} + +func TestExistsInTar(t *testing.T) { + type fakeFile struct { + name, body string + isDir bool + } + + newTarReader := func(t *testing.T, files []fakeFile) *tar.Reader { + var buf bytes.Buffer + writer := tar.NewWriter(&buf) + for _, f := range files { + mode := os.FileMode(0600) + if f.isDir { + mode |= os.ModeDir + } + hdr := tar.Header{Name: f.name, Mode: int64(mode), Size: int64(len(f.body))} + err := writer.WriteHeader(&hdr) + assert.Nil(t, err) + _, err = writer.Write([]byte(f.body)) + assert.Nil(t, err) + } + err := writer.Close() + assert.Nil(t, err) + return tar.NewReader(&buf) + } + + type TestCase struct { + sourcePath string + expected bool + files []fakeFile + } + + tests := []TestCase{ + { + sourcePath: "/root.txt", expected: true, + files: []fakeFile{{name: "root.txt", body: "file in the root"}}, + }, + { + sourcePath: "/tmp/file/in/subfolder.txt", expected: true, + files: []fakeFile{{name: "subfolder.txt", body: "a file in a subfolder"}}, + }, + { + sourcePath: "/root", expected: true, + files: []fakeFile{ + {name: "root/", isDir: true}, + {name: "root/a.txt", body: "a"}, + {name: "root/b.txt", body: "b"}, + }, + }, + { + sourcePath: "/tmp/subfolder", expected: true, + files: []fakeFile{ + {name: "subfolder/", isDir: true}, + {name: "subfolder/a.txt", body: "a"}, + {name: "subfolder/b.txt", body: "b"}, + }, + }, + { + // should 
an empty tar return true?? + sourcePath: "/tmp/empty", expected: true, + files: []fakeFile{ + {name: "empty/", isDir: true}, + }, + }, + { + sourcePath: "/tmp/folder/that", expected: false, + files: []fakeFile{ + {name: "this/", isDir: true}, + {name: "this/a.txt", body: "a"}, + {name: "this/b.txt", body: "b"}, + }, + }, + { + sourcePath: "/empty.txt", expected: false, + files: []fakeFile{ + // fails because empty.txt is empty + {name: "empty.txt", body: ""}, + }, + }, + { + sourcePath: "/tmp/empty.txt", expected: false, + files: []fakeFile{ + // fails because empty.txt is empty + {name: "empty.txt", body: ""}, + }, + }, + } + for _, tc := range tests { + tc := tc + t.Run("source path "+tc.sourcePath, func(t *testing.T) { + t.Parallel() + tarReader := newTarReader(t, tc.files) + actual := file.ExistsInTar(tc.sourcePath, tarReader) + assert.Equalf(t, tc.expected, actual, "sourcePath %s not found", tc.sourcePath) + }) + } +} diff --git a/util/retry/retry.go b/util/retry/retry.go index d968c4b974c8..792650d2f1d0 100644 --- a/util/retry/retry.go +++ b/util/retry/retry.go @@ -6,7 +6,7 @@ import ( "strings" "time" - argoerrs "github.com/argoproj/argo/errors" + argoerrs "github.com/cyrusbiotechnology/argo/errors" apierr "k8s.io/apimachinery/pkg/api/errors" "k8s.io/apimachinery/pkg/util/wait" ) diff --git a/workflow/artifacts/artifactory/artifactory.go b/workflow/artifacts/artifactory/artifactory.go index e5c626f632cb..4e9bb7f9368a 100644 --- a/workflow/artifacts/artifactory/artifactory.go +++ b/workflow/artifacts/artifactory/artifactory.go @@ -5,8 +5,8 @@ import ( "net/http" "os" - "github.com/argoproj/argo/errors" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/errors" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" ) type ArtifactoryArtifactDriver struct { diff --git a/workflow/artifacts/artifactory/artifactory_test.go b/workflow/artifacts/artifactory/artifactory_test.go index d2f6198dbca3..3b241403a1ba 100644 --- a/workflow/artifacts/artifactory/artifactory_test.go +++ b/workflow/artifacts/artifactory/artifactory_test.go @@ -6,8 +6,8 @@ import ( "testing" "time" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" - art "github.com/argoproj/argo/workflow/artifacts/artifactory" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + art "github.com/cyrusbiotechnology/argo/workflow/artifacts/artifactory" "github.com/stretchr/testify/assert" ) diff --git a/workflow/artifacts/artifacts.go b/workflow/artifacts/artifacts.go index f8ba81e7a6d9..8de8f89c6c43 100644 --- a/workflow/artifacts/artifacts.go +++ b/workflow/artifacts/artifacts.go @@ -1,7 +1,7 @@ package executor import ( - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" ) // ArtifactDriver is the interface for loading and saving of artifacts diff --git a/workflow/artifacts/gcs/gcs.go b/workflow/artifacts/gcs/gcs.go new file mode 100644 index 000000000000..d0404e70ddef --- /dev/null +++ b/workflow/artifacts/gcs/gcs.go @@ -0,0 +1,130 @@ +package gcs + +import ( + "cloud.google.com/go/storage" + "context" + "errors" + argoErrors "github.com/cyrusbiotechnology/argo/errors" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/util" + log "github.com/sirupsen/logrus" + "google.golang.org/api/option" + "io" + "os" +) + +type GCSArtifactDriver struct { + Context context.Context + CredsJSONData []byte +} + +func 
(gcsDriver *GCSArtifactDriver) newGcsClient() (client *storage.Client, err error) { + gcsDriver.Context = context.Background() + + client, err = storage.NewClient(gcsDriver.Context, option.WithCredentialsJSON(gcsDriver.CredsJSONData)) + if err != nil { + return nil, argoErrors.InternalWrapError(err) + } + return + +} + +func (gcsDriver *GCSArtifactDriver) saveToFile(inputArtifact *wfv1.Artifact, filePath string) error { + + log.Infof("Loading from GCS (gs://%s/%s) to %s", + inputArtifact.GCS.Bucket, inputArtifact.GCS.Key, filePath) + + stat, err := os.Stat(filePath) + if err != nil && !os.IsNotExist(err) { + return err + } + + if stat != nil && stat.IsDir() { + return errors.New("output artifact path is a directory") + } + + outputFile, err := os.OpenFile(filePath, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0600) + if err != nil { + return err + } + + gcsClient, err := gcsDriver.newGcsClient() + if err != nil { + return err + } + + bucket := gcsClient.Bucket(inputArtifact.GCS.Bucket) + object := bucket.Object(inputArtifact.GCS.Key) + + r, err := object.NewReader(gcsDriver.Context) + if err != nil { + return err + } + defer util.Close(r) + + _, err = io.Copy(outputFile, r) + if err != nil { + return err + } + + err = outputFile.Close() + if err != nil { + return err + } + return nil +} + +func (gcsDriver *GCSArtifactDriver) saveToGCS(outputArtifact *wfv1.Artifact, filePath string) error { + + log.Infof("Saving to GCS (gs://%s/%s)", + outputArtifact.GCS.Bucket, outputArtifact.GCS.Key) + + gcsClient, err := gcsDriver.newGcsClient() + if err != nil { + return err + } + + inputFile, err := os.Open(filePath) + if err != nil { + return err + } + + stat, err := os.Stat(filePath) + if err != nil { + return err + } + + if stat.IsDir() { + return errors.New("only single files can be saved to GCS, not entire directories") + } + + defer util.Close(inputFile) + + bucket := gcsClient.Bucket(outputArtifact.GCS.Bucket) + object := bucket.Object(outputArtifact.GCS.Key) + + w := object.NewWriter(gcsDriver.Context) + _, err = io.Copy(w, inputFile) + if err != nil { + return err + } + + err = w.Close() + if err != nil { + return err + } + return nil + +} + +func (gcsDriver *GCSArtifactDriver) Load(inputArtifact *wfv1.Artifact, path string) error { + + err := gcsDriver.saveToFile(inputArtifact, path) + return err +} + +func (gcsDriver *GCSArtifactDriver) Save(path string, outputArtifact *wfv1.Artifact) error { + + err := gcsDriver.saveToGCS(outputArtifact, path) + return err +} diff --git a/workflow/artifacts/git/git.go b/workflow/artifacts/git/git.go index 3f1cf10ba6a7..e5403dea355f 100644 --- a/workflow/artifacts/git/git.go +++ b/workflow/artifacts/git/git.go @@ -15,15 +15,16 @@ import ( "gopkg.in/src-d/go-git.v4/plumbing/transport/http" ssh2 "gopkg.in/src-d/go-git.v4/plumbing/transport/ssh" - "github.com/argoproj/argo/errors" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/errors" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" ) // GitArtifactDriver is the artifact driver for a git repo type GitArtifactDriver struct { - Username string - Password string - SSHPrivateKey string + Username string + Password string + SSHPrivateKey string + InsecureIgnoreHostKey bool } // Load download artifacts from an git URL @@ -34,7 +35,9 @@ func (g *GitArtifactDriver) Load(inputArtifact *wfv1.Artifact, path string) erro return errors.InternalWrapError(err) } auth := &ssh2.PublicKeys{User: "git", Signer: signer} - auth.HostKeyCallback = 
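// Illustrative sketch, not part of this patch: exercising the new GCS artifact driver
// directly. The credentials path, bucket and key are assumptions, and the lowercase
// gcs/bucket/key YAML keys are assumed to mirror the S3 artifact convention.
package main

import (
	"io/ioutil"

	"github.com/ghodss/yaml"

	wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1"
	"github.com/cyrusbiotechnology/argo/workflow/artifacts/gcs"
)

func main() {
	creds, err := ioutil.ReadFile("/var/secrets/google/key.json") // hypothetical mount path
	if err != nil {
		panic(err)
	}
	var art wfv1.Artifact
	manifest := []byte(`
name: result
gcs:
  bucket: my-bucket
  key: outputs/result.txt
`)
	if err := yaml.Unmarshal(manifest, &art); err != nil {
		panic(err)
	}
	driver := gcs.GCSArtifactDriver{CredsJSONData: creds}
	// Downloads gs://my-bucket/outputs/result.txt to the local path.
	if err := driver.Load(&art, "/tmp/result.txt"); err != nil {
		panic(err)
	}
}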
ssh.InsecureIgnoreHostKey() + if g.InsecureIgnoreHostKey { + auth.HostKeyCallback = ssh.InsecureIgnoreHostKey() + } return gitClone(path, inputArtifact, auth, g.SSHPrivateKey) } if g.Username != "" || g.Password != "" { @@ -49,7 +52,7 @@ func (g *GitArtifactDriver) Save(path string, outputArtifact *wfv1.Artifact) err return errors.Errorf(errors.CodeBadRequest, "Git output artifacts unsupported") } -func writePrivateKey(key string) error { +func writePrivateKey(key string, insecureIgnoreHostKey bool) error { usr, err := user.Current() if err != nil { return errors.InternalWrapError(err) @@ -60,12 +63,14 @@ func writePrivateKey(key string) error { return errors.InternalWrapError(err) } - sshConfig := `Host * + if insecureIgnoreHostKey { + sshConfig := `Host * StrictHostKeyChecking no UserKnownHostsFile /dev/null` - err = ioutil.WriteFile(fmt.Sprintf("%s/config", sshDir), []byte(sshConfig), 0644) - if err != nil { - return errors.InternalWrapError(err) + err = ioutil.WriteFile(fmt.Sprintf("%s/config", sshDir), []byte(sshConfig), 0644) + if err != nil { + return errors.InternalWrapError(err) + } } err = ioutil.WriteFile(fmt.Sprintf("%s/id_rsa", sshDir), []byte(key), 0600) if err != nil { @@ -101,7 +106,7 @@ func gitClone(path string, inputArtifact *wfv1.Artifact, auth transport.AuthMeth } log.Errorf("`%s` stdout:\n%s", cmd.Args, string(output)) if privateKey != "" { - err := writePrivateKey(privateKey) + err := writePrivateKey(privateKey, inputArtifact.Git.InsecureIgnoreHostKey) if err != nil { return errors.InternalWrapError(err) } diff --git a/workflow/artifacts/hdfs/hdfs.go b/workflow/artifacts/hdfs/hdfs.go new file mode 100644 index 000000000000..8d31c8971841 --- /dev/null +++ b/workflow/artifacts/hdfs/hdfs.go @@ -0,0 +1,217 @@ +package hdfs + +import ( + "fmt" + "os" + "path/filepath" + + "github.com/argoproj/pkg/file" + "gopkg.in/jcmturner/gokrb5.v5/credentials" + "gopkg.in/jcmturner/gokrb5.v5/keytab" + + "github.com/cyrusbiotechnology/argo/errors" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/util" + "github.com/cyrusbiotechnology/argo/workflow/common" +) + +// ArtifactDriver is a driver for HDFS +type ArtifactDriver struct { + Addresses []string // comma-separated name nodes + Path string + Force bool + HDFSUser string + KrbOptions *KrbOptions +} + +// KrbOptions is options for Kerberos +type KrbOptions struct { + CCacheOptions *CCacheOptions + KeytabOptions *KeytabOptions + Config string + ServicePrincipalName string +} + +// CCacheOptions is options for ccache +type CCacheOptions struct { + CCache credentials.CCache +} + +// KeytabOptions is options for keytab +type KeytabOptions struct { + Keytab keytab.Keytab + Username string + Realm string +} + +// ValidateArtifact validates HDFS artifact +func ValidateArtifact(errPrefix string, art *wfv1.HDFSArtifact) error { + if len(art.Addresses) == 0 { + return errors.Errorf(errors.CodeBadRequest, "%s.addresses is required", errPrefix) + } + if art.Path == "" { + return errors.Errorf(errors.CodeBadRequest, "%s.path is required", errPrefix) + } + if !filepath.IsAbs(art.Path) { + return errors.Errorf(errors.CodeBadRequest, "%s.path must be a absolute file path", errPrefix) + } + + hasKrbCCache := art.KrbCCacheSecret != nil + hasKrbKeytab := art.KrbKeytabSecret != nil + + if art.HDFSUser == "" && !hasKrbCCache && !hasKrbKeytab { + return errors.Errorf(errors.CodeBadRequest, "either %s.hdfsUser, %s.krbCCacheSecret or %s.krbKeytabSecret is required", errPrefix, errPrefix, errPrefix) + 
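// Illustrative sketch, not part of this patch: with this change the GitArtifactDriver only
// disables host key checking when InsecureIgnoreHostKey is explicitly set. The key path,
// repository URL and checkout path below are assumptions.
package main

import (
	"io/ioutil"

	wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1"
	"github.com/cyrusbiotechnology/argo/workflow/artifacts/git"
)

func main() {
	key, err := ioutil.ReadFile("/etc/git-secret/ssh-private-key")
	if err != nil {
		panic(err)
	}
	driver := git.GitArtifactDriver{
		SSHPrivateKey: string(key),
		// Leave this false in production so the remote host key is still verified.
		InsecureIgnoreHostKey: true,
	}
	art := wfv1.Artifact{}
	art.Git = &wfv1.GitArtifact{
		Repo:     "git@github.com:cyrusbiotechnology/argo.git",
		Revision: "master",
	}
	if err := driver.Load(&art, "/src"); err != nil {
		panic(err)
	}
}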
} + if hasKrbKeytab && (art.KrbServicePrincipalName == "" || art.KrbConfigConfigMap == nil || art.KrbUsername == "" || art.KrbRealm == "") { + return errors.Errorf(errors.CodeBadRequest, "%s.krbServicePrincipalName, %s.krbConfigConfigMap, %s.krbUsername and %s.krbRealm are required with %s.krbKeytabSecret", errPrefix, errPrefix, errPrefix, errPrefix, errPrefix) + } + if hasKrbCCache && (art.KrbServicePrincipalName == "" || art.KrbConfigConfigMap == nil) { + return errors.Errorf(errors.CodeBadRequest, "%s.krbServicePrincipalName and %s.krbConfigConfigMap are required with %s.krbCCacheSecret", errPrefix, errPrefix, errPrefix) + } + return nil +} + +// CreateDriver constructs ArtifactDriver +func CreateDriver(ci common.ResourceInterface, art *wfv1.HDFSArtifact) (*ArtifactDriver, error) { + var krbConfig string + var krbOptions *KrbOptions + var err error + + namespace := ci.GetNamespace() + + if art.KrbConfigConfigMap != nil && art.KrbConfigConfigMap.Name != "" { + krbConfig, err = ci.GetConfigMapKey(namespace, art.KrbConfigConfigMap.Name, art.KrbConfigConfigMap.Key) + if err != nil { + return nil, err + } + } + if art.KrbCCacheSecret != nil && art.KrbCCacheSecret.Name != "" { + bytes, err := ci.GetSecretFromVolMount(art.KrbCCacheSecret.Name, art.KrbCCacheSecret.Key) + if err != nil { + return nil, err + } + ccache, err := credentials.ParseCCache(bytes) + if err != nil { + return nil, err + } + krbOptions = &KrbOptions{ + CCacheOptions: &CCacheOptions{ + CCache: ccache, + }, + Config: krbConfig, + ServicePrincipalName: art.KrbServicePrincipalName, + } + } + if art.KrbKeytabSecret != nil && art.KrbKeytabSecret.Name != "" { + bytes, err := ci.GetSecretFromVolMount(art.KrbKeytabSecret.Name, art.KrbKeytabSecret.Key) + if err != nil { + return nil, err + } + ktb, err := keytab.Parse(bytes) + if err != nil { + return nil, err + } + krbOptions = &KrbOptions{ + KeytabOptions: &KeytabOptions{ + Keytab: ktb, + Username: art.KrbUsername, + Realm: art.KrbRealm, + }, + Config: krbConfig, + ServicePrincipalName: art.KrbServicePrincipalName, + } + } + + driver := ArtifactDriver{ + Addresses: art.Addresses, + Path: art.Path, + Force: art.Force, + HDFSUser: art.HDFSUser, + KrbOptions: krbOptions, + } + return &driver, nil +} + +// Load downloads artifacts from HDFS compliant storage +func (driver *ArtifactDriver) Load(inputArtifact *wfv1.Artifact, path string) error { + hdfscli, err := createHDFSClient(driver.Addresses, driver.HDFSUser, driver.KrbOptions) + if err != nil { + return err + } + defer util.Close(hdfscli) + + srcStat, err := hdfscli.Stat(driver.Path) + if err != nil { + return err + } + if srcStat.IsDir() { + return fmt.Errorf("HDFS artifact does not suppot directory copy") + } + + _, err = os.Stat(path) + if err != nil && !os.IsNotExist(err) { + return err + } + + if os.IsNotExist(err) { + dirPath := filepath.Dir(driver.Path) + if dirPath != "." 
&& dirPath != "/" { + // Follow umask for the permission + err = os.MkdirAll(dirPath, 0777) + if err != nil { + return err + } + } + } else { + if driver.Force { + err = os.Remove(path) + if err != nil && !os.IsNotExist(err) { + return err + } + } + } + + return hdfscli.CopyToLocal(driver.Path, path) +} + +// Save saves an artifact to HDFS compliant storage +func (driver *ArtifactDriver) Save(path string, outputArtifact *wfv1.Artifact) error { + hdfscli, err := createHDFSClient(driver.Addresses, driver.HDFSUser, driver.KrbOptions) + if err != nil { + return err + } + defer util.Close(hdfscli) + + isDir, err := file.IsDirectory(path) + if err != nil { + return err + } + if isDir { + return fmt.Errorf("HDFS artifact does not suppot directory copy") + } + + _, err = hdfscli.Stat(driver.Path) + if err != nil && !os.IsNotExist(err) { + return err + } + + if os.IsNotExist(err) { + dirPath := filepath.Dir(driver.Path) + if dirPath != "." && dirPath != "/" { + // Follow umask for the permission + err = hdfscli.MkdirAll(dirPath, 0777) + if err != nil { + return err + } + } + } else { + if driver.Force { + err = hdfscli.Remove(driver.Path) + if err != nil && !os.IsNotExist(err) { + return err + } + } + } + + return hdfscli.CopyToRemote(path, driver.Path) +} diff --git a/workflow/artifacts/hdfs/util.go b/workflow/artifacts/hdfs/util.go new file mode 100644 index 000000000000..3af330ae012e --- /dev/null +++ b/workflow/artifacts/hdfs/util.go @@ -0,0 +1,53 @@ +package hdfs + +import ( + "fmt" + + "github.com/colinmarc/hdfs" + krb "gopkg.in/jcmturner/gokrb5.v5/client" + "gopkg.in/jcmturner/gokrb5.v5/config" +) + +func createHDFSClient(addresses []string, user string, krbOptions *KrbOptions) (*hdfs.Client, error) { + options := hdfs.ClientOptions{ + Addresses: addresses, + } + + if krbOptions != nil { + krbClient, err := createKrbClient(krbOptions) + if err != nil { + return nil, err + } + options.KerberosClient = krbClient + options.KerberosServicePrincipleName = krbOptions.ServicePrincipalName + } else { + options.User = user + } + + return hdfs.NewClient(options) +} + +func createKrbClient(krbOptions *KrbOptions) (*krb.Client, error) { + krbConfig, err := config.NewConfigFromString(krbOptions.Config) + if err != nil { + return nil, err + } + + if krbOptions.CCacheOptions != nil { + client, err := krb.NewClientFromCCache(krbOptions.CCacheOptions.CCache) + if err != nil { + return nil, err + } + return client.WithConfig(krbConfig), nil + } else if krbOptions.KeytabOptions != nil { + client := krb.NewClientWithKeytab(krbOptions.KeytabOptions.Username, krbOptions.KeytabOptions.Realm, krbOptions.KeytabOptions.Keytab) + client = *client.WithConfig(krbConfig) + err = client.Login() + if err != nil { + return nil, err + } + return &client, nil + } + + return nil, fmt.Errorf("Failed to get a Kerberos client") +} diff --git a/workflow/artifacts/http/http.go b/workflow/artifacts/http/http.go index 10f12ff4a666..d2e814c77b9c 100644 --- a/workflow/artifacts/http/http.go +++ b/workflow/artifacts/http/http.go @@ -1,9 +1,9 @@ package http import ( - "github.com/argoproj/argo/errors" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" - "github.com/argoproj/argo/workflow/common" + "github.com/cyrusbiotechnology/argo/errors" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/workflow/common" ) // HTTPArtifactDriver is the artifact driver for a HTTP URL diff --git a/workflow/artifacts/raw/raw.go b/workflow/artifacts/raw/raw.go index 
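// Illustrative sketch, not part of this patch: the HDFS driver can also be constructed
// directly when Kerberos is not needed. The name node addresses, user and paths are
// assumptions for the example.
package main

import (
	wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1"
	"github.com/cyrusbiotechnology/argo/workflow/artifacts/hdfs"
)

func main() {
	driver := hdfs.ArtifactDriver{
		Addresses: []string{"namenode-0:8020", "namenode-1:8020"},
		Path:      "/argo/artifacts/hello_world.txt",
		HDFSUser:  "argo",
		Force:     true, // overwrite an existing file on save/load
	}
	var art wfv1.Artifact
	// Upload a local file to the HDFS path configured on the driver.
	if err := driver.Save("/tmp/hello_world.txt", &art); err != nil {
		panic(err)
	}
	// Download it back to a different local path.
	if err := driver.Load(&art, "/tmp/hello_world_copy.txt"); err != nil {
		panic(err)
	}
}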
f40d5e985732..cab3b1571f2b 100644 --- a/workflow/artifacts/raw/raw.go +++ b/workflow/artifacts/raw/raw.go @@ -1,8 +1,8 @@ package raw import ( - "github.com/argoproj/argo/errors" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/errors" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" "os" ) diff --git a/workflow/artifacts/raw/raw_test.go b/workflow/artifacts/raw/raw_test.go index a846d7633124..710b44a0f3b9 100644 --- a/workflow/artifacts/raw/raw_test.go +++ b/workflow/artifacts/raw/raw_test.go @@ -1,8 +1,8 @@ package raw_test import ( - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" - "github.com/argoproj/argo/workflow/artifacts/raw" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/workflow/artifacts/raw" "github.com/stretchr/testify/assert" "io/ioutil" "os" diff --git a/workflow/artifacts/s3/s3.go b/workflow/artifacts/s3/s3.go index cbbe325d9f3d..2a03f2bc18c1 100644 --- a/workflow/artifacts/s3/s3.go +++ b/workflow/artifacts/s3/s3.go @@ -1,13 +1,14 @@ package s3 import ( - "github.com/argoproj/pkg/file" - argos3 "github.com/argoproj/pkg/s3" - log "github.com/sirupsen/logrus" + "time" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" + log "github.com/sirupsen/logrus" "k8s.io/apimachinery/pkg/util/wait" - "time" + + "github.com/argoproj/pkg/file" + argos3 "github.com/argoproj/pkg/s3" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" ) // S3ArtifactDriver is a driver for AWS S3 @@ -33,9 +34,9 @@ func (s3Driver *S3ArtifactDriver) newS3Client() (argos3.S3Client, error) { // Load downloads artifacts from S3 compliant storage func (s3Driver *S3ArtifactDriver) Load(inputArtifact *wfv1.Artifact, path string) error { - err := wait.ExponentialBackoff(wait.Backoff{Duration: time.Millisecond * 10, Factor: 2.0, Steps: 5, Jitter: 0.1}, + err := wait.ExponentialBackoff(wait.Backoff{Duration: time.Second * 2, Factor: 2.0, Steps: 5, Jitter: 0.1}, func() (bool, error) { - + log.Infof("S3 Load path: %s, key: %s", path, inputArtifact.S3.Key) s3cli, err := s3Driver.newS3Client() if err != nil { log.Warnf("Failed to create new S3 client: %v", err) @@ -46,7 +47,8 @@ func (s3Driver *S3ArtifactDriver) Load(inputArtifact *wfv1.Artifact, path string return true, nil } if !argos3.IsS3ErrCode(origErr, "NoSuchKey") { - return false, origErr + log.Warnf("Failed get file: %v", origErr) + return false, nil } // If we get here, the error was a NoSuchKey. 
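// Illustrative sketch, not part of this patch: the S3 driver now retries with a backoff that
// starts at 2s instead of 10ms. The same k8s.io wait helper can wrap any flaky call; the
// retry condition below is a placeholder.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	backoff := wait.Backoff{Duration: time.Second * 2, Factor: 2.0, Steps: 5, Jitter: 0.1}
	attempt := 0
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		// Returning (false, nil) means "retry"; (true, nil) means "done";
		// a non-nil error aborts the retries immediately.
		if attempt < 3 {
			return false, nil
		}
		return true, nil
	})
	fmt.Println(attempt, err)
}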
The key might be a s3 "directory" isDir, err := s3cli.IsDirectory(inputArtifact.S3.Bucket, inputArtifact.S3.Key) @@ -60,6 +62,7 @@ func (s3Driver *S3ArtifactDriver) Load(inputArtifact *wfv1.Artifact, path string } if err = s3cli.GetDirectory(inputArtifact.S3.Bucket, inputArtifact.S3.Key, path); err != nil { + log.Warnf("Failed get directory: %v", err) return false, nil } return true, nil @@ -70,8 +73,9 @@ func (s3Driver *S3ArtifactDriver) Load(inputArtifact *wfv1.Artifact, path string // Save saves an artifact to S3 compliant storage func (s3Driver *S3ArtifactDriver) Save(path string, outputArtifact *wfv1.Artifact) error { - err := wait.ExponentialBackoff(wait.Backoff{Duration: time.Millisecond * 10, Factor: 2.0, Steps: 5, Jitter: 0.1}, + err := wait.ExponentialBackoff(wait.Backoff{Duration: time.Second * 2, Factor: 2.0, Steps: 5, Jitter: 0.1}, func() (bool, error) { + log.Infof("S3 Save path: %s, key: %s", path, outputArtifact.S3.Key) s3cli, err := s3Driver.newS3Client() if err != nil { log.Warnf("Failed to create new S3 client: %v", err) @@ -84,11 +88,14 @@ func (s3Driver *S3ArtifactDriver) Save(path string, outputArtifact *wfv1.Artifac } if isDir { if err = s3cli.PutDirectory(outputArtifact.S3.Bucket, outputArtifact.S3.Key, path); err != nil { + log.Warnf("Failed to put directory: %v", err) + return false, nil + } + } else { + if err = s3cli.PutFile(outputArtifact.S3.Bucket, outputArtifact.S3.Key, path); err != nil { + log.Warnf("Failed to put file: %v", err) return false, nil } - } - if err = s3cli.PutFile(outputArtifact.S3.Bucket, outputArtifact.S3.Key, path); err != nil { - return false, nil } return true, nil }) diff --git a/workflow/common/common.go b/workflow/common/common.go index 432339e10cbe..da6ac92b33d2 100644 --- a/workflow/common/common.go +++ b/workflow/common/common.go @@ -1,9 +1,10 @@ package common import ( + "os" "time" - "github.com/argoproj/argo/pkg/apis/workflow" + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow" ) const ( @@ -29,13 +30,14 @@ const ( // PodMetadataAnnotationsPath is the file path containing pod metadata annotations. Examined by executor PodMetadataAnnotationsPath = PodMetadataMountPath + "/" + PodMetadataAnnotationsVolumePath - // DockerLibVolumeName is the volume name for the /var/lib/docker host path volume - DockerLibVolumeName = "docker-lib" - // DockerLibHostPath is the host directory path containing docker runtime state - DockerLibHostPath = "/var/lib/docker" // DockerSockVolumeName is the volume name for the /var/run/docker.sock host path volume DockerSockVolumeName = "docker-sock" + // GoogleSecretVolumeName is the volume name for the /var/secrets/google volume + GoogleSecretVolumeName = "google-cloud-key" + // EvnVarGoogleSecret contains the name of the google credentials file used fro GCS access + EnvVarGoogleSecret = "GOOGLE_CREDENTIALS_SECRET" + // AnnotationKeyNodeName is the pod metadata annotation key containing the workflow node name AnnotationKeyNodeName = workflow.FullName + "/node-name" // AnnotationKeyNodeMessage is the pod metadata annotation key the executor will use to @@ -49,6 +51,10 @@ const ( // set by the controller and obeyed by the executor. For example, the controller will use this annotation to // signal the executors of daemoned containers that it should terminate. 
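// Illustrative sketch, not part of this patch: GoogleSecretName (declared further below in
// this diff) is read once from the GOOGLE_CREDENTIALS_SECRET environment variable, so the
// secret holding GCS credentials is chosen per deployment. The value below is an example.
package main

import (
	"fmt"
	"os"
)

func main() {
	// In a real deployment this would come from the pod spec's env section.
	os.Setenv("GOOGLE_CREDENTIALS_SECRET", "google-cloud-key")
	if name := os.Getenv("GOOGLE_CREDENTIALS_SECRET"); name != "" {
		fmt.Printf("GCS credentials secret: %q\n", name)
	} else {
		fmt.Println("no GCS credentials secret configured")
	}
}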
AnnotationKeyExecutionControl = workflow.FullName + "/execution" + //AnnotationKeyErrors is the annotation key containing extended fatal error information + AnnotationKeyErrors = workflow.FullName + "/errors" + //AnnotationKeyWarnings is the annotation key containing extended + AnnotationKeyWarnings = workflow.FullName + "/warnings" // LabelKeyControllerInstanceID is the label the controller will carry forward to workflows/pod labels // for the purposes of workflow segregation @@ -65,10 +71,11 @@ const ( // Each artifact will be named according to its input name (e.g: /argo/inputs/artifacts/CODE) ExecutorArtifactBaseDir = "/argo/inputs/artifacts" - // InitContainerMainFilesystemDir is a path made available to the init container such that the init container - // can access the same volume mounts used in the main container. This is used for the purposes of artifact loading - // (when there is overlapping paths between artifacts and volume mounts) - InitContainerMainFilesystemDir = "/mainctrfs" + // ExecutorMainFilesystemDir is a path made available to the init/wait containers such that they + // can access the same volume mounts used in the main container. This is used for the purposes + // of artifact loading (when there is overlapping paths between artifacts and volume mounts), + // as well as artifact collection by the wait container. + ExecutorMainFilesystemDir = "/mainctrfs" // ExecutorStagingEmptyDir is the path of the emptydir which is used as a staging area to transfer a file between init/main container for script/resource templates ExecutorStagingEmptyDir = "/argo/staging" @@ -81,7 +88,6 @@ const ( // EnvVarPodName contains the name of the pod (currently unused) EnvVarPodName = "ARGO_POD_NAME" - // EnvVarContainerRuntimeExecutor contains the name of the container runtime executor to use, empty is equal to "docker" EnvVarContainerRuntimeExecutor = "ARGO_CONTAINER_RUNTIME_EXECUTOR" // EnvVarDownwardAPINodeIP is the envvar used to get the `status.hostIP` @@ -100,6 +106,9 @@ const ( // ContainerRuntimeExecutorK8sAPI to use the Kubernetes API server as container runtime executor ContainerRuntimeExecutorK8sAPI = "k8sapi" + // ContainerRuntimeExecutorPNS indicates to use process namespace sharing as the container runtime executor + ContainerRuntimeExecutorPNS = "pns" + // Variables that are added to the scope during template execution and can be referenced using {{}} syntax // GlobalVarWorkflowName is a global workflow variable referencing the workflow's metadata.name field @@ -114,6 +123,17 @@ const ( GlobalVarWorkflowCreationTimestamp = "workflow.creationTimestamp" // LocalVarPodName is a step level variable that references the name of the pod LocalVarPodName = "pod.name" + + KubeConfigDefaultMountPath = "/kube/config" + KubeConfigDefaultVolumeName = "kubeconfig" + SecretVolMountPath = "/argo/secret" +) + +// GlobalVarWorkflowRootTags is a list of root tags in workflow which could be used for variable reference +var GlobalVarValidWorkflowVariablePrefix = []string{"item.", "steps.", "inputs.", "outputs.", "pod.", "workflow.", "tasks."} + +var ( + GoogleSecretName = os.Getenv(EnvVarGoogleSecret) ) // ExecutionControl contains execution control parameters for executor to decide how to execute the container @@ -123,3 +143,10 @@ type ExecutionControl struct { // used to support workflow or steps/dag level timeouts. 
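// Illustrative sketch, not part of this patch: the GlobalVarValidWorkflowVariablePrefix list
// introduced above can be used to accept known {{...}} references while letting unknown tags
// pass through. The hasValidPrefix helper is hypothetical and kept local for self-containment.
package main

import (
	"fmt"
	"strings"
)

// Mirrors the prefix list declared in workflow/common/common.go above.
var validPrefixes = []string{"item.", "steps.", "inputs.", "outputs.", "pod.", "workflow.", "tasks."}

func hasValidPrefix(ref string) bool {
	for _, p := range validPrefixes {
		if strings.HasPrefix(ref, p) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(hasValidPrefix("inputs.parameters.message")) // true
	fmt.Println(hasValidPrefix("custom.variable"))           // false; compare the custom-template-variable fixture earlier in this diff
}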
Deadline *time.Time `json:"deadline,omitempty"` } + +type ResourceInterface interface { + GetNamespace() string + GetSecrets(namespace, name, key string) ([]byte, error) + GetSecretFromVolMount(name, key string) ([]byte, error) + GetConfigMapKey(namespace, name, key string) (string, error) +} diff --git a/workflow/common/util.go b/workflow/common/util.go index 88f3431ecd0e..14b04d3a8289 100644 --- a/workflow/common/util.go +++ b/workflow/common/util.go @@ -5,16 +5,16 @@ import ( "encoding/json" "fmt" "io" + "net/http" "os/exec" "regexp" "strconv" "strings" "time" - "github.com/argoproj/argo/errors" - "github.com/argoproj/argo/pkg/apis/workflow" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow" "github.com/ghodss/yaml" + "github.com/gorilla/websocket" log "github.com/sirupsen/logrus" "github.com/valyala/fasttemplate" apiv1 "k8s.io/api/core/v1" @@ -23,11 +23,15 @@ import ( "k8s.io/client-go/kubernetes" "k8s.io/client-go/rest" "k8s.io/client-go/tools/remotecommand" + + "github.com/cyrusbiotechnology/argo/errors" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/util" ) // FindOverlappingVolume looks an artifact path, checks if it overlaps with any // user specified volumeMounts in the template, and returns the deepest volumeMount -// (if any). +// (if any). A return value of nil indicates the path is not under any volumeMount. func FindOverlappingVolume(tmpl *wfv1.Template, path string) *apiv1.VolumeMount { if tmpl.Container == nil { return nil @@ -66,6 +70,126 @@ func KillPodContainer(restConfig *rest.Config, namespace string, pod string, con return nil } +// ContainerLogStream returns an io.ReadCloser for a container's log stream using the websocket +// interface. This was implemented in the hopes that we could selectively choose stdout from stderr, +// but due to https://github.com/kubernetes/kubernetes/issues/28167, it is not possible to discern +// stdout from stderr using the K8s API server, so this function is unused, instead preferring the +// pod logs interface from client-go. It's left as a reference for when issue #28167 is eventually +// resolved. +func ContainerLogStream(config *rest.Config, namespace string, pod string, container string) (io.ReadCloser, error) { + clientset, err := kubernetes.NewForConfig(config) + if err != nil { + return nil, errors.InternalWrapError(err) + } + logRequest := clientset.CoreV1().RESTClient().Get(). + Resource("pods"). + Name(pod). + Namespace(namespace). + SubResource("log"). 
+ Param("container", container) + u := logRequest.URL() + switch u.Scheme { + case "https": + u.Scheme = "wss" + case "http": + u.Scheme = "ws" + default: + return nil, errors.Errorf("Malformed URL %s", u.String()) + } + + log.Info(u.String()) + wsrc := websocketReadCloser{ + &bytes.Buffer{}, + } + + wrappedRoundTripper, err := roundTripperFromConfig(config, wsrc.WebsocketCallback) + if err != nil { + return nil, errors.InternalWrapError(err) + } + + // Send the request and let the callback do its work + req := &http.Request{ + Method: http.MethodGet, + URL: u, + } + _, err = wrappedRoundTripper.RoundTrip(req) + if err != nil && !websocket.IsCloseError(err, websocket.CloseNormalClosure) { + return nil, errors.InternalWrapError(err) + } + return &wsrc, nil +} + +type RoundTripCallback func(conn *websocket.Conn, resp *http.Response, err error) error + +type WebsocketRoundTripper struct { + Dialer *websocket.Dialer + Do RoundTripCallback +} + +func (d *WebsocketRoundTripper) RoundTrip(r *http.Request) (*http.Response, error) { + conn, resp, err := d.Dialer.Dial(r.URL.String(), r.Header) + if err == nil { + defer util.Close(conn) + } + return resp, d.Do(conn, resp, err) +} + +func (w *websocketReadCloser) WebsocketCallback(ws *websocket.Conn, resp *http.Response, err error) error { + if err != nil { + if resp != nil && resp.StatusCode != http.StatusOK { + buf := new(bytes.Buffer) + _, _ = buf.ReadFrom(resp.Body) + return errors.InternalErrorf("Can't connect to log endpoint (%d): %s", resp.StatusCode, buf.String()) + } + return errors.InternalErrorf("Can't connect to log endpoint: %s", err.Error()) + } + + for { + _, body, err := ws.ReadMessage() + if len(body) > 0 { + //log.Debugf("%d: %s", msgType, string(body)) + _, writeErr := w.Write(body) + if writeErr != nil { + return writeErr + } + } + if err != nil { + if err == io.EOF { + log.Infof("websocket closed: %v", err) + return nil + } + log.Warnf("websocket error: %v", err) + return err + } + } +} + +func roundTripperFromConfig(config *rest.Config, callback RoundTripCallback) (http.RoundTripper, error) { + tlsConfig, err := rest.TLSConfigFor(config) + if err != nil { + return nil, err + } + // Create a roundtripper which will pass in the final underlying websocket connection to a callback + wsrt := &WebsocketRoundTripper{ + Do: callback, + Dialer: &websocket.Dialer{ + Proxy: http.ProxyFromEnvironment, + TLSClientConfig: tlsConfig, + }, + } + // Make sure we inherit all relevant security headers + return rest.HTTPWrappersForConfig(config, wsrt) +} + +type websocketReadCloser struct { + *bytes.Buffer +} + +func (w *websocketReadCloser) Close() error { + //return w.conn.Close() + return nil +} + // ExecPodContainer runs a command in a container in a pod and returns the remotecommand.Executor func ExecPodContainer(restConfig *rest.Config, namespace string, pod string, container string, stdout bool, stderr bool, command ...string) (remotecommand.Executor, error) { clientset, err := kubernetes.NewForConfig(restConfig) @@ -146,17 +270,23 @@ func ProcessArgs(tmpl *wfv1.Template, args wfv1.Arguments, globalParams, localPa newInputArtifacts[i] = inArt continue } - // artifact must be supplied argArt := args.GetArtifactByName(inArt.Name) - if argArt == nil { - return nil, errors.Errorf(errors.CodeBadRequest, "inputs.artifacts.%s was not supplied", inArt.Name) + if !inArt.Optional { + // artifact must be supplied + if argArt == nil { + return nil, errors.Errorf(errors.CodeBadRequest, "inputs.artifacts.%s was not supplied", inArt.Name) + } + if 
!argArt.HasLocation() && !validateOnly { + return nil, errors.Errorf(errors.CodeBadRequest, "inputs.artifacts.%s missing location information", inArt.Name) + } } - if !argArt.HasLocation() && !validateOnly { - return nil, errors.Errorf(errors.CodeBadRequest, "inputs.artifacts.%s missing location information", inArt.Name) + if argArt != nil { + argArt.Path = inArt.Path + argArt.Mode = inArt.Mode + newInputArtifacts[i] = *argArt + } else { + newInputArtifacts[i] = inArt } - argArt.Path = inArt.Path - argArt.Mode = inArt.Mode - newInputArtifacts[i] = *argArt } tmpl.Inputs.Artifacts = newInputArtifacts @@ -195,6 +325,22 @@ func substituteParams(tmpl *wfv1.Template, globalParams, localParams map[string] } replaceMap["inputs.parameters."+inParam.Name] = *inParam.Value } + for _, inArt := range globalReplacedTmpl.Inputs.Artifacts { + if inArt.Path != "" { + replaceMap["inputs.artifacts."+inArt.Name+".path"] = inArt.Path + } + } + for _, outArt := range globalReplacedTmpl.Outputs.Artifacts { + if outArt.Path != "" { + replaceMap["outputs.artifacts."+outArt.Name+".path"] = outArt.Path + } + } + for _, param := range globalReplacedTmpl.Outputs.Parameters { + if param.ValueFrom != nil && param.ValueFrom.Path != "" { + replaceMap["outputs.parameters."+param.Name+".path"] = param.ValueFrom.Path + } + } + fstTmpl = fasttemplate.New(globalReplacedTmplStr, "{{", "}}") s, err := Replace(fstTmpl, replaceMap, true) if err != nil { @@ -243,10 +389,12 @@ func RunCommand(name string, arg ...string) error { log.Info(cmdStr) _, err := cmd.Output() if err != nil { - exErr := err.(*exec.ExitError) - errOutput := string(exErr.Stderr) - log.Errorf("`%s` failed: %s", cmdStr, errOutput) - return errors.InternalError(strings.TrimSpace(errOutput)) + if exErr, ok := err.(*exec.ExitError); ok { + errOutput := string(exErr.Stderr) + log.Errorf("`%s` failed: %s", cmdStr, errOutput) + return errors.InternalError(strings.TrimSpace(errOutput)) + } + return errors.InternalWrapError(err) } return nil } diff --git a/workflow/controller/config.go b/workflow/controller/config.go index 52ff479a66a6..123a51967abe 100644 --- a/workflow/controller/config.go +++ b/workflow/controller/config.go @@ -3,6 +3,7 @@ package controller import ( "context" "fmt" + "io/ioutil" apiv1 "k8s.io/api/core/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" @@ -11,10 +12,10 @@ import ( "k8s.io/apimachinery/pkg/watch" "k8s.io/client-go/tools/cache" - "github.com/argoproj/argo/errors" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" - "github.com/argoproj/argo/workflow/common" - "github.com/argoproj/argo/workflow/metrics" + "github.com/cyrusbiotechnology/argo/errors" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/workflow/common" + "github.com/cyrusbiotechnology/argo/workflow/metrics" "github.com/ghodss/yaml" log "github.com/sirupsen/logrus" ) @@ -22,14 +23,23 @@ import ( // WorkflowControllerConfig contain the configuration settings for the workflow controller type WorkflowControllerConfig struct { // ExecutorImage is the image name of the executor to use when running pods + // DEPRECATED: use --executor-image flag to workflow-controller instead ExecutorImage string `json:"executorImage,omitempty"` // ExecutorImagePullPolicy is the imagePullPolicy of the executor to use when running pods + // DEPRECATED: use `executor.imagePullPolicy` in configmap instead ExecutorImagePullPolicy string `json:"executorImagePullPolicy,omitempty"` + // Executor holds container customizations for the 
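// Illustrative sketch, not part of this patch: templates can now reference artifact paths via
// {{inputs.artifacts.NAME.path}} and {{outputs.artifacts.NAME.path}}. The substitution goes
// through the replaceMap built above; a direct fasttemplate call shows the effect. The command
// string and paths are examples.
package main

import (
	"fmt"

	"github.com/valyala/fasttemplate"
)

func main() {
	cmd := "process --in {{inputs.artifacts.data.path}} --out {{outputs.artifacts.result.path}}"
	replaceMap := map[string]interface{}{
		"inputs.artifacts.data.path":    "/argo/inputs/artifacts/data",
		"outputs.artifacts.result.path": "/tmp/result.txt",
	}
	tmpl := fasttemplate.New(cmd, "{{", "}}")
	fmt.Println(tmpl.ExecuteString(replaceMap))
}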
executor to use when running pods + Executor *apiv1.Container `json:"executor,omitempty"` + // ExecutorResources specifies the resource requirements that will be used for the executor sidecar + // DEPRECATED: use `executor.resources` in configmap instead ExecutorResources *apiv1.ResourceRequirements `json:"executorResources,omitempty"` + // KubeConfig specifies a kube config file for the wait & init containers + KubeConfig *KubeConfig `json:"kubeConfig,omitempty"` + // ContainerRuntimeExecutor specifies the container runtime interface to use, default is docker ContainerRuntimeExecutor string `json:"containerRuntimeExecutor,omitempty"` @@ -62,6 +72,21 @@ type WorkflowControllerConfig struct { Parallelism int `json:"parallelism,omitempty"` } +// KubeConfig is used for wait & init sidecar containers to communicate with a k8s apiserver by a outofcluster method, +// it is used when the workflow controller is in a different cluster with the workflow workloads +type KubeConfig struct { + // SecretName of the kubeconfig secret + // may not be empty if kuebConfig specified + SecretName string `json:"secretName"` + // SecretKey of the kubeconfig in the secret + // may not be empty if kubeConfig specified + SecretKey string `json:"secretKey"` + // VolumeName of kubeconfig, default to 'kubeconfig' + VolumeName string `json:"volumeName,omitempty"` + // MountPath of the kubeconfig secret, default to '/kube/config' + MountPath string `json:"mountPath,omitempty"` +} + // ArtifactRepository represents a artifact repository in which a controller will store its artifacts type ArtifactRepository struct { // ArchiveLogs enables log archiving @@ -70,6 +95,9 @@ type ArtifactRepository struct { S3 *S3ArtifactRepository `json:"s3,omitempty"` // Artifactory stores artifacts to JFrog Artifactory Artifactory *ArtifactoryArtifactRepository `json:"artifactory,omitempty"` + // HDFS stores artifacts in HDFS + HDFS *HDFSArtifactRepository `json:"hdfs,omitempty"` + GCS *GCSArtifactRepository `json:"gcs,omitempty"` } // S3ArtifactRepository defines the controller configuration for an S3 artifact repository @@ -91,28 +119,66 @@ type ArtifactoryArtifactRepository struct { RepoURL string `json:"repoURL,omitempty"` } -// ResyncConfig reloads the controller config from the configmap +// HDFSArtifactRepository defines the controller configuration for an HDFS artifact repository +type HDFSArtifactRepository struct { + wfv1.HDFSConfig `json:",inline"` + + // PathFormat is defines the format of path to store a file. 
Can reference workflow variables + PathFormat string `json:"pathFormat,omitempty"` + + // Force copies a file forcibly even if it exists (default: false) + Force bool `json:"force,omitempty"` +} + +// GCSArtifactRepository defines the controller configuration for a GCS artifact repository +type GCSArtifactRepository struct { + wfv1.GCSBucket `json:",inline"` +} + +// ResyncConfig reloads the controller config from the configmap or configFile func (wfc *WorkflowController) ResyncConfig() error { - cmClient := wfc.kubeclientset.CoreV1().ConfigMaps(wfc.namespace) - cm, err := cmClient.Get(wfc.configMap, metav1.GetOptions{}) - if err != nil { - return errors.InternalWrapError(err) + + if wfc.configFile != "" { + log.Infof("Loading configfile from %s", wfc.configFile) + return wfc.updateConfigFromFile(wfc.configFile) + } else { + cmClient := wfc.kubeclientset.CoreV1().ConfigMaps(wfc.namespace) + cm, err := cmClient.Get(wfc.configMap, metav1.GetOptions{}) + if err != nil { + return errors.InternalWrapError(err) + } + return wfc.updateConfigFromConfigMap(cm) } - return wfc.updateConfig(cm) } -func (wfc *WorkflowController) updateConfig(cm *apiv1.ConfigMap) error { - configStr, ok := cm.Data[common.WorkflowControllerConfigMapKey] +func (wfc *WorkflowController) updateConfigFromConfigMap(cm *apiv1.ConfigMap) error { + configString, ok := cm.Data[common.WorkflowControllerConfigMapKey] if !ok { log.Warnf("ConfigMap '%s' does not have key '%s'", wfc.configMap, common.WorkflowControllerConfigMapKey) return nil } + + return wfc.updateConfig(configString) +} + +func (wfc *WorkflowController) updateConfigFromFile(filePath string) error { + fileData, err := ioutil.ReadFile(filePath) + if err != nil { + log.Errorf("Error reading config file %s", filePath) + return err + } + return wfc.updateConfig(string(fileData)) + +} + +func (wfc *WorkflowController) updateConfig(configString string) error { + var config WorkflowControllerConfig - err := yaml.Unmarshal([]byte(configStr), &config) + err := yaml.Unmarshal([]byte(configString), &config) if err != nil { return errors.InternalWrapError(err) } - log.Printf("workflow controller configuration from %s:\n%s", wfc.configMap, configStr) + log.Printf("workflow controller configuration from %s:\n%s", wfc.configMap, configString) if wfc.cliExecutorImage == "" && config.ExecutorImage == "" { return errors.Errorf(errors.CodeBadRequest, "ConfigMap '%s' does not have executorImage", wfc.configMap) } @@ -131,13 +197,13 @@ func (wfc *WorkflowController) executorImage() string { // executorImagePullPolicy returns the imagePullPolicy to use for the workflow executor func (wfc *WorkflowController) executorImagePullPolicy() apiv1.PullPolicy { - var policy string if wfc.cliExecutorImagePullPolicy != "" { - policy = wfc.cliExecutorImagePullPolicy + return apiv1.PullPolicy(wfc.cliExecutorImagePullPolicy) + } else if wfc.Config.Executor != nil && wfc.Config.Executor.ImagePullPolicy != "" { + return wfc.Config.Executor.ImagePullPolicy } else { - policy = wfc.Config.ExecutorImagePullPolicy + return apiv1.PullPolicy(wfc.Config.ExecutorImagePullPolicy) } - return apiv1.PullPolicy(policy) } func (wfc *WorkflowController) watchControllerConfigMap(ctx context.Context) (cache.Controller, error) { @@ -150,7 +216,7 @@ func (wfc *WorkflowController) watchControllerConfigMap(ctx context.Context) (ca AddFunc: func(obj interface{}) { if cm, ok := obj.(*apiv1.ConfigMap); ok { log.Infof("Detected ConfigMap update. 
Updating the controller config.") - err := wfc.updateConfig(cm) + err := wfc.updateConfigFromConfigMap(cm) if err != nil { log.Errorf("Update of config failed due to: %v", err) } @@ -164,7 +230,7 @@ func (wfc *WorkflowController) watchControllerConfigMap(ctx context.Context) (ca } if newCm, ok := new.(*apiv1.ConfigMap); ok { log.Infof("Detected ConfigMap update. Updating the controller config.") - err := wfc.updateConfig(newCm) + err := wfc.updateConfigFromConfigMap(newCm) if err != nil { log.Errorf("Update of config failed due to: %v", err) } diff --git a/workflow/controller/controller.go b/workflow/controller/controller.go index c58c8d3117b1..1cffd9563085 100644 --- a/workflow/controller/controller.go +++ b/workflow/controller/controller.go @@ -22,12 +22,12 @@ import ( "k8s.io/client-go/tools/cache" "k8s.io/client-go/util/workqueue" - "github.com/argoproj/argo" - wfclientset "github.com/argoproj/argo/pkg/client/clientset/versioned" - "github.com/argoproj/argo/workflow/common" - "github.com/argoproj/argo/workflow/metrics" - "github.com/argoproj/argo/workflow/ttlcontroller" - "github.com/argoproj/argo/workflow/util" + "github.com/cyrusbiotechnology/argo" + wfclientset "github.com/cyrusbiotechnology/argo/pkg/client/clientset/versioned" + "github.com/cyrusbiotechnology/argo/workflow/common" + "github.com/cyrusbiotechnology/argo/workflow/metrics" + "github.com/cyrusbiotechnology/argo/workflow/ttlcontroller" + "github.com/cyrusbiotechnology/argo/workflow/util" ) // WorkflowController is the controller for workflow resources @@ -36,6 +36,8 @@ type WorkflowController struct { namespace string // configMap is the name of the config map in which to derive configuration of the controller from configMap string + // configFile is the path to a configuration file + configFile string // Config is the workflow controller's configuration Config WorkflowControllerConfig @@ -74,12 +76,15 @@ func NewWorkflowController( executorImage, executorImagePullPolicy, configMap string, + configFile string, ) *WorkflowController { + wfc := WorkflowController{ restConfig: restConfig, kubeclientset: kubeclientset, wfclientset: wfclientset, configMap: configMap, + configFile: configFile, namespace: namespace, cliExecutorImage: executorImage, cliExecutorImagePullPolicy: executorImagePullPolicy, @@ -130,11 +135,16 @@ func (wfc *WorkflowController) Run(ctx context.Context, wfWorkers, podWorkers in log.Infof("Workflow Controller (version: %s) starting", argo.GetVersion()) log.Infof("Workers: workflow: %d, pod: %d", wfWorkers, podWorkers) - log.Info("Watch Workflow controller config map updates") - _, err := wfc.watchControllerConfigMap(ctx) - if err != nil { - log.Errorf("Failed to register watch for controller config map: %v", err) - return + + if wfc.configFile != "" { + log.Info("A config file was specified. 
Ignoring the k8s configmap resource") + } else { + log.Info("Watch Workflow controller config map updates") + _, err := wfc.watchControllerConfigMap(ctx) + if err != nil { + log.Errorf("Failed to register watch for controller config map: %v", err) + return + } } wfc.wfInformer = util.NewWorkflowInformer(wfc.restConfig, wfc.Config.Namespace, workflowResyncPeriod, wfc.tweakWorkflowlist) @@ -243,6 +253,16 @@ func (wfc *WorkflowController) processNextItem() bool { } woc := newWorkflowOperationCtx(wf, wfc) + + // Decompress the node if it is compressed + err = util.DecompressWorkflow(woc.wf) + if err != nil { + woc.log.Warnf("workflow decompression failed: %v", err) + woc.markWorkflowFailed(fmt.Sprintf("workflow decompression failed: %s", err.Error())) + woc.persistUpdates() + wfc.throttler.Remove(key) + return true + } woc.operate() if woc.wf.Status.Completed() { wfc.throttler.Remove(key) diff --git a/workflow/controller/controller_test.go b/workflow/controller/controller_test.go index 4144fb10f9f2..0ab3e28d8941 100644 --- a/workflow/controller/controller_test.go +++ b/workflow/controller/controller_test.go @@ -7,8 +7,8 @@ import ( "io/ioutil" "testing" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" - fakewfclientset "github.com/argoproj/argo/pkg/client/clientset/versioned/fake" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + fakewfclientset "github.com/cyrusbiotechnology/argo/pkg/client/clientset/versioned/fake" "github.com/ghodss/yaml" "github.com/stretchr/testify/assert" apiv1 "k8s.io/api/core/v1" diff --git a/workflow/controller/dag.go b/workflow/controller/dag.go index dcd9c263209a..99ef6e10d977 100644 --- a/workflow/controller/dag.go +++ b/workflow/controller/dag.go @@ -5,9 +5,9 @@ import ( "fmt" "strings" - "github.com/argoproj/argo/errors" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" - "github.com/argoproj/argo/workflow/common" + "github.com/cyrusbiotechnology/argo/errors" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/workflow/common" "github.com/valyala/fasttemplate" ) @@ -79,7 +79,9 @@ func (d *dagContext) assessDAGPhase(targetTasks []string, nodes map[string]wfv1. unsuccessfulPhase = node.Phase } if node.Type == wfv1.NodeTypeRetry { - if hasMoreRetries(&node, d.wf) { + if node.Successful() { + retriesExhausted = false + } else if hasMoreRetries(&node, d.wf) { retriesExhausted = false } } @@ -106,6 +108,10 @@ func (d *dagContext) assessDAGPhase(targetTasks []string, nodes map[string]wfv1. } func hasMoreRetries(node *wfv1.NodeStatus, wf *wfv1.Workflow) bool { + if node.Phase == wfv1.NodeSucceeded { + return false + } + if len(node.Children) == 0 { return true } @@ -126,7 +132,7 @@ func (woc *wfOperationCtx) executeDAG(nodeName string, tmpl *wfv1.Template, boun } defer func() { if node != nil && woc.wf.Status.Nodes[node.ID].Completed() { - _ = woc.killDeamonedChildren(node.ID) + _ = woc.killDaemonedChildren(node.ID) } }() @@ -227,7 +233,7 @@ func (woc *wfOperationCtx) executeDAGTask(dagCtx *dagContext, taskName string) { depNode := dagCtx.getTaskNode(depName) if depNode != nil { if depNode.Completed() { - if !depNode.Successful() { + if !depNode.Successful() && !dagCtx.getTask(depName).ContinuesOn(depNode.Phase) { dependenciesSuccessful = false } continue @@ -251,12 +257,21 @@ func (woc *wfOperationCtx) executeDAGTask(dagCtx *dagContext, taskName string) { // All our dependencies were satisfied and successful. 
It's our turn to run + taskGroupNode := woc.getNodeByName(nodeName) + if taskGroupNode != nil && taskGroupNode.Type != wfv1.NodeTypeTaskGroup { + taskGroupNode = nil + } // connectDependencies is a helper to connect our dependencies to current task as children connectDependencies := func(taskNodeName string) { - if len(task.Dependencies) == 0 { + if len(task.Dependencies) == 0 || taskGroupNode != nil { // if we had no dependencies, then we are a root task, and we should connect the // boundary node as our parent - woc.addChildNode(dagCtx.boundaryName, taskNodeName) + if taskGroupNode == nil { + woc.addChildNode(dagCtx.boundaryName, taskNodeName) + } else { + woc.addChildNode(taskGroupNode.Name, taskNodeName) + } + } else { // Otherwise, add all outbound nodes of our dependencies as parents to this node for _, depName := range task.Dependencies { @@ -287,6 +302,16 @@ func (woc *wfOperationCtx) executeDAGTask(dagCtx *dagContext, taskName string) { return } + // If the DAG task has withParam or withSequence then we need to create a virtual node of type TaskGroup. + For example, if we had task A with withItems of ['foo', 'bar'] which expanded to ['A(0:foo)', 'A(1:bar)'], we still + need to create a node for A. + if len(task.WithItems) > 0 || task.WithParam != "" || task.WithSequence != nil { + if taskGroupNode == nil { + connectDependencies(nodeName) + taskGroupNode = woc.initializeNode(nodeName, wfv1.NodeTypeTaskGroup, task.Template, dagCtx.boundaryID, wfv1.NodeRunning, "") + } + } + for _, t := range expandedTasks { node = dagCtx.getTaskNode(t.Name) taskNodeName := dagCtx.taskNodeName(t.Name) @@ -311,12 +336,8 @@ func (woc *wfOperationCtx) executeDAGTask(dagCtx *dagContext, taskName string) { _, _ = woc.executeTemplate(t.Template, t.Arguments, taskNodeName, dagCtx.boundaryID) } - // If we expanded the task, we still need to create the task entry for the non-expanded node, - // since dependant tasks will look to it, when deciding when to execute. For example, if we had - // task A with withItems of ['foo', 'bar'] which expanded to ['A(0:foo)', 'A(1:bar)'], we still - // need to create a node for A, after the withItems have completed. - if len(task.WithItems) > 0 || task.WithParam != "" || task.WithSequence != nil { - nodeStatus := wfv1.NodeSucceeded + if taskGroupNode != nil { + groupPhase := wfv1.NodeSucceeded for _, t := range expandedTasks { // Add the child relationship from our dependency's outbound nodes to this node.
node := dagCtx.getTaskNode(t.Name) @@ -324,17 +345,10 @@ func (woc *wfOperationCtx) executeDAGTask(dagCtx *dagContext, taskName string) { return } if !node.Successful() { - nodeStatus = node.Phase + groupPhase = node.Phase } } - woc.initializeNode(nodeName, wfv1.NodeTypeTaskGroup, task.Template, dagCtx.boundaryID, nodeStatus, "") - if len(expandedTasks) > 0 { - for _, t := range expandedTasks { - woc.addChildNode(dagCtx.taskNodeName(t.Name), nodeName) - } - } else { - connectDependencies(nodeName) - } + woc.markNodePhase(taskGroupNode.Name, groupPhase) } } @@ -366,6 +380,13 @@ func (woc *wfOperationCtx) resolveDependencyReferences(dagCtx *dagContext, task } // Perform replacement + // Replace woc.volumes + err := woc.substituteParamsInVolumes(scope.replaceMap()) + if err != nil { + return nil, err + } + + // Replace task's parameters taskBytes, err := json.Marshal(task) if err != nil { return nil, errors.InternalWrapError(err) diff --git a/workflow/controller/dag_test.go b/workflow/controller/dag_test.go index 3bb18102e8f7..677244629d60 100644 --- a/workflow/controller/dag_test.go +++ b/workflow/controller/dag_test.go @@ -5,8 +5,8 @@ import ( "github.com/stretchr/testify/assert" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" - "github.com/argoproj/argo/test" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/test" ) // TestDagXfail verifies a DAG can fail properly diff --git a/workflow/controller/exec_control.go b/workflow/controller/exec_control.go index 3f332d36f485..9991f22882d8 100644 --- a/workflow/controller/exec_control.go +++ b/workflow/controller/exec_control.go @@ -3,19 +3,23 @@ package controller import ( "encoding/json" "fmt" + "sync" "time" apiv1 "k8s.io/api/core/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "github.com/argoproj/argo/errors" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" - "github.com/argoproj/argo/workflow/common" + "github.com/cyrusbiotechnology/argo/errors" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/workflow/common" ) // applyExecutionControl will ensure a pod's execution control annotation is up-to-date // kills any pending pods when workflow has reached it's deadline -func (woc *wfOperationCtx) applyExecutionControl(pod *apiv1.Pod) error { +func (woc *wfOperationCtx) applyExecutionControl(pod *apiv1.Pod, wfNodesLock *sync.RWMutex) error { + if pod == nil { + return nil + } switch pod.Status.Phase { case apiv1.PodSucceeded, apiv1.PodFailed: // Skip any pod which are already completed @@ -27,6 +31,8 @@ func (woc *wfOperationCtx) applyExecutionControl(pod *apiv1.Pod) error { woc.log.Infof("Deleting Pending pod %s/%s which has exceeded workflow deadline %s", pod.Namespace, pod.Name, woc.workflowDeadline) err := woc.controller.kubeclientset.CoreV1().Pods(pod.Namespace).Delete(pod.Name, &metav1.DeleteOptions{}) if err == nil { + wfNodesLock.Lock() + defer wfNodesLock.Unlock() node := woc.wf.Status.Nodes[pod.Name] var message string if woc.workflowDeadline.IsZero() { @@ -60,13 +66,19 @@ func (woc *wfOperationCtx) applyExecutionControl(pod *apiv1.Pod) error { return nil } } + if podExecCtl.Deadline != nil && podExecCtl.Deadline.IsZero() { + // If the pod has already been explicitly signaled to terminate, then do nothing. + // This can happen when daemon steps are terminated. + woc.log.Infof("Skipping sync of execution control of pod %s. 
pod has been signaled to terminate", pod.Name) + return nil + } woc.log.Infof("Execution control for pod %s out-of-sync desired: %v, actual: %v", pod.Name, desiredExecCtl.Deadline, podExecCtl.Deadline) return woc.updateExecutionControl(pod.Name, desiredExecCtl) } -// killDeamonedChildren kill any daemoned pods of a steps or DAG template node. -func (woc *wfOperationCtx) killDeamonedChildren(nodeID string) error { - woc.log.Infof("Checking deamoned children of %s", nodeID) +// killDaemonedChildren kill any daemoned pods of a steps or DAG template node. +func (woc *wfOperationCtx) killDaemonedChildren(nodeID string) error { + woc.log.Infof("Checking daemoned children of %s", nodeID) var firstErr error execCtl := common.ExecutionControl{ Deadline: &time.Time{}, @@ -116,7 +128,7 @@ func (woc *wfOperationCtx) updateExecutionControl(podName string, execCtl common woc.log.Infof("Signalling %s of updates", podName) exec, err := common.ExecPodContainer( woc.controller.restConfig, woc.wf.ObjectMeta.Namespace, podName, - common.WaitContainerName, true, true, "sh", "-c", "kill -s USR2 1", + common.WaitContainerName, true, true, "sh", "-c", "kill -s USR2 $(pidof argoexec)", ) if err != nil { return err diff --git a/workflow/controller/operator.go b/workflow/controller/operator.go index ed03abc2f000..e5dd72663b75 100644 --- a/workflow/controller/operator.go +++ b/workflow/controller/operator.go @@ -9,6 +9,7 @@ import ( "sort" "strconv" "strings" + "sync" "time" argokubeerr "github.com/argoproj/pkg/kube/errors" @@ -20,14 +21,16 @@ import ( apierr "k8s.io/apimachinery/pkg/api/errors" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/client-go/tools/cache" - - "github.com/argoproj/argo/errors" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" - "github.com/argoproj/argo/pkg/client/clientset/versioned/typed/workflow/v1alpha1" - "github.com/argoproj/argo/util/retry" - "github.com/argoproj/argo/workflow/common" - "github.com/argoproj/argo/workflow/util" - "github.com/argoproj/argo/workflow/validate" + "k8s.io/utils/pointer" + + "github.com/cyrusbiotechnology/argo/errors" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/pkg/client/clientset/versioned/typed/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/util/file" + "github.com/cyrusbiotechnology/argo/util/retry" + "github.com/cyrusbiotechnology/argo/workflow/common" + "github.com/cyrusbiotechnology/argo/workflow/util" + "github.com/cyrusbiotechnology/argo/workflow/validate" ) // wfOperationCtx is the context for evaluation and operation of a single workflow @@ -46,6 +49,9 @@ type wfOperationCtx struct { // globalParams holds any parameters that are available to be referenced // in the global scope (e.g. workflow.parameters.XXX). globalParams map[string]string + // volumes holds a DeepCopy of wf.Spec.Volumes to perform substitutions. + // It is then used in addVolumeReferences() when creating a pod. + volumes []apiv1.Volume // map of pods which need to be labeled with completed=true completedPods map[string]bool // deadline is the dealine time in which this operation should relinquish @@ -72,6 +78,9 @@ var ( // for before requeuing the workflow onto the workqueue. const maxOperationTime time.Duration = 10 * time.Second +//maxWorkflowSize is the maximum size for workflow.yaml +const maxWorkflowSize int = 1024 * 1024 + // newWorkflowOperationCtx creates and initializes a new wfOperationCtx object. 
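The volumes field added to wfOperationCtx above carries a deep copy of wf.Spec.Volumes (taken in the constructor below) so that parameter substitution never mutates the shared informer-cache object. A minimal, self-contained sketch of why the copy matters; the volume name and substituted value are made up for illustration:

package main

import (
	"fmt"

	apiv1 "k8s.io/api/core/v1"
)

func main() {
	// Pretend this slice belongs to the cached Workflow object and must not be mutated.
	cached := []apiv1.Volume{{Name: "workdir-{{workflow.parameters.run-id}}"}}

	// Deep-copy each element first, mirroring wf.Spec.DeepCopy().Volumes,
	// then substitute only on the copy.
	scratch := make([]apiv1.Volume, len(cached))
	for i := range cached {
		scratch[i] = *cached[i].DeepCopy()
	}
	scratch[0].Name = "workdir-42"

	fmt.Println(cached[0].Name)  // still holds the unresolved template tag
	fmt.Println(scratch[0].Name) // workdir-42
}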
func newWorkflowOperationCtx(wf *wfv1.Workflow, wfc *WorkflowController) *wfOperationCtx { // NEVER modify objects from the store. It's a read-only, local cache. @@ -87,6 +96,7 @@ func newWorkflowOperationCtx(wf *wfv1.Workflow, wfc *WorkflowController) *wfOper }), controller: wfc, globalParams: make(map[string]string), + volumes: wf.Spec.DeepCopy().Volumes, completedPods: make(map[string]bool), deadline: time.Now().UTC().Add(maxOperationTime), } @@ -103,7 +113,12 @@ func newWorkflowOperationCtx(wf *wfv1.Workflow, wfc *WorkflowController) *wfOper // TODO: an error returned by this method should result in requeuing the workflow to be retried at a // later time func (woc *wfOperationCtx) operate() { - defer woc.persistUpdates() + defer func() { + if woc.wf.Status.Completed() { + _ = woc.killDaemonedChildren("") + } + woc.persistUpdates() + }() defer func() { if r := recover(); r != nil { if rerr, ok := r.(error); ok { @@ -114,11 +129,14 @@ func (woc *wfOperationCtx) operate() { woc.log.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack()) } }() + woc.log.Infof("Processing workflow") + // Perform one-time workflow validation if woc.wf.Status.Phase == "" { woc.markWorkflowRunning() - err := validate.ValidateWorkflow(woc.wf) + validateOpts := validate.ValidateOpts{ContainerRuntimeExecutor: woc.controller.Config.ContainerRuntimeExecutor} + err := validate.ValidateWorkflow(woc.wf, validateOpts) if err != nil { woc.markWorkflowFailed(fmt.Sprintf("invalid spec: %s", err.Error())) return @@ -144,7 +162,14 @@ func (woc *wfOperationCtx) operate() { woc.setGlobalParameters() - err := woc.createPVCs() + err := woc.substituteParamsInVolumes(woc.globalParams) + if err != nil { + woc.log.Errorf("%s volumes global param substitution error: %+v", woc.wf.ObjectMeta.Name, err) + woc.markWorkflowError(err, true) + return + } + + err = woc.createPVCs() if err != nil { woc.log.Errorf("%s pvc create error: %+v", woc.wf.ObjectMeta.Name, err) woc.markWorkflowError(err, true) @@ -245,6 +270,12 @@ func (woc *wfOperationCtx) setGlobalParameters() { for _, param := range woc.wf.Spec.Arguments.Parameters { woc.globalParams["workflow.parameters."+param.Name] = *param.Value } + for k, v := range woc.wf.ObjectMeta.Annotations { + woc.globalParams["workflow.annotations."+k] = v + } + for k, v := range woc.wf.ObjectMeta.Labels { + woc.globalParams["workflow.labels."+k] = v + } if woc.wf.Status.Outputs != nil { for _, param := range woc.wf.Status.Outputs.Parameters { woc.globalParams["workflow.outputs.parameters."+param.Name] = *param.Value @@ -270,9 +301,18 @@ func (woc *wfOperationCtx) persistUpdates() { return } wfClient := woc.controller.wfclientset.ArgoprojV1alpha1().Workflows(woc.wf.ObjectMeta.Namespace) - _, err := wfClient.Update(woc.wf) + err := woc.checkAndCompress() if err != nil { - woc.log.Warnf("Error updating workflow: %v", err) + woc.log.Warnf("Error compressing workflow: %v", err) + woc.markWorkflowFailed(err.Error()) + } + if woc.wf.Status.CompressedNodes != "" { + woc.wf.Status.Nodes = nil + } + + _, err = wfClient.Update(woc.wf) + if err != nil { + woc.log.Warnf("Error updating workflow: %v %s", err, apierr.ReasonForError(err)) if argokubeerr.IsRequestEntityTooLargeErr(err) { woc.persistWorkflowSizeLimitErr(wfClient, err) return @@ -305,7 +345,7 @@ func (woc *wfOperationCtx) persistUpdates() { } // persistWorkflowSizeLimitErr will fail a the workflow with an error when we hit the resource size limit -// See https://github.com/argoproj/argo/issues/913 +// See 
https://github.com/cyrusbiotechnology/argo/issues/913 func (woc *wfOperationCtx) persistWorkflowSizeLimitErr(wfClient v1alpha1.WorkflowInterface, err error) { woc.wf = woc.orig.DeepCopy() woc.markWorkflowError(err, true) @@ -416,6 +456,42 @@ func (woc *wfOperationCtx) processNodeRetries(node *wfv1.NodeStatus, retryStrate return nil } +func (woc *wfOperationCtx) collectConditionResults(pod *apiv1.Pod, currentResults *[]wfv1.ExceptionResult, annotationKey string) error { + + if resultString, ok := pod.Annotations[annotationKey]; ok { + + uniqueConditionNames := make(map[string]bool) + for _, result := range *currentResults { + uniqueConditionNames[result.Name] = true + } + + var newResults []wfv1.ExceptionResult + err := json.Unmarshal([]byte(resultString), &newResults) + if err != nil { + return err + } + + // Only add the new result to the list if we don't already have an error result with that name + for _, newResult := range newResults { + if _, ok := uniqueConditionNames[newResult.Name]; !ok { + *currentResults = append(*currentResults, newResult) + } + } + } + return nil +} + +func (woc *wfOperationCtx) collectPodErrorsAndWarnings(pod *apiv1.Pod) error { + + err := woc.collectConditionResults(pod, &woc.wf.Status.Errors, common.AnnotationKeyErrors) + if err != nil { + return err + } + + err = woc.collectConditionResults(pod, &woc.wf.Status.Warnings, common.AnnotationKeyWarnings) + return err +} + // podReconciliation is the process by which a workflow will examine all its related // pods and update the node state before continuing the evaluation of the workflow. // Records all pods which were observed completed, which will be labeled completed=true @@ -426,31 +502,66 @@ func (woc *wfOperationCtx) podReconciliation() error { return err } seenPods := make(map[string]bool) + seenPodLock := &sync.Mutex{} + wfNodesLock := &sync.RWMutex{} - performAssessment := func(pod *apiv1.Pod) { + performAssessment := func(pod *apiv1.Pod) error { + if pod == nil { + return nil + } nodeNameForPod := pod.Annotations[common.AnnotationKeyNodeName] nodeID := woc.wf.NodeID(nodeNameForPod) + seenPodLock.Lock() seenPods[nodeID] = true + seenPodLock.Unlock() + + wfNodesLock.Lock() + defer wfNodesLock.Unlock() if node, ok := woc.wf.Status.Nodes[nodeID]; ok { if newState := assessNodeStatus(pod, &node); newState != nil { woc.wf.Status.Nodes[nodeID] = *newState woc.addOutputsToScope("workflow", node.Outputs, nil) woc.updated = true } - if woc.wf.Status.Nodes[pod.ObjectMeta.Name].Completed() { + node := woc.wf.Status.Nodes[pod.ObjectMeta.Name] + if node.Completed() && !node.IsDaemoned() { + if tmpVal, tmpOk := pod.Labels[common.LabelKeyCompleted]; tmpOk { + if tmpVal == "true" { + return nil + } + } woc.completedPods[pod.ObjectMeta.Name] = true + err := woc.collectPodErrorsAndWarnings(pod) + if err != nil { + return err + } } } + return nil } + parallelPodNum := make(chan string, 500) + var wg sync.WaitGroup + for _, pod := range podList.Items { - performAssessment(&pod) - err = woc.applyExecutionControl(&pod) - if err != nil { - woc.log.Warnf("Failed to apply execution control to pod %s", pod.Name) - } + parallelPodNum <- pod.Name + wg.Add(1) + go func(tmpPod apiv1.Pod) { + defer wg.Done() + // use the captured tmpPod and a goroutine-local err to avoid racing on the loop variable and the shared err + err := performAssessment(&tmpPod) + if err != nil { + woc.log.Errorf("Failed to collect extended errors and warnings from pod %s: %s", tmpPod.Name, err.Error()) + } + err = woc.applyExecutionControl(&tmpPod, wfNodesLock) + if err != nil { + woc.log.Warnf("Failed to apply execution control to pod %s", tmpPod.Name) + } +
<-parallelPodNum + }(pod) } + wg.Wait() + // Now check for deleted pods. Iterate our nodes. If any one of our nodes does not show up in // the seen list it implies that the pod was deleted without the controller seeing the event. // It is now impossible to infer pod status. The only thing we can do at this point is to mark @@ -541,18 +652,24 @@ func assessNodeStatus(pod *apiv1.Pod, node *wfv1.NodeStatus) *wfv1.NodeStatus { var newDaemonStatus *bool var message string updated := false - f := false switch pod.Status.Phase { case apiv1.PodPending: newPhase = wfv1.NodePending - newDaemonStatus = &f + newDaemonStatus = pointer.BoolPtr(false) message = getPendingReason(pod) case apiv1.PodSucceeded: newPhase = wfv1.NodeSucceeded - newDaemonStatus = &f + // A pod can exit with a successful status and still fail an exception condition check + newPhase, message = handlePodFailures(pod) + newDaemonStatus = pointer.BoolPtr(false) case apiv1.PodFailed: - newPhase, message = inferFailedReason(pod) - newDaemonStatus = &f + // ignore pod failure for daemoned steps + if node.IsDaemoned() { + newPhase = wfv1.NodeSucceeded + } else { + newPhase, message = handlePodFailures(pod) + } + newDaemonStatus = pointer.BoolPtr(false) case apiv1.PodRunning: newPhase = wfv1.NodeRunning tmplStr, ok := pod.Annotations[common.AnnotationKeyTemplate] @@ -573,10 +690,9 @@ func assessNodeStatus(pod *apiv1.Pod, node *wfv1.NodeStatus) *wfv1.NodeStatus { return nil } } - // proceed to mark node status as succeeded (and daemoned) - newPhase = wfv1.NodeSucceeded - t := true - newDaemonStatus = &t + // proceed to mark node status as running (and daemoned) + newPhase = wfv1.NodeRunning + newDaemonStatus = pointer.BoolPtr(true) log.Infof("Processing ready daemon pod: %v", pod.ObjectMeta.SelfLink) } default: @@ -691,9 +807,9 @@ func getPendingReason(pod *apiv1.Pod) string { return "" } -// inferFailedReason returns metadata about a Failed pod to be used in its NodeStatus +// handlePodFailures returns metadata about a Failed pod to be used in its NodeStatus // Returns a tuple of the new phase and message -func inferFailedReason(pod *apiv1.Pod) (wfv1.NodePhase, string) { +func handlePodFailures(pod *apiv1.Pod) (wfv1.NodePhase, string) { if pod.Status.Message != "" { // Pod has a nice error message. Use that. return wfv1.NodeFailed, pod.Status.Message @@ -789,6 +905,27 @@ func inferFailedReason(pod *apiv1.Pod) (wfv1.NodePhase, string) { for _, failMsg := range failMessages { return wfv1.NodeFailed, failMsg } + + // If we get here, check the extended failure conditions and mark the node as failed if any exist + + if resultString, ok := pod.Annotations[common.AnnotationKeyErrors]; ok { + var errorResults []wfv1.ExceptionResult + err := json.Unmarshal([]byte(resultString), &errorResults) + + if err != nil { + failMsg := fmt.Sprintf("Failed to deserialize Extended error descriptions: %s", err.Error()) + return wfv1.NodeFailed, failMsg + } + + if len(errorResults) > 0 { + failMsg := "failed for the following reasons: " + for _, result := range errorResults { + failMsg += fmt.Sprintf("%s - %s ", result.Name, result.Message) + } + return wfv1.NodeFailed, failMsg + } + } + // If we get here, we have detected that the main/wait containers succeed but the sidecar(s) // were SIGKILL'd. The executor may have had to forcefully terminate the sidecar (kill -9), // resulting in a 137 exit code (which we had ignored earlier). 
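For reference, the extended failure conditions checked by handlePodFailures and collectConditionResults above arrive as a pod annotation holding a JSON list of named results. A minimal sketch follows; only the Name and Message fields come from the code above, while the annotation value, JSON tags, and condition names are assumptions for illustration:

package main

import (
	"encoding/json"
	"fmt"
)

// exceptionResult mirrors only the fields read above (Name, Message);
// the JSON tags are assumed for this sketch, not taken from wfv1.ExceptionResult.
type exceptionResult struct {
	Name    string `json:"name"`
	Message string `json:"message"`
}

func main() {
	// Hypothetical value of the annotation referenced by common.AnnotationKeyErrors.
	annotation := `[{"name":"disk-full","message":"scratch volume exceeded quota"}]`

	var results []exceptionResult
	if err := json.Unmarshal([]byte(annotation), &results); err != nil {
		panic(err)
	}

	// Assemble a failure message the same way handlePodFailures does.
	failMsg := "failed for the following reasons: "
	for _, r := range results {
		failMsg += fmt.Sprintf("%s - %s ", r.Name, r.Message)
	}
	fmt.Println(failMsg)
}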
If failMessages is empty, it @@ -1020,7 +1157,8 @@ func (woc *wfOperationCtx) markWorkflowPhase(phase wfv1.NodePhase, markCompleted switch phase { case wfv1.NodeSucceeded, wfv1.NodeFailed, wfv1.NodeError: - if markCompleted { + // wait for all daemon nodes to get terminated before marking workflow completed + if markCompleted && !woc.hasDaemonNodes() { woc.log.Infof("Marking workflow completed") woc.wf.Status.FinishedAt = metav1.Time{Time: time.Now().UTC()} if woc.wf.ObjectMeta.Labels == nil { @@ -1032,6 +1170,15 @@ func (woc *wfOperationCtx) markWorkflowPhase(phase wfv1.NodePhase, markCompleted } } +func (woc *wfOperationCtx) hasDaemonNodes() bool { + for _, node := range woc.wf.Status.Nodes { + if node.IsDaemoned() { + return true + } + } + return false +} + func (woc *wfOperationCtx) markWorkflowRunning() { woc.markWorkflowPhase(wfv1.NodeRunning, false) } @@ -1117,6 +1264,14 @@ func (woc *wfOperationCtx) markNodePhase(nodeName string, phase wfv1.NodePhase, return node } +// markNodeErrorClearOuput is a convenience method to mark a node with an error and clear the output +func (woc *wfOperationCtx) markNodeErrorClearOuput(nodeName string, err error) *wfv1.NodeStatus { + nodeStatus := woc.markNodeError(nodeName, err) + nodeStatus.Outputs = nil + woc.wf.Status.Nodes[nodeStatus.ID] = *nodeStatus + return nodeStatus +} + // markNodeError is a convenience method to mark a node with an error and set the message from the error func (woc *wfOperationCtx) markNodeError(nodeName string, err error) *wfv1.NodeStatus { return woc.markNodePhase(nodeName, wfv1.NodeError, err.Error()) @@ -1175,8 +1330,17 @@ func (woc *wfOperationCtx) executeContainer(nodeName string, tmpl *wfv1.Template func (woc *wfOperationCtx) getOutboundNodes(nodeID string) []string { node := woc.wf.Status.Nodes[nodeID] switch node.Type { - case wfv1.NodeTypePod, wfv1.NodeTypeSkipped, wfv1.NodeTypeSuspend, wfv1.NodeTypeTaskGroup: + case wfv1.NodeTypePod, wfv1.NodeTypeSkipped, wfv1.NodeTypeSuspend: return []string{node.ID} + case wfv1.NodeTypeTaskGroup: + if len(node.Children) == 0 { + return []string{node.ID} + } + outboundNodes := make([]string, 0) + for _, child := range node.Children { + outboundNodes = append(outboundNodes, woc.getOutboundNodes(child)...) 
+ } + return outboundNodes case wfv1.NodeTypeRetry: numChildren := len(node.Children) if numChildren > 0 { @@ -1276,7 +1440,7 @@ func (woc *wfOperationCtx) addOutputsToScope(prefix string, outputs *wfv1.Output if scope != nil { scope.addArtifactToScope(key, art) } - woc.addArtifactToGlobalScope(art) + woc.addArtifactToGlobalScope(art, scope) } } @@ -1383,7 +1547,7 @@ func (woc *wfOperationCtx) addParamToGlobalScope(param wfv1.Parameter) { // addArtifactToGlobalScope exports any desired node outputs to the global scope // Optionally adds to a local scope if supplied -func (woc *wfOperationCtx) addArtifactToGlobalScope(art wfv1.Artifact) { +func (woc *wfOperationCtx) addArtifactToGlobalScope(art wfv1.Artifact, scope *wfScope) { if art.GlobalName == "" { return } @@ -1397,6 +1561,9 @@ func (woc *wfOperationCtx) addArtifactToGlobalScope(art wfv1.Artifact) { art.Path = "" if !reflect.DeepEqual(woc.wf.Status.Outputs.Artifacts[i], art) { woc.wf.Status.Outputs.Artifacts[i] = art + if scope != nil { + scope.addArtifactToScope(globalArtName, art) + } woc.log.Infof("overwriting %s: %v", globalArtName, art) woc.updated = true } @@ -1412,6 +1579,9 @@ func (woc *wfOperationCtx) addArtifactToGlobalScope(art wfv1.Artifact) { art.Path = "" woc.log.Infof("setting %s: %v", globalArtName, art) woc.wf.Status.Outputs.Artifacts = append(woc.wf.Status.Outputs.Artifacts, art) + if scope != nil { + scope.addArtifactToScope(globalArtName, art) + } woc.updated = true } @@ -1441,16 +1611,12 @@ func (woc *wfOperationCtx) executeResource(nodeName string, tmpl *wfv1.Template, if node != nil { return node } - mainCtr := apiv1.Container{ - Image: woc.controller.executorImage(), - Command: []string{"argoexec"}, - Args: []string{"resource", tmpl.Resource.Action}, - VolumeMounts: []apiv1.VolumeMount{ - volumeMountPodMetadata, - }, - Env: execEnvVars, + mainCtr := woc.newExecContainer(common.MainContainerName) + mainCtr.Command = []string{"argoexec", "resource", tmpl.Resource.Action} + mainCtr.VolumeMounts = []apiv1.VolumeMount{ + volumeMountPodMetadata, } - _, err := woc.createWorkflowPod(nodeName, mainCtr, tmpl) + _, err := woc.createWorkflowPod(nodeName, *mainCtr, tmpl) if err != nil { return woc.initializeNode(nodeName, wfv1.NodeTypePod, tmpl.Name, boundaryID, wfv1.NodeError, err.Error()) } @@ -1539,3 +1705,66 @@ func expandSequence(seq *wfv1.Sequence) ([]wfv1.Item, error) { } return items, nil } + +// getSize returns the entire workflow json string size +func (woc *wfOperationCtx) getSize() int { + nodeContent, err := json.Marshal(woc.wf) + if err != nil { + return -1 + } + + compressNodeSize := len(woc.wf.Status.CompressedNodes) + + if compressNodeSize > 0 { + nodeStatus, err := json.Marshal(woc.wf.Status.Nodes) + if err != nil { + return -1 + } + return len(nodeContent) - len(nodeStatus) + } + return len(nodeContent) +} + +// checkAndCompress will check the workflow size and compress the node status if the total workflow size is more than maxWorkflowSize. +// The compressed content will be assigned to the compressedNodes field and the node status map will be cleared.
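The helper file.CompressEncodeString used below is not shown in this patch; assuming it gzips the marshaled node map and base64-encodes the result so it fits in the string CompressedNodes field, a round trip might look like this stdlib-only sketch (function names are illustrative, not the real util/file API):

package main

import (
	"bytes"
	"compress/gzip"
	"encoding/base64"
	"fmt"
	"io/ioutil"
)

// compressEncode gzips a serialized node-status map and base64-encodes it.
func compressEncode(s string) (string, error) {
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	if _, err := zw.Write([]byte(s)); err != nil {
		return "", err
	}
	if err := zw.Close(); err != nil {
		return "", err
	}
	return base64.StdEncoding.EncodeToString(buf.Bytes()), nil
}

// decodeDecompress reverses compressEncode.
func decodeDecompress(s string) (string, error) {
	raw, err := base64.StdEncoding.DecodeString(s)
	if err != nil {
		return "", err
	}
	zr, err := gzip.NewReader(bytes.NewReader(raw))
	if err != nil {
		return "", err
	}
	defer zr.Close()
	out, err := ioutil.ReadAll(zr)
	return string(out), err
}

func main() {
	// Stand-in for the marshaled Nodes map; real payloads are large enough for gzip to pay off.
	nodes := `{"wf-1234":{"phase":"Succeeded"}}`
	enc, err := compressEncode(nodes)
	if err != nil {
		panic(err)
	}
	dec, err := decodeDecompress(enc)
	if err != nil {
		panic(err)
	}
	fmt.Println(dec == nodes) // true
}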
+func (woc *wfOperationCtx) checkAndCompress() error { + + if woc.wf.Status.CompressedNodes != "" || (woc.wf.Status.CompressedNodes == "" && woc.getSize() >= maxWorkflowSize) { + nodeContent, err := json.Marshal(woc.wf.Status.Nodes) + if err != nil { + return errors.InternalWrapError(err) + } + buff := string(nodeContent) + woc.wf.Status.CompressedNodes = file.CompressEncodeString(buff) + } + + if woc.wf.Status.CompressedNodes != "" && woc.getSize() >= maxWorkflowSize { + return errors.InternalError(fmt.Sprintf("Workflow is longer than maximum allowed size. Size=%d", woc.getSize())) + } + + return nil +} + +func (woc *wfOperationCtx) substituteParamsInVolumes(params map[string]string) error { + if woc.volumes == nil { + return nil + } + + volumes := woc.volumes + volumesBytes, err := json.Marshal(volumes) + if err != nil { + return errors.InternalWrapError(err) + } + fstTmpl := fasttemplate.New(string(volumesBytes), "{{", "}}") + newVolumesStr, err := common.Replace(fstTmpl, params, true) + if err != nil { + return err + } + var newVolumes []apiv1.Volume + err = json.Unmarshal([]byte(newVolumesStr), &newVolumes) + if err != nil { + return errors.InternalWrapError(err) + } + woc.volumes = newVolumes + return nil +} diff --git a/workflow/controller/operator_test.go b/workflow/controller/operator_test.go index e320535da3db..a5fc682a62fc 100644 --- a/workflow/controller/operator_test.go +++ b/workflow/controller/operator_test.go @@ -4,9 +4,9 @@ import ( "fmt" "testing" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" - "github.com/argoproj/argo/test" - "github.com/argoproj/argo/workflow/util" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/test" + "github.com/cyrusbiotechnology/argo/workflow/util" "github.com/stretchr/testify/assert" apiv1 "k8s.io/api/core/v1" "k8s.io/apimachinery/pkg/api/resource" @@ -225,11 +225,13 @@ func TestWorkflowParallelismLimit(t *testing.T) { assert.Equal(t, 2, len(pods.Items)) // operate again and make sure we don't schedule any more pods makePodsRunning(t, controller.kubeclientset, wf.ObjectMeta.Namespace) + assert.Equal(t, int64(2), woc.countActivePods()) wf, err = wfcset.Get(wf.ObjectMeta.Name, metav1.GetOptions{}) assert.Nil(t, err) // wfBytes, _ := json.MarshalIndent(wf, "", " ") // log.Printf("%s", wfBytes) woc = newWorkflowOperationCtx(wf, controller) + assert.Equal(t, int64(2), woc.countActivePods()) woc.operate() pods, err = controller.kubeclientset.CoreV1().Pods("").List(metav1.ListOptions{}) assert.Nil(t, err) @@ -435,14 +437,16 @@ func TestNestedTemplateParallelismLimit(t *testing.T) { // TestSidecarResourceLimits verifies resource limits on the sidecar can be set in the controller config func TestSidecarResourceLimits(t *testing.T) { controller := newController() - controller.Config.ExecutorResources = &apiv1.ResourceRequirements{ - Limits: apiv1.ResourceList{ - apiv1.ResourceCPU: resource.MustParse("0.5"), - apiv1.ResourceMemory: resource.MustParse("512Mi"), - }, - Requests: apiv1.ResourceList{ - apiv1.ResourceCPU: resource.MustParse("0.1"), - apiv1.ResourceMemory: resource.MustParse("64Mi"), + controller.Config.Executor = &apiv1.Container{ + Resources: apiv1.ResourceRequirements{ + Limits: apiv1.ResourceList{ + apiv1.ResourceCPU: resource.MustParse("0.5"), + apiv1.ResourceMemory: resource.MustParse("512Mi"), + }, + Requests: apiv1.ResourceList{ + apiv1.ResourceCPU: resource.MustParse("0.1"), + apiv1.ResourceMemory: resource.MustParse("64Mi"), + }, }, } wf := 
unmarshalWF(helloWorldWf) @@ -765,19 +769,19 @@ func TestAddGlobalArtifactToScope(t *testing.T) { }, } // Make sure if the artifact is not global, don't add to scope - woc.addArtifactToGlobalScope(art) + woc.addArtifactToGlobalScope(art, nil) assert.Nil(t, woc.wf.Status.Outputs) // Now mark it as global. Verify it is added to workflow outputs art.GlobalName = "global-art" - woc.addArtifactToGlobalScope(art) + woc.addArtifactToGlobalScope(art, nil) assert.Equal(t, 1, len(woc.wf.Status.Outputs.Artifacts)) assert.Equal(t, art.GlobalName, woc.wf.Status.Outputs.Artifacts[0].Name) assert.Equal(t, "some/key", woc.wf.Status.Outputs.Artifacts[0].S3.Key) // Change the value and verify update is reflected art.S3.Key = "new/key" - woc.addArtifactToGlobalScope(art) + woc.addArtifactToGlobalScope(art, nil) assert.Equal(t, 1, len(woc.wf.Status.Outputs.Artifacts)) assert.Equal(t, art.GlobalName, woc.wf.Status.Outputs.Artifacts[0].Name) assert.Equal(t, "new/key", woc.wf.Status.Outputs.Artifacts[0].S3.Key) @@ -785,7 +789,7 @@ func TestAddGlobalArtifactToScope(t *testing.T) { // Add a new global artifact art.GlobalName = "global-art2" art.S3.Key = "new/new/key" - woc.addArtifactToGlobalScope(art) + woc.addArtifactToGlobalScope(art, nil) assert.Equal(t, 2, len(woc.wf.Status.Outputs.Artifacts)) assert.Equal(t, art.GlobalName, woc.wf.Status.Outputs.Artifacts[1].Name) assert.Equal(t, "new/new/key", woc.wf.Status.Outputs.Artifacts[1].S3.Key) @@ -886,3 +890,118 @@ func TestExpandWithSequence(t *testing.T) { assert.Equal(t, "testuser01", items[0].Value.(string)) assert.Equal(t, "testuser0A", items[9].Value.(string)) } + +var metadataTemplate = ` +apiVersion: argoproj.io/v1alpha1 +kind: Workflow +metadata: + name: metadata-template + labels: + image: foo:bar + annotations: + k8s-webhook-handler.io/repo: "git@github.com:argoproj/argo.git" + k8s-webhook-handler.io/revision: 1e111caa1d2cc672b3b53c202b96a5f660a7e9b2 +spec: + entrypoint: foo + templates: + - name: foo + container: + image: "{{workflow.labels.image}}" + env: + - name: REPO + value: "{{workflow.annotations.k8s-webhook-handler.io/repo}}" + - name: REVISION + value: "{{workflow.annotations.k8s-webhook-handler.io/revision}}" + command: [sh, -c] + args: ["echo hello world"] +` + +func TestMetadataPassing(t *testing.T) { + controller := newController() + wfcset := controller.wfclientset.ArgoprojV1alpha1().Workflows("") + wf := unmarshalWF(metadataTemplate) + wf, err := wfcset.Create(wf) + assert.Nil(t, err) + wf, err = wfcset.Get(wf.ObjectMeta.Name, metav1.GetOptions{}) + assert.Nil(t, err) + woc := newWorkflowOperationCtx(wf, controller) + woc.operate() + assert.Equal(t, wfv1.NodeRunning, woc.wf.Status.Phase) + pods, err := controller.kubeclientset.CoreV1().Pods(wf.ObjectMeta.Namespace).List(metav1.ListOptions{}) + assert.Nil(t, err) + assert.True(t, len(pods.Items) > 0, "pod was not created successfully") + + var ( + pod = pods.Items[0] + container = pod.Spec.Containers[1] + foundRepo = false + foundRev = false + ) + for _, ev := range container.Env { + switch ev.Name { + case "REPO": + assert.Equal(t, "git@github.com:argoproj/argo.git", ev.Value) + foundRepo = true + case "REVISION": + assert.Equal(t, "1e111caa1d2cc672b3b53c202b96a5f660a7e9b2", ev.Value) + foundRev = true + } + } + assert.True(t, foundRepo) + assert.True(t, foundRev) + assert.Equal(t, "foo:bar", container.Image) +} + +var ioPathPlaceholders = ` +apiVersion: argoproj.io/v1alpha1 +kind: Workflow +metadata: + generateName: artifact-path-placeholders- +spec: + entrypoint: head-lines + arguments: 
+ parameters: + - name: lines-count + value: 3 + artifacts: + - name: text + raw: + data: | + 1 + 2 + 3 + 4 + 5 + templates: + - name: head-lines + inputs: + parameters: + - name: lines-count + artifacts: + - name: text + path: /inputs/text/data + outputs: + parameters: + - name: actual-lines-count + valueFrom: + path: /outputs/actual-lines-count/data + artifacts: + - name: text + path: /outputs/text/data + container: + image: busybox + command: [sh, -c, 'head -n {{inputs.parameters.lines-count}} <"{{inputs.artifacts.text.path}}" | tee "{{outputs.artifacts.text.path}}" | wc -l > "{{outputs.parameters.actual-lines-count.path}}"'] +` + +func TestResolveIOPathPlaceholders(t *testing.T) { + wf := unmarshalWF(ioPathPlaceholders) + woc := newWoc(*wf) + woc.controller.Config.ArtifactRepository.S3 = new(S3ArtifactRepository) + woc.operate() + assert.Equal(t, wfv1.NodeRunning, woc.wf.Status.Phase) + pods, err := woc.controller.kubeclientset.CoreV1().Pods(wf.ObjectMeta.Namespace).List(metav1.ListOptions{}) + assert.Nil(t, err) + assert.True(t, len(pods.Items) > 0, "pod was not created successfully") + + assert.Equal(t, []string{"sh", "-c", "head -n 3 <\"/inputs/text/data\" | tee \"/outputs/text/data\" | wc -l > \"/outputs/actual-lines-count/data\""}, pods.Items[0].Spec.Containers[1].Command) +} diff --git a/workflow/controller/scope.go b/workflow/controller/scope.go index 2d1783acade5..f44c51f2d1ee 100644 --- a/workflow/controller/scope.go +++ b/workflow/controller/scope.go @@ -3,8 +3,8 @@ package controller import ( "strings" - "github.com/argoproj/argo/errors" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/errors" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" ) // wfScope contains the current scope of variables available when executing a template diff --git a/workflow/controller/steps.go b/workflow/controller/steps.go index 571206cfd84b..bb9e6a175081 100644 --- a/workflow/controller/steps.go +++ b/workflow/controller/steps.go @@ -6,9 +6,9 @@ import ( "strings" "github.com/Knetic/govaluate" - "github.com/argoproj/argo/errors" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" - "github.com/argoproj/argo/workflow/common" + "github.com/cyrusbiotechnology/argo/errors" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/workflow/common" "github.com/valyala/fasttemplate" ) @@ -28,7 +28,7 @@ func (woc *wfOperationCtx) executeSteps(nodeName string, tmpl *wfv1.Template, bo } defer func() { if woc.wf.Status.Nodes[node.ID].Completed() { - _ = woc.killDeamonedChildren(node.ID) + _ = woc.killDaemonedChildren(node.ID) } }() stepsCtx := stepsContext{ @@ -155,6 +155,7 @@ func (woc *wfOperationCtx) executeStepGroup(stepGroup []wfv1.WorkflowStep, sgNod woc.log.Debugf("Step group node %v already marked completed", node) return node } + // First, resolve any references to outputs from previous steps, and perform substitution stepGroup, err := woc.resolveReferences(stepGroup, stepsCtx.scope) if err != nil { @@ -167,6 +168,9 @@ func (woc *wfOperationCtx) executeStepGroup(stepGroup []wfv1.WorkflowStep, sgNod return woc.markNodeError(sgNodeName, err) } + // Maps nodes to their steps + nodeSteps := make(map[string]wfv1.WorkflowStep) + // Kick off all parallel steps in the group for _, step := range stepGroup { childNodeName := fmt.Sprintf("%s.%s", sgNodeName, step.Name) @@ -202,10 +206,8 @@ func (woc *wfOperationCtx) executeStepGroup(stepGroup []wfv1.WorkflowStep, sgNod } } if 
childNode != nil { + nodeSteps[childNodeName] = step woc.addChildNode(sgNodeName, childNodeName) - if childNode.Completed() && !childNode.Successful() { - break - } } } @@ -219,7 +221,8 @@ func (woc *wfOperationCtx) executeStepGroup(stepGroup []wfv1.WorkflowStep, sgNod // All children completed. Determine step group status as a whole for _, childNodeID := range node.Children { childNode := woc.wf.Status.Nodes[childNodeID] - if !childNode.Successful() { + step := nodeSteps[childNode.Name] + if !childNode.Successful() && !step.ContinuesOn(childNode.Phase) { failMessage := fmt.Sprintf("child '%s' failed", childNodeID) woc.log.Infof("Step group node %s deemed failed: %s", node, failMessage) return woc.markNodePhase(node.Name, wfv1.NodeFailed, failMessage) @@ -275,6 +278,12 @@ func shouldExecute(when string) (bool, error) { func (woc *wfOperationCtx) resolveReferences(stepGroup []wfv1.WorkflowStep, scope *wfScope) ([]wfv1.WorkflowStep, error) { newStepGroup := make([]wfv1.WorkflowStep, len(stepGroup)) + // Step 0: replace all parameter scope references for volumes + err := woc.substituteParamsInVolumes(scope.replaceMap()) + if err != nil { + return nil, err + } + for i, step := range stepGroup { // Step 1: replace all parameter scope references in the step // TODO: improve this @@ -282,15 +291,8 @@ func (woc *wfOperationCtx) resolveReferences(stepGroup []wfv1.WorkflowStep, scop if err != nil { return nil, errors.InternalWrapError(err) } - replaceMap := make(map[string]string) - for key, val := range scope.scope { - valStr, ok := val.(string) - if ok { - replaceMap[key] = valStr - } - } fstTmpl := fasttemplate.New(string(stepBytes), "{{", "}}") - newStepStr, err := common.Replace(fstTmpl, replaceMap, true) + newStepStr, err := common.Replace(fstTmpl, scope.replaceMap(), true) if err != nil { return nil, err } diff --git a/workflow/controller/steps_test.go b/workflow/controller/steps_test.go new file mode 100644 index 000000000000..5f55d556a8f8 --- /dev/null +++ b/workflow/controller/steps_test.go @@ -0,0 +1,18 @@ +package controller + +import ( + "testing" + + "github.com/stretchr/testify/assert" + + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/test" +) + +// TestStepsFailedRetries ensures a steps template will recognize exhausted retries +func TestStepsFailedRetries(t *testing.T) { + wf := test.LoadTestWorkflow("testdata/steps-failed-retries.yaml") + woc := newWoc(*wf) + woc.operate() + assert.Equal(t, string(wfv1.NodeFailed), string(woc.wf.Status.Phase)) +} diff --git a/workflow/controller/suspend.go b/workflow/controller/suspend.go index 7a60bdc9eded..f12d44edcb45 100644 --- a/workflow/controller/suspend.go +++ b/workflow/controller/suspend.go @@ -1,7 +1,7 @@ package controller import ( - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" ) func (woc *wfOperationCtx) executeSuspend(nodeName string, tmpl *wfv1.Template, boundaryID string) *wfv1.NodeStatus { diff --git a/workflow/controller/testdata/steps-failed-retries.yaml b/workflow/controller/testdata/steps-failed-retries.yaml new file mode 100644 index 000000000000..bd249586e311 --- /dev/null +++ b/workflow/controller/testdata/steps-failed-retries.yaml @@ -0,0 +1,153 @@ +metadata: + creationTimestamp: "2018-12-28T19:21:20Z" + generateName: failed-retries- + generation: 1 + labels: + workflows.argoproj.io/phase: Running + name: failed-retries-tjjsc + namespace: default + resourceVersion: "85216" + 
selfLink: /apis/argoproj.io/v1alpha1/namespaces/default/workflows/failed-retries-tjjsc + uid: c18bba2a-0ad5-11e9-b44e-ea782c392741 +spec: + arguments: {} + entrypoint: failed-retries + templates: + - inputs: {} + metadata: {} + name: failed-retries + outputs: {} + steps: + - - arguments: {} + name: fail + template: fail + - arguments: {} + name: delayed-fail + template: delayed-fail + - container: + args: + - exit 1 + command: + - sh + - -c + image: alpine:latest + name: "" + resources: {} + inputs: {} + metadata: {} + name: fail + outputs: {} + retryStrategy: + limit: 1 + - container: + args: + - sleep 1; exit 1 + command: + - sh + - -c + image: alpine:latest + name: "" + resources: {} + inputs: {} + metadata: {} + name: delayed-fail + outputs: {} + retryStrategy: + limit: 1 +status: + finishedAt: null + nodes: + failed-retries-tjjsc: + children: + - failed-retries-tjjsc-2095973878 + displayName: failed-retries-tjjsc + finishedAt: null + id: failed-retries-tjjsc + name: failed-retries-tjjsc + phase: Running + startedAt: "2019-01-03T01:23:18Z" + templateName: failed-retries + type: Steps + failed-retries-tjjsc-20069324: + boundaryID: failed-retries-tjjsc + children: + - failed-retries-tjjsc-1229492679 + - failed-retries-tjjsc-759866442 + displayName: fail + finishedAt: "2019-01-03T01:23:32Z" + id: failed-retries-tjjsc-20069324 + message: No more retries left + name: failed-retries-tjjsc[0].fail + phase: Failed + startedAt: "2019-01-03T01:23:18Z" + type: Retry + failed-retries-tjjsc-759866442: + boundaryID: failed-retries-tjjsc + displayName: fail(1) + finishedAt: "2018-12-28T19:21:32Z" + id: failed-retries-tjjsc-759866442 + message: failed with exit code 1 + name: failed-retries-tjjsc[0].fail(1) + phase: Failed + startedAt: "2019-01-03T01:23:27Z" + templateName: fail + type: Pod + failed-retries-tjjsc-1229492679: + boundaryID: failed-retries-tjjsc + displayName: fail(0) + finishedAt: "2018-12-28T19:21:26Z" + id: failed-retries-tjjsc-1229492679 + message: failed with exit code 1 + name: failed-retries-tjjsc[0].fail(0) + phase: Failed + startedAt: "2019-01-03T01:23:18Z" + templateName: fail + type: Pod + failed-retries-tjjsc-1375221696: + boundaryID: failed-retries-tjjsc + displayName: delayed-fail(0) + finishedAt: "2018-12-28T19:21:27Z" + id: failed-retries-tjjsc-1375221696 + message: failed with exit code 1 + name: failed-retries-tjjsc[0].delayed-fail(0) + phase: Failed + startedAt: "2019-01-03T01:23:18Z" + templateName: delayed-fail + type: Pod + failed-retries-tjjsc-1574533273: + boundaryID: failed-retries-tjjsc + children: + - failed-retries-tjjsc-1375221696 + - failed-retries-tjjsc-2113289837 + displayName: delayed-fail + finishedAt: null + id: failed-retries-tjjsc-1574533273 + name: failed-retries-tjjsc[0].delayed-fail + phase: Running + startedAt: "2019-01-03T01:23:18Z" + type: Retry + failed-retries-tjjsc-2095973878: + boundaryID: failed-retries-tjjsc + children: + - failed-retries-tjjsc-20069324 + - failed-retries-tjjsc-1574533273 + displayName: '[0]' + finishedAt: null + id: failed-retries-tjjsc-2095973878 + name: failed-retries-tjjsc[0] + phase: Running + startedAt: "2019-01-03T01:23:18Z" + type: StepGroup + failed-retries-tjjsc-2113289837: + boundaryID: failed-retries-tjjsc + displayName: delayed-fail(1) + finishedAt: "2018-12-28T19:21:33Z" + id: failed-retries-tjjsc-2113289837 + message: failed with exit code 1 + name: failed-retries-tjjsc[0].delayed-fail(1) + phase: Failed + startedAt: "2019-01-03T01:23:28Z" + templateName: delayed-fail + type: Pod + phase: Running + 
startedAt: "2019-01-03T01:23:18Z" diff --git a/workflow/controller/workflowpod.go b/workflow/controller/workflowpod.go index 7b794832cb48..28dfc3e51418 100644 --- a/workflow/controller/workflowpod.go +++ b/workflow/controller/workflowpod.go @@ -5,16 +5,19 @@ import ( "fmt" "io" "path" + "path/filepath" "strconv" - "github.com/argoproj/argo/errors" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" - "github.com/argoproj/argo/workflow/common" log "github.com/sirupsen/logrus" "github.com/valyala/fasttemplate" apiv1 "k8s.io/api/core/v1" apierr "k8s.io/apimachinery/pkg/api/errors" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/utils/pointer" + + "github.com/cyrusbiotechnology/argo/errors" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/workflow/common" ) // Reusable k8s pod spec portions used in workflow pods @@ -43,27 +46,8 @@ var ( MountPath: common.PodMetadataMountPath, } - hostPathDir = apiv1.HostPathDirectory hostPathSocket = apiv1.HostPathSocket - // volumeDockerLib provides the wait container access to the minion's host docker containers - // runtime files (e.g. /var/lib/docker/container). This is used by the executor to access - // the main container's logs (and potentially storage to upload output artifacts) - volumeDockerLib = apiv1.Volume{ - Name: common.DockerLibVolumeName, - VolumeSource: apiv1.VolumeSource{ - HostPath: &apiv1.HostPathVolumeSource{ - Path: common.DockerLibHostPath, - Type: &hostPathDir, - }, - }, - } - volumeMountDockerLib = apiv1.VolumeMount{ - Name: volumeDockerLib.Name, - MountPath: volumeDockerLib.VolumeSource.HostPath.Path, - ReadOnly: true, - } - // volumeDockerSock provides the wait container direct access to the minion's host docker daemon. // The primary purpose of this is to make available `docker cp` to collect an output artifact // from a container. 
Alternatively, we could use `kubectl cp`, but `docker cp` avoids the extra @@ -82,26 +66,8 @@ var ( MountPath: "/var/run/docker.sock", ReadOnly: true, } - - // execEnvVars exposes various pod information as environment variables to the exec container - execEnvVars = []apiv1.EnvVar{ - envFromField(common.EnvVarPodName, "metadata.name"), - } ) -// envFromField is a helper to return a EnvVar with the name and field -func envFromField(envVarName, fieldPath string) apiv1.EnvVar { - return apiv1.EnvVar{ - Name: envVarName, - ValueFrom: &apiv1.EnvVarSource{ - FieldRef: &apiv1.ObjectFieldSelector{ - APIVersion: "v1", - FieldPath: fieldPath, - }, - }, - } -} - func (woc *wfOperationCtx) createWorkflowPod(nodeName string, mainCtr apiv1.Container, tmpl *wfv1.Template) (*apiv1.Pod, error) { nodeID := woc.wf.NodeID(nodeName) woc.log.Debugf("Creating Pod: %s (%s)", nodeName, nodeID) @@ -124,19 +90,37 @@ func (woc *wfOperationCtx) createWorkflowPod(nodeName string, mainCtr apiv1.Cont }, }, Spec: apiv1.PodSpec{ - RestartPolicy: apiv1.RestartPolicyNever, - Containers: []apiv1.Container{ - mainCtr, - }, + RestartPolicy: apiv1.RestartPolicyNever, Volumes: woc.createVolumes(), ActiveDeadlineSeconds: tmpl.ActiveDeadlineSeconds, ServiceAccountName: woc.wf.Spec.ServiceAccountName, ImagePullSecrets: woc.wf.Spec.ImagePullSecrets, }, } + + if woc.wf.Spec.HostNetwork != nil { + pod.Spec.HostNetwork = *woc.wf.Spec.HostNetwork + } + + if woc.wf.Spec.DNSPolicy != nil { + pod.Spec.DNSPolicy = *woc.wf.Spec.DNSPolicy + } + + if woc.wf.Spec.DNSConfig != nil { + pod.Spec.DNSConfig = woc.wf.Spec.DNSConfig + } + if woc.controller.Config.InstanceID != "" { pod.ObjectMeta.Labels[common.LabelKeyControllerInstanceID] = woc.controller.Config.InstanceID } + if woc.controller.Config.ContainerRuntimeExecutor == common.ContainerRuntimeExecutorPNS { + pod.Spec.ShareProcessNamespace = pointer.BoolPtr(true) + } + + err := woc.addArchiveLocation(pod, tmpl) + if err != nil { + return nil, err + } if tmpl.GetType() != wfv1.TemplateTypeResource { // we do not need the wait container for resource templates because @@ -148,6 +132,11 @@ func (woc *wfOperationCtx) createWorkflowPod(nodeName string, mainCtr apiv1.Cont } pod.Spec.Containers = append(pod.Spec.Containers, *waitCtr) } + // NOTE: the order of the container list is significant. kubelet will pull, create, and start + // each container sequentially in the order that they appear in this list. For PNS we want the + // wait container to start before the main, so that it always has the chance to see the main + // container's PID and root filesystem. + pod.Spec.Containers = append(pod.Spec.Containers, mainCtr) // Add init container only if it needs input artifacts. 
This is also true for // script templates (which needs to populate the script) @@ -159,7 +148,7 @@ func (woc *wfOperationCtx) createWorkflowPod(nodeName string, mainCtr apiv1.Cont addSchedulingConstraints(pod, wfSpec, tmpl) woc.addMetadata(pod, tmpl) - err := addVolumeReferences(pod, wfSpec, tmpl, woc.wf.Status.PersistentVolumeClaims) + err = addVolumeReferences(pod, woc.volumes, tmpl, woc.wf.Status.PersistentVolumeClaims) if err != nil { return nil, err } @@ -169,21 +158,21 @@ func (woc *wfOperationCtx) createWorkflowPod(nodeName string, mainCtr apiv1.Cont return nil, err } - err = woc.addArchiveLocation(pod, tmpl) - if err != nil { - return nil, err - } - if tmpl.GetType() == wfv1.TemplateTypeScript { - addExecutorStagingVolume(pod) + addScriptStagingVolume(pod) } - // addSidecars should be called after all volumes have been manipulated - // in the main container (in case sidecar requires volume mount mirroring) + // addInitContainers, addSidecars and addOutputArtifactsVolumes should be called after all + // volumes have been manipulated in the main container since volumeMounts are mirrored + err = addInitContainers(pod, tmpl) + if err != nil { + return nil, err + } err = addSidecars(pod, tmpl) if err != nil { return nil, err } + addOutputArtifactsVolumes(pod, tmpl) // Set the container template JSON in pod annotations, which executor examines for things like // artifact location/path. @@ -194,8 +183,16 @@ func (woc *wfOperationCtx) createWorkflowPod(nodeName string, mainCtr apiv1.Cont pod.ObjectMeta.Annotations[common.AnnotationKeyTemplate] = string(tmplBytes) // Perform one last variable substitution here. Some variables come from the from workflow - // configmap (e.g. archive location), and were not substituted in executeTemplate. - pod, err = substituteGlobals(pod, woc.globalParams) + // configmap (e.g. archive location) or volumes attribute, and were not substituted + // in executeTemplate. 
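// Aside (not part of the patch): a minimal sketch of the {{...}} substitution that
// substitutePodParams performs on the marshalled pod spec. The controller goes through
// common.Replace, which adds validation of unresolved references; this only shows the
// underlying fasttemplate mechanics, and the parameter names are illustrative.
package main

import (
	"fmt"

	"github.com/valyala/fasttemplate"
)

func main() {
	// Imagine this is the JSON-marshalled pod spec with unresolved references left in it.
	spec := `{"claimName":"{{inputs.parameters.volume-name}}","podName":"{{pod.name}}"}`
	podParams := map[string]interface{}{
		"inputs.parameters.volume-name": "test-name",
		"pod.name":                      "my-wf-pod-123",
	}
	tmpl := fasttemplate.New(spec, "{{", "}}")
	fmt.Println(tmpl.ExecuteString(podParams))
}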
+ podParams := make(map[string]string) + for gkey, gval := range woc.globalParams { + podParams[gkey] = gval + } + for _, inParam := range tmpl.Inputs.Parameters { + podParams["inputs.parameters."+inParam.Name] = *inParam.Value + } + pod, err = substitutePodParams(pod, podParams) if err != nil { return nil, err } @@ -231,20 +228,20 @@ func (woc *wfOperationCtx) createWorkflowPod(nodeName string, mainCtr apiv1.Cont return created, nil } -// substituteGlobals returns a pod spec with global parameter references substituted as well as pod.name -func substituteGlobals(pod *apiv1.Pod, globalParams map[string]string) (*apiv1.Pod, error) { - newGlobalParams := make(map[string]string) - for k, v := range globalParams { - newGlobalParams[k] = v +// substitutePodParams returns a pod spec with parameter references substituted as well as pod.name +func substitutePodParams(pod *apiv1.Pod, podParams map[string]string) (*apiv1.Pod, error) { + newPodParams := make(map[string]string) + for k, v := range podParams { + newPodParams[k] = v } - newGlobalParams[common.LocalVarPodName] = pod.Name - globalParams = newGlobalParams + newPodParams[common.LocalVarPodName] = pod.Name + podParams = newPodParams specBytes, err := json.Marshal(pod) if err != nil { return nil, err } fstTmpl := fasttemplate.New(string(specBytes), "{{", "}}") - newSpecBytes, err := common.Replace(fstTmpl, globalParams, true) + newSpecBytes, err := common.Replace(fstTmpl, podParams, true) if err != nil { return nil, err } @@ -257,34 +254,79 @@ func substituteGlobals(pod *apiv1.Pod, globalParams map[string]string) (*apiv1.P } func (woc *wfOperationCtx) newInitContainer(tmpl *wfv1.Template) apiv1.Container { - ctr := woc.newExecContainer(common.InitContainerName, false) - ctr.Command = []string{"argoexec"} - ctr.Args = []string{"init"} - ctr.VolumeMounts = []apiv1.VolumeMount{ - volumeMountPodMetadata, - } + ctr := woc.newExecContainer(common.InitContainerName) + ctr.Command = []string{"argoexec", "init"} return *ctr } func (woc *wfOperationCtx) newWaitContainer(tmpl *wfv1.Template) (*apiv1.Container, error) { - ctr := woc.newExecContainer(common.WaitContainerName, false) - ctr.Command = []string{"argoexec"} - ctr.Args = []string{"wait"} - ctr.VolumeMounts = woc.createVolumeMounts() + ctr := woc.newExecContainer(common.WaitContainerName) + ctr.Command = []string{"argoexec", "wait"} + switch woc.controller.Config.ContainerRuntimeExecutor { + case common.ContainerRuntimeExecutorPNS: + ctr.SecurityContext = &apiv1.SecurityContext{ + Capabilities: &apiv1.Capabilities{ + Add: []apiv1.Capability{ + // necessary to access main's root filesystem when run with a different user id + apiv1.Capability("SYS_PTRACE"), + }, + }, + } + if hasPrivilegedContainers(tmpl) { + // if the main or sidecar is privileged, the wait sidecar must also run privileged, + // in order to SIGTERM/SIGKILL the pid + ctr.SecurityContext.Privileged = pointer.BoolPtr(true) + } + case "", common.ContainerRuntimeExecutorDocker: + ctr.VolumeMounts = append(ctr.VolumeMounts, volumeMountDockerSock) + } return ctr, nil } +// hasPrivilegedContainers tests if the main container or sidecars is privileged +func hasPrivilegedContainers(tmpl *wfv1.Template) bool { + if containerIsPrivileged(tmpl.Container) { + return true + } + for _, side := range tmpl.Sidecars { + if containerIsPrivileged(&side.Container) { + return true + } + } + return false +} + +func containerIsPrivileged(ctr *apiv1.Container) bool { + if ctr != nil && ctr.SecurityContext != nil && ctr.SecurityContext.Privileged != nil && 
*ctr.SecurityContext.Privileged { + return true + } + return false +} + func (woc *wfOperationCtx) createEnvVars() []apiv1.EnvVar { + var execEnvVars []apiv1.EnvVar + execEnvVars = append(execEnvVars, apiv1.EnvVar{ + Name: common.EnvVarPodName, + ValueFrom: &apiv1.EnvVarSource{ + FieldRef: &apiv1.ObjectFieldSelector{ + APIVersion: "v1", + FieldPath: "metadata.name", + }, + }, + }) + if woc.controller.Config.Executor != nil { + execEnvVars = append(execEnvVars, woc.controller.Config.Executor.Env...) + } switch woc.controller.Config.ContainerRuntimeExecutor { case common.ContainerRuntimeExecutorK8sAPI: - return append(execEnvVars, + execEnvVars = append(execEnvVars, apiv1.EnvVar{ Name: common.EnvVarContainerRuntimeExecutor, Value: woc.controller.Config.ContainerRuntimeExecutor, }, ) case common.ContainerRuntimeExecutorKubelet: - return append(execEnvVars, + execEnvVars = append(execEnvVars, apiv1.EnvVar{ Name: common.EnvVarContainerRuntimeExecutor, Value: woc.controller.Config.ContainerRuntimeExecutor, @@ -306,51 +348,87 @@ func (woc *wfOperationCtx) createEnvVars() []apiv1.EnvVar { Value: strconv.FormatBool(woc.controller.Config.KubeletInsecure), }, ) - default: - return execEnvVars - } -} - -func (woc *wfOperationCtx) createVolumeMounts() []apiv1.VolumeMount { - volumeMounts := []apiv1.VolumeMount{ - volumeMountPodMetadata, - } - switch woc.controller.Config.ContainerRuntimeExecutor { - case common.ContainerRuntimeExecutorKubelet: - return volumeMounts - default: - return append(volumeMounts, volumeMountDockerLib, volumeMountDockerSock) + case common.ContainerRuntimeExecutorPNS: + execEnvVars = append(execEnvVars, + apiv1.EnvVar{ + Name: common.EnvVarContainerRuntimeExecutor, + Value: woc.controller.Config.ContainerRuntimeExecutor, + }, + ) } + return execEnvVars } func (woc *wfOperationCtx) createVolumes() []apiv1.Volume { volumes := []apiv1.Volume{ volumePodMetadata, } + if woc.controller.Config.KubeConfig != nil { + name := woc.controller.Config.KubeConfig.VolumeName + if name == "" { + name = common.KubeConfigDefaultVolumeName + } + volumes = append(volumes, apiv1.Volume{ + Name: name, + VolumeSource: apiv1.VolumeSource{ + Secret: &apiv1.SecretVolumeSource{ + SecretName: woc.controller.Config.KubeConfig.SecretName, + }, + }, + }) + } switch woc.controller.Config.ContainerRuntimeExecutor { - case common.ContainerRuntimeExecutorKubelet: + case common.ContainerRuntimeExecutorKubelet, common.ContainerRuntimeExecutorK8sAPI, common.ContainerRuntimeExecutorPNS: return volumes default: - return append(volumes, volumeDockerLib, volumeDockerSock) + return append(volumes, volumeDockerSock) } } -func (woc *wfOperationCtx) newExecContainer(name string, privileged bool) *apiv1.Container { +func (woc *wfOperationCtx) newExecContainer(name string) *apiv1.Container { exec := apiv1.Container{ Name: name, Image: woc.controller.executorImage(), ImagePullPolicy: woc.controller.executorImagePullPolicy(), Env: woc.createEnvVars(), - SecurityContext: &apiv1.SecurityContext{ - Privileged: &privileged, + VolumeMounts: []apiv1.VolumeMount{ + volumeMountPodMetadata, }, } - if woc.controller.Config.ExecutorResources != nil { + if woc.controller.Config.Executor != nil { + exec.Args = woc.controller.Config.Executor.Args + } + if isResourcesSpecified(woc.controller.Config.Executor) { + exec.Resources = woc.controller.Config.Executor.Resources + } else if woc.controller.Config.ExecutorResources != nil { exec.Resources = *woc.controller.Config.ExecutorResources } + if woc.controller.Config.KubeConfig != nil { + path := 
woc.controller.Config.KubeConfig.MountPath + if path == "" { + path = common.KubeConfigDefaultMountPath + } + name := woc.controller.Config.KubeConfig.VolumeName + if name == "" { + name = common.KubeConfigDefaultVolumeName + } + exec.VolumeMounts = []apiv1.VolumeMount{{ + Name: name, + MountPath: path, + ReadOnly: true, + SubPath: woc.controller.Config.KubeConfig.SecretKey, + }, + } + exec.Args = append(exec.Args, "--kubeconfig="+path) + } + return &exec } +func isResourcesSpecified(ctr *apiv1.Container) bool { + return ctr != nil && (ctr.Resources.Limits.Cpu() != nil || ctr.Resources.Limits.Memory() != nil) +} + // addMetadata applies metadata specified in the template func (woc *wfOperationCtx) addMetadata(pod *apiv1.Pod, tmpl *wfv1.Template) { for k, v := range tmpl.Metadata.Annotations { @@ -391,11 +469,36 @@ func addSchedulingConstraints(pod *apiv1.Pod, wfSpec *wfv1.WorkflowSpec, tmpl *w } else if len(wfSpec.Tolerations) > 0 { pod.Spec.Tolerations = wfSpec.Tolerations } + + // Set scheduler name (if specified) + if tmpl.SchedulerName != "" { + pod.Spec.SchedulerName = tmpl.SchedulerName + } else if wfSpec.SchedulerName != "" { + pod.Spec.SchedulerName = wfSpec.SchedulerName + } + // Set priorityClass (if specified) + if tmpl.PriorityClassName != "" { + pod.Spec.PriorityClassName = tmpl.PriorityClassName + } else if wfSpec.PodPriorityClassName != "" { + pod.Spec.PriorityClassName = wfSpec.PodPriorityClassName + } + // Set priority (if specified) + if tmpl.Priority != nil { + pod.Spec.Priority = tmpl.Priority + } else if wfSpec.PodPriority != nil { + pod.Spec.Priority = wfSpec.PodPriority + } + // Set schedulerName (if specified) + if tmpl.SchedulerName != "" { + pod.Spec.SchedulerName = tmpl.SchedulerName + } else if wfSpec.SchedulerName != "" { + pod.Spec.SchedulerName = wfSpec.SchedulerName + } } // addVolumeReferences adds any volumeMounts that a container/sidecar is referencing, to the pod.spec.volumes // These are either specified in the workflow.spec.volumes or the workflow.spec.volumeClaimTemplate section -func addVolumeReferences(pod *apiv1.Pod, wfSpec *wfv1.WorkflowSpec, tmpl *wfv1.Template, pvcs []apiv1.Volume) error { +func addVolumeReferences(pod *apiv1.Pod, vols []apiv1.Volume, tmpl *wfv1.Template, pvcs []apiv1.Volume) error { switch tmpl.GetType() { case wfv1.TemplateTypeContainer, wfv1.TemplateTypeScript: default: @@ -404,7 +507,7 @@ func addVolumeReferences(pod *apiv1.Pod, wfSpec *wfv1.WorkflowSpec, tmpl *wfv1.T // getVolByName is a helper to retrieve a volume by its name, either from the volumes or claims section getVolByName := func(name string) *apiv1.Volume { - for _, vol := range wfSpec.Volumes { + for _, vol := range vols { if vol.Name == name { return &vol } @@ -439,6 +542,7 @@ func addVolumeReferences(pod *apiv1.Pod, wfSpec *wfv1.WorkflowSpec, tmpl *wfv1.T } return nil } + if tmpl.Container != nil { err := addVolumeRef(tmpl.Container.VolumeMounts) if err != nil { @@ -451,12 +555,30 @@ func addVolumeReferences(pod *apiv1.Pod, wfSpec *wfv1.WorkflowSpec, tmpl *wfv1.T return err } } + for _, sidecar := range tmpl.Sidecars { err := addVolumeRef(sidecar.VolumeMounts) if err != nil { return err } } + + volumes, volumeMounts := createSecretVolumes(tmpl) + pod.Spec.Volumes = append(pod.Spec.Volumes, volumes...) + + for idx, container := range pod.Spec.Containers { + if container.Name == common.WaitContainerName { + pod.Spec.Containers[idx].VolumeMounts = append(pod.Spec.Containers[idx].VolumeMounts, volumeMounts...) 
+ break + } + } + for idx, container := range pod.Spec.InitContainers { + if container.Name == common.InitContainerName { + pod.Spec.InitContainers[idx].VolumeMounts = append(pod.Spec.InitContainers[idx].VolumeMounts, volumeMounts...) + break + } + } + return nil } @@ -500,7 +622,7 @@ func (woc *wfOperationCtx) addInputArtifactsVolumes(pod *apiv1.Pod, tmpl *wfv1.T // instead of the artifacts volume if tmpl.Container != nil { for _, mnt := range tmpl.Container.VolumeMounts { - mnt.MountPath = path.Join(common.InitContainerMainFilesystemDir, mnt.MountPath) + mnt.MountPath = filepath.Join(common.ExecutorMainFilesystemDir, mnt.MountPath) initCtr.VolumeMounts = append(initCtr.VolumeMounts, mnt) } } @@ -509,19 +631,19 @@ func (woc *wfOperationCtx) addInputArtifactsVolumes(pod *apiv1.Pod, tmpl *wfv1.T } } - mainCtrIndex := 0 - var mainCtr *apiv1.Container + mainCtrIndex := -1 for i, ctr := range pod.Spec.Containers { - if ctr.Name == common.MainContainerName { + switch ctr.Name { + case common.MainContainerName: mainCtrIndex = i - mainCtr = &pod.Spec.Containers[i] + break } } - if mainCtr == nil { - panic("Could not find main container in pod spec") + if mainCtrIndex == -1 { + panic("Could not find main or wait container in pod spec") } - // TODO: the order in which we construct the volume mounts may matter, - // especially if they are overlapping. + mainCtr := &pod.Spec.Containers[mainCtrIndex] + for _, art := range tmpl.Inputs.Artifacts { if art.Path == "" { return errors.Errorf(errors.CodeBadRequest, "inputs.artifacts.%s did not specify a path", art.Name) @@ -549,31 +671,77 @@ func (woc *wfOperationCtx) addInputArtifactsVolumes(pod *apiv1.Pod, tmpl *wfv1.T return nil } -// addArchiveLocation updates the template with the default artifact repository information -// configured in the controller. This is skipped for templates which have explicitly set an archive -// location in the template. -func (woc *wfOperationCtx) addArchiveLocation(pod *apiv1.Pod, tmpl *wfv1.Template) error { - if tmpl.ArchiveLocation == nil { - tmpl.ArchiveLocation = &wfv1.ArtifactLocation{ - ArchiveLogs: woc.controller.Config.ArtifactRepository.ArchiveLogs, +// addOutputArtifactsVolumes mirrors any volume mounts in the main container to the wait sidecar. +// For any output artifacts that were produced in mounted volumes (e.g. PVCs, emptyDirs), the +// wait container will collect the artifacts directly from volumeMount instead of `docker cp`-ing +// them to the wait sidecar. In order for this to work, we mirror all volume mounts in the main +// container under a well-known path. +func addOutputArtifactsVolumes(pod *apiv1.Pod, tmpl *wfv1.Template) { + if tmpl.GetType() == wfv1.TemplateTypeResource { + return + } + mainCtrIndex := -1 + waitCtrIndex := -1 + var mainCtr *apiv1.Container + for i, ctr := range pod.Spec.Containers { + switch ctr.Name { + case common.MainContainerName: + mainCtrIndex = i + case common.WaitContainerName: + waitCtrIndex = i } } - if tmpl.ArchiveLocation.S3 != nil || tmpl.ArchiveLocation.Artifactory != nil { - // User explicitly set the location. nothing else to do. 
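// Aside (not part of the patch): the volume-mount "mirroring" idea behind
// addOutputArtifactsVolumes, reduced to a standalone helper. Every mount of the main
// container is re-mounted in the wait sidecar under a well-known prefix so the sidecar
// can read outputs directly instead of `docker cp`-ing them. The prefix constant below
// is illustrative; the executor has its own well-known value for it.
package main

import (
	"fmt"
	"path/filepath"

	apiv1 "k8s.io/api/core/v1"
)

const mainFilesystemDir = "/mainctrfs" // stand-in for the executor's mirror prefix

func mirrorMounts(mainMounts []apiv1.VolumeMount) []apiv1.VolumeMount {
	mirrored := make([]apiv1.VolumeMount, 0, len(mainMounts))
	for _, mnt := range mainMounts {
		// mnt is a copy, so rewriting its fields does not touch the main container's mounts.
		mnt.MountPath = filepath.Join(mainFilesystemDir, mnt.MountPath)
		mnt.ReadOnly = false // overlapping mounts must not be read-only
		mirrored = append(mirrored, mnt)
	}
	return mirrored
}

func main() {
	mainMounts := []apiv1.VolumeMount{{Name: "workdir", MountPath: "/work"}}
	for _, m := range mirrorMounts(mainMounts) {
		fmt.Printf("%s mirrored at %s\n", m.Name, m.MountPath)
	}
}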
- return nil + if mainCtrIndex == -1 || waitCtrIndex == -1 { + panic("Could not find main or wait container in pod spec") + } + mainCtr = &pod.Spec.Containers[mainCtrIndex] + waitCtr := &pod.Spec.Containers[waitCtrIndex] + + for _, mnt := range mainCtr.VolumeMounts { + mnt.MountPath = filepath.Join(common.ExecutorMainFilesystemDir, mnt.MountPath) + // ReadOnly is needed to be false for overlapping volume mounts + mnt.ReadOnly = false + waitCtr.VolumeMounts = append(waitCtr.VolumeMounts, mnt) } + pod.Spec.Containers[waitCtrIndex] = *waitCtr +} + +// addArchiveLocation conditionally updates the template with the default artifact repository +// information configured in the controller, for the purposes of archiving outputs. This is skipped +// for templates which do not need to archive anything, or have explicitly set an archive location +// in the template. +func (woc *wfOperationCtx) addArchiveLocation(pod *apiv1.Pod, tmpl *wfv1.Template) error { // needLocation keeps track if the workflow needs to have an archive location set. // If so, and one was not supplied (or defaulted), we will return error var needLocation bool - if tmpl.ArchiveLocation.ArchiveLogs != nil && *tmpl.ArchiveLocation.ArchiveLogs { - needLocation = true - } + if tmpl.ArchiveLocation != nil { + if tmpl.ArchiveLocation.S3 != nil || tmpl.ArchiveLocation.Artifactory != nil || tmpl.ArchiveLocation.HDFS != nil { + // User explicitly set the location. nothing else to do. + return nil + } + if tmpl.ArchiveLocation.ArchiveLogs != nil && *tmpl.ArchiveLocation.ArchiveLogs { + needLocation = true + } + } + for _, art := range tmpl.Outputs.Artifacts { + if !art.HasLocation() { + needLocation = true + break + } + } + if !needLocation { + woc.log.Debugf("archive location unnecessary") + return nil + } + tmpl.ArchiveLocation = &wfv1.ArtifactLocation{ + ArchiveLogs: woc.controller.Config.ArtifactRepository.ArchiveLogs, + } // artifact location is defaulted using the following formula: // //.tgz // (e.g. 
myworkflowartifacts/argo-wf-fhljp/argo-wf-fhljp-123291312382/src.tgz) if s3Location := woc.controller.Config.ArtifactRepository.S3; s3Location != nil { - log.Debugf("Setting s3 artifact repository information") + woc.log.Debugf("Setting s3 artifact repository information") artLocationKey := s3Location.KeyFormat // NOTE: we use unresolved variables, will get substituted later if artLocationKey == "" { @@ -584,7 +752,7 @@ func (woc *wfOperationCtx) addArchiveLocation(pod *apiv1.Pod, tmpl *wfv1.Templat Key: artLocationKey, } } else if woc.controller.Config.ArtifactRepository.Artifactory != nil { - log.Debugf("Setting artifactory artifact repository information") + woc.log.Debugf("Setting artifactory artifact repository information") repoURL := "" if woc.controller.Config.ArtifactRepository.Artifactory.RepoURL != "" { repoURL = woc.controller.Config.ArtifactRepository.Artifactory.RepoURL + "/" @@ -594,23 +762,29 @@ func (woc *wfOperationCtx) addArchiveLocation(pod *apiv1.Pod, tmpl *wfv1.Templat ArtifactoryAuth: woc.controller.Config.ArtifactRepository.Artifactory.ArtifactoryAuth, URL: artURL, } - } else { - for _, art := range tmpl.Outputs.Artifacts { - if !art.HasLocation() { - needLocation = true - break - } + } else if hdfsLocation := woc.controller.Config.ArtifactRepository.HDFS; hdfsLocation != nil { + woc.log.Debugf("Setting HDFS artifact repository information") + tmpl.ArchiveLocation.HDFS = &wfv1.HDFSArtifact{ + HDFSConfig: hdfsLocation.HDFSConfig, + Path: hdfsLocation.PathFormat, + Force: hdfsLocation.Force, } - if needLocation { - return errors.Errorf(errors.CodeBadRequest, "controller is not configured with a default archive location") + } else if woc.controller.Config.ArtifactRepository.GCS != nil { + log.Debugf("Setting GCS artifact repository information") + artLocationKey := fmt.Sprintf("%s/%s", woc.wf.ObjectMeta.Name, pod.ObjectMeta.Name) + tmpl.ArchiveLocation.GCS = &wfv1.GCSArtifact{ + GCSBucket: woc.controller.Config.ArtifactRepository.GCS.GCSBucket, + Key: artLocationKey, } + } else { + return errors.Errorf(errors.CodeBadRequest, "controller is not configured with a default archive location") } return nil } -// addExecutorStagingVolume sets up a shared staging volume between the init container +// addScriptStagingVolume sets up a shared staging volume between the init container // and main container for the purpose of holding the script source code for script templates -func addExecutorStagingVolume(pod *apiv1.Pod) { +func addScriptStagingVolume(pod *apiv1.Pod) { volName := "argo-staging" stagingVol := apiv1.Volume{ Name: volName, @@ -638,11 +812,7 @@ func addExecutorStagingVolume(pod *apiv1.Pod) { Name: volName, MountPath: common.ExecutorStagingEmptyDir, } - if ctr.VolumeMounts == nil { - ctr.VolumeMounts = []apiv1.VolumeMount{volMount} - } else { - ctr.VolumeMounts = append(ctr.VolumeMounts, volMount) - } + ctr.VolumeMounts = append(ctr.VolumeMounts, volMount) pod.Spec.Containers[i] = ctr found = true break @@ -653,31 +823,40 @@ func addExecutorStagingVolume(pod *apiv1.Pod) { } } +// addInitContainers adds all init containers to the pod spec of the step +// Optionally volume mounts from the main container to the init containers +func addInitContainers(pod *apiv1.Pod, tmpl *wfv1.Template) error { + if len(tmpl.InitContainers) == 0 { + return nil + } + mainCtr := findMainContainer(pod) + if mainCtr == nil { + panic("Unable to locate main container") + } + for _, ctr := range tmpl.InitContainers { + log.Debugf("Adding init container %s", ctr.Name) + if ctr.MirrorVolumeMounts 
!= nil && *ctr.MirrorVolumeMounts { + mirrorVolumeMounts(mainCtr, &ctr.Container) + } + pod.Spec.InitContainers = append(pod.Spec.InitContainers, ctr.Container) + } + return nil +} + // addSidecars adds all sidecars to the pod spec of the step. // Optionally volume mounts from the main container to the sidecar func addSidecars(pod *apiv1.Pod, tmpl *wfv1.Template) error { if len(tmpl.Sidecars) == 0 { return nil } - var mainCtr *apiv1.Container - for _, ctr := range pod.Spec.Containers { - if ctr.Name != common.MainContainerName { - continue - } - mainCtr = &ctr - break - } + mainCtr := findMainContainer(pod) if mainCtr == nil { panic("Unable to locate main container") } for _, sidecar := range tmpl.Sidecars { + log.Debugf("Adding sidecar container %s", sidecar.Name) if sidecar.MirrorVolumeMounts != nil && *sidecar.MirrorVolumeMounts { - for _, volMnt := range mainCtr.VolumeMounts { - if sidecar.VolumeMounts == nil { - sidecar.VolumeMounts = make([]apiv1.VolumeMount, 0) - } - sidecar.VolumeMounts = append(sidecar.VolumeMounts, volMnt) - } + mirrorVolumeMounts(mainCtr, &sidecar.Container) } pod.Spec.Containers = append(pod.Spec.Containers, sidecar.Container) } @@ -698,3 +877,130 @@ func verifyResolvedVariables(obj interface{}) error { }) return unresolvedErr } + +// createSecretVolumes will retrieve and create Volumes and Volumemount object for Pod +func createSecretVolumes(tmpl *wfv1.Template) ([]apiv1.Volume, []apiv1.VolumeMount) { + var allVolumesMap = make(map[string]apiv1.Volume) + var uniqueKeyMap = make(map[string]bool) + var secretVolumes []apiv1.Volume + var secretVolMounts []apiv1.VolumeMount + + createArchiveLocationSecret(tmpl, allVolumesMap, uniqueKeyMap) + + for _, art := range tmpl.Outputs.Artifacts { + createSecretVolume(allVolumesMap, art, uniqueKeyMap) + } + for _, art := range tmpl.Inputs.Artifacts { + createSecretVolume(allVolumesMap, art, uniqueKeyMap) + } + + for volMountName, val := range allVolumesMap { + secretVolumes = append(secretVolumes, val) + secretVolMounts = append(secretVolMounts, apiv1.VolumeMount{ + Name: volMountName, + MountPath: common.SecretVolMountPath + "/" + val.Name, + ReadOnly: true, + }) + } + + return secretVolumes, secretVolMounts +} + +func createArchiveLocationSecret(tmpl *wfv1.Template, volMap map[string]apiv1.Volume, uniqueKeyMap map[string]bool) { + if tmpl.ArchiveLocation == nil { + return + } + if s3ArtRepo := tmpl.ArchiveLocation.S3; s3ArtRepo != nil { + createSecretVal(volMap, &s3ArtRepo.AccessKeySecret, uniqueKeyMap) + createSecretVal(volMap, &s3ArtRepo.SecretKeySecret, uniqueKeyMap) + } else if hdfsArtRepo := tmpl.ArchiveLocation.HDFS; hdfsArtRepo != nil { + createSecretVal(volMap, hdfsArtRepo.KrbKeytabSecret, uniqueKeyMap) + createSecretVal(volMap, hdfsArtRepo.KrbCCacheSecret, uniqueKeyMap) + } else if artRepo := tmpl.ArchiveLocation.Artifactory; artRepo != nil { + createSecretVal(volMap, artRepo.UsernameSecret, uniqueKeyMap) + createSecretVal(volMap, artRepo.PasswordSecret, uniqueKeyMap) + } else if gitRepo := tmpl.ArchiveLocation.Git; gitRepo != nil { + createSecretVal(volMap, gitRepo.UsernameSecret, uniqueKeyMap) + createSecretVal(volMap, gitRepo.PasswordSecret, uniqueKeyMap) + createSecretVal(volMap, gitRepo.SSHPrivateKeySecret, uniqueKeyMap) + } else if gcsRepo := tmpl.ArchiveLocation.GCS; gcsRepo != nil { + createSecretVal(volMap, &gcsRepo.CredentialsSecret, uniqueKeyMap) + } +} + +func createSecretVolume(volMap map[string]apiv1.Volume, art wfv1.Artifact, keyMap map[string]bool) { + if art.S3 != nil { + createSecretVal(volMap, 
&art.S3.AccessKeySecret, keyMap) + createSecretVal(volMap, &art.S3.SecretKeySecret, keyMap) + } else if art.Git != nil { + createSecretVal(volMap, art.Git.UsernameSecret, keyMap) + createSecretVal(volMap, art.Git.PasswordSecret, keyMap) + createSecretVal(volMap, art.Git.SSHPrivateKeySecret, keyMap) + } else if art.Artifactory != nil { + createSecretVal(volMap, art.Artifactory.UsernameSecret, keyMap) + createSecretVal(volMap, art.Artifactory.PasswordSecret, keyMap) + } else if art.HDFS != nil { + createSecretVal(volMap, art.HDFS.KrbCCacheSecret, keyMap) + createSecretVal(volMap, art.HDFS.KrbKeytabSecret, keyMap) + } else if art.GCS != nil { + createSecretVal(volMap, &art.GCS.CredentialsSecret, keyMap) + } +} + +func createSecretVal(volMap map[string]apiv1.Volume, secret *apiv1.SecretKeySelector, keyMap map[string]bool) { + if secret == nil { + return + } + if vol, ok := volMap[secret.Name]; ok { + key := apiv1.KeyToPath{ + Key: secret.Key, + Path: secret.Key, + } + if val, _ := keyMap[secret.Name+"-"+secret.Key]; !val { + keyMap[secret.Name+"-"+secret.Key] = true + vol.Secret.Items = append(vol.Secret.Items, key) + } + } else { + volume := apiv1.Volume{ + Name: secret.Name, + VolumeSource: apiv1.VolumeSource{ + Secret: &apiv1.SecretVolumeSource{ + SecretName: secret.Name, + Items: []apiv1.KeyToPath{ + { + Key: secret.Key, + Path: secret.Key, + }, + }, + }, + }, + } + keyMap[secret.Name+"-"+secret.Key] = true + volMap[secret.Name] = volume + } +} + +// findMainContainer finds main container +func findMainContainer(pod *apiv1.Pod) *apiv1.Container { + var mainCtr *apiv1.Container + for _, ctr := range pod.Spec.Containers { + if ctr.Name != common.MainContainerName { + continue + } + mainCtr = &ctr + break + } + return mainCtr +} + +// mirrorVolumeMounts mirrors volumeMounts of source container to target container +func mirrorVolumeMounts(sourceContainer, targetContainer *apiv1.Container) { + for _, volMnt := range sourceContainer.VolumeMounts { + if targetContainer.VolumeMounts == nil { + targetContainer.VolumeMounts = make([]apiv1.VolumeMount, 0) + } + log.Debugf("Adding volume mount %v to container %v", volMnt.Name, targetContainer.Name) + targetContainer.VolumeMounts = append(targetContainer.VolumeMounts, volMnt) + + } +} diff --git a/workflow/controller/workflowpod_test.go b/workflow/controller/workflowpod_test.go index 5039994e0c9d..32cabe9a9c71 100644 --- a/workflow/controller/workflowpod_test.go +++ b/workflow/controller/workflowpod_test.go @@ -1,9 +1,12 @@ package controller import ( + "encoding/json" + "fmt" "testing" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/workflow/common" "github.com/ghodss/yaml" "github.com/stretchr/testify/assert" apiv1 "k8s.io/api/core/v1" @@ -180,7 +183,16 @@ func TestWorkflowControllerArchiveConfig(t *testing.T) { // TestWorkflowControllerArchiveConfigUnresolvable verifies workflow fails when archive location has // unresolvable variables func TestWorkflowControllerArchiveConfigUnresolvable(t *testing.T) { - woc := newWoc() + wf := unmarshalWF(helloWorldWf) + wf.Spec.Templates[0].Outputs = wfv1.Outputs{ + Artifacts: []wfv1.Artifact{ + { + Name: "foo", + Path: "/tmp/file", + }, + }, + } + woc := newWoc(*wf) woc.controller.Config.ArtifactRepository.S3 = &S3ArtifactRepository{ S3Bucket: wfv1.S3Bucket{ Bucket: "foo", @@ -192,3 +204,327 @@ func TestWorkflowControllerArchiveConfigUnresolvable(t *testing.T) { _, err := 
woc.controller.kubeclientset.CoreV1().Pods("").Get(podName, metav1.GetOptions{}) assert.Error(t, err) } + +// TestConditionalNoAddArchiveLocation verifies we do not add archive location if it is not needed +func TestConditionalNoAddArchiveLocation(t *testing.T) { + woc := newWoc() + woc.controller.Config.ArtifactRepository.S3 = &S3ArtifactRepository{ + S3Bucket: wfv1.S3Bucket{ + Bucket: "foo", + }, + KeyFormat: "path/in/bucket", + } + woc.operate() + podName := getPodName(woc.wf) + pod, err := woc.controller.kubeclientset.CoreV1().Pods("").Get(podName, metav1.GetOptions{}) + assert.NoError(t, err) + var tmpl wfv1.Template + err = json.Unmarshal([]byte(pod.Annotations[common.AnnotationKeyTemplate]), &tmpl) + assert.NoError(t, err) + assert.Nil(t, tmpl.ArchiveLocation) +} + +// TestConditionalNoAddArchiveLocation verifies we add archive location when it is needed +func TestConditionalArchiveLocation(t *testing.T) { + wf := unmarshalWF(helloWorldWf) + wf.Spec.Templates[0].Outputs = wfv1.Outputs{ + Artifacts: []wfv1.Artifact{ + { + Name: "foo", + Path: "/tmp/file", + }, + }, + } + woc := newWoc() + woc.controller.Config.ArtifactRepository.S3 = &S3ArtifactRepository{ + S3Bucket: wfv1.S3Bucket{ + Bucket: "foo", + }, + KeyFormat: "path/in/bucket", + } + woc.operate() + podName := getPodName(woc.wf) + pod, err := woc.controller.kubeclientset.CoreV1().Pods("").Get(podName, metav1.GetOptions{}) + assert.NoError(t, err) + var tmpl wfv1.Template + err = json.Unmarshal([]byte(pod.Annotations[common.AnnotationKeyTemplate]), &tmpl) + assert.NoError(t, err) + assert.Nil(t, tmpl.ArchiveLocation) +} + +// TestVolumeAndVolumeMounts verifies the ability to carry forward volumes and volumeMounts from workflow.spec +func TestVolumeAndVolumeMounts(t *testing.T) { + volumes := []apiv1.Volume{ + { + Name: "volume-name", + VolumeSource: apiv1.VolumeSource{ + EmptyDir: &apiv1.EmptyDirVolumeSource{}, + }, + }, + } + volumeMounts := []apiv1.VolumeMount{ + { + Name: "volume-name", + MountPath: "/test", + }, + } + + // For Docker executor + { + woc := newWoc() + woc.volumes = volumes + woc.wf.Spec.Templates[0].Container.VolumeMounts = volumeMounts + woc.controller.Config.ContainerRuntimeExecutor = common.ContainerRuntimeExecutorDocker + + woc.executeContainer(woc.wf.Spec.Entrypoint, &woc.wf.Spec.Templates[0], "") + podName := getPodName(woc.wf) + pod, err := woc.controller.kubeclientset.CoreV1().Pods("").Get(podName, metav1.GetOptions{}) + assert.Nil(t, err) + assert.Equal(t, 3, len(pod.Spec.Volumes)) + assert.Equal(t, "podmetadata", pod.Spec.Volumes[0].Name) + assert.Equal(t, "docker-sock", pod.Spec.Volumes[1].Name) + assert.Equal(t, "volume-name", pod.Spec.Volumes[2].Name) + assert.Equal(t, 1, len(pod.Spec.Containers[1].VolumeMounts)) + assert.Equal(t, "volume-name", pod.Spec.Containers[1].VolumeMounts[0].Name) + } + + // For Kubelet executor + { + woc := newWoc() + woc.volumes = volumes + woc.wf.Spec.Templates[0].Container.VolumeMounts = volumeMounts + woc.controller.Config.ContainerRuntimeExecutor = common.ContainerRuntimeExecutorKubelet + + woc.executeContainer(woc.wf.Spec.Entrypoint, &woc.wf.Spec.Templates[0], "") + podName := getPodName(woc.wf) + pod, err := woc.controller.kubeclientset.CoreV1().Pods("").Get(podName, metav1.GetOptions{}) + assert.Nil(t, err) + assert.Equal(t, 2, len(pod.Spec.Volumes)) + assert.Equal(t, "podmetadata", pod.Spec.Volumes[0].Name) + assert.Equal(t, "volume-name", pod.Spec.Volumes[1].Name) + assert.Equal(t, 1, len(pod.Spec.Containers[1].VolumeMounts)) + assert.Equal(t, "volume-name", 
pod.Spec.Containers[1].VolumeMounts[0].Name) + } + + // For K8sAPI executor + { + woc := newWoc() + woc.volumes = volumes + woc.wf.Spec.Templates[0].Container.VolumeMounts = volumeMounts + woc.controller.Config.ContainerRuntimeExecutor = common.ContainerRuntimeExecutorK8sAPI + + woc.executeContainer(woc.wf.Spec.Entrypoint, &woc.wf.Spec.Templates[0], "") + podName := getPodName(woc.wf) + pod, err := woc.controller.kubeclientset.CoreV1().Pods("").Get(podName, metav1.GetOptions{}) + assert.Nil(t, err) + assert.Equal(t, 2, len(pod.Spec.Volumes)) + assert.Equal(t, "podmetadata", pod.Spec.Volumes[0].Name) + assert.Equal(t, "volume-name", pod.Spec.Volumes[1].Name) + assert.Equal(t, 1, len(pod.Spec.Containers[1].VolumeMounts)) + assert.Equal(t, "volume-name", pod.Spec.Containers[1].VolumeMounts[0].Name) + } +} + +func TestVolumesPodSubstitution(t *testing.T) { + volumes := []apiv1.Volume{ + { + Name: "volume-name", + VolumeSource: apiv1.VolumeSource{ + PersistentVolumeClaim: &apiv1.PersistentVolumeClaimVolumeSource{ + ClaimName: "{{inputs.parameters.volume-name}}", + }, + }, + }, + } + volumeMounts := []apiv1.VolumeMount{ + { + Name: "volume-name", + MountPath: "/test", + }, + } + tmpStr := "test-name" + inputParameters := []wfv1.Parameter{ + { + Name: "volume-name", + Value: &tmpStr, + }, + } + + woc := newWoc() + woc.volumes = volumes + woc.wf.Spec.Templates[0].Container.VolumeMounts = volumeMounts + woc.wf.Spec.Templates[0].Inputs.Parameters = inputParameters + woc.controller.Config.ContainerRuntimeExecutor = common.ContainerRuntimeExecutorDocker + + woc.executeContainer(woc.wf.Spec.Entrypoint, &woc.wf.Spec.Templates[0], "") + podName := getPodName(woc.wf) + pod, err := woc.controller.kubeclientset.CoreV1().Pods("").Get(podName, metav1.GetOptions{}) + assert.Nil(t, err) + assert.Equal(t, 3, len(pod.Spec.Volumes)) + assert.Equal(t, "volume-name", pod.Spec.Volumes[2].Name) + assert.Equal(t, "test-name", pod.Spec.Volumes[2].PersistentVolumeClaim.ClaimName) + assert.Equal(t, 1, len(pod.Spec.Containers[1].VolumeMounts)) + assert.Equal(t, "volume-name", pod.Spec.Containers[1].VolumeMounts[0].Name) +} + +func TestOutOfCluster(t *testing.T) { + + verifyKubeConfigVolume := func(ctr apiv1.Container, volName, mountPath string) { + for _, vol := range ctr.VolumeMounts { + if vol.Name == volName && vol.MountPath == mountPath { + for _, arg := range ctr.Args { + if arg == fmt.Sprintf("--kubeconfig=%s", mountPath) { + return + } + } + } + } + t.Fatalf("%v does not have kubeconfig mounted properly (name: %s, mountPath: %s)", ctr, volName, mountPath) + } + + // default mount path & volume name + { + woc := newWoc() + woc.controller.Config.KubeConfig = &KubeConfig{ + SecretName: "foo", + SecretKey: "bar", + } + + woc.executeContainer(woc.wf.Spec.Entrypoint, &woc.wf.Spec.Templates[0], "") + podName := getPodName(woc.wf) + pod, err := woc.controller.kubeclientset.CoreV1().Pods("").Get(podName, metav1.GetOptions{}) + + assert.Nil(t, err) + assert.Equal(t, "kubeconfig", pod.Spec.Volumes[1].Name) + assert.Equal(t, "foo", pod.Spec.Volumes[1].VolumeSource.Secret.SecretName) + + waitCtr := pod.Spec.Containers[0] + verifyKubeConfigVolume(waitCtr, "kubeconfig", "/kube/config") + } + + // custom mount path & volume name, in case name collision + { + woc := newWoc() + woc.controller.Config.KubeConfig = &KubeConfig{ + SecretName: "foo", + SecretKey: "bar", + MountPath: "/some/path/config", + VolumeName: "kube-config-secret", + } + + woc.executeContainer(woc.wf.Spec.Entrypoint, &woc.wf.Spec.Templates[0], "") + podName := 
getPodName(woc.wf) + pod, err := woc.controller.kubeclientset.CoreV1().Pods("").Get(podName, metav1.GetOptions{}) + + assert.Nil(t, err) + assert.Equal(t, "kube-config-secret", pod.Spec.Volumes[1].Name) + assert.Equal(t, "foo", pod.Spec.Volumes[1].VolumeSource.Secret.SecretName) + + // kubeconfig volume is the last one + waitCtr := pod.Spec.Containers[0] + verifyKubeConfigVolume(waitCtr, "kube-config-secret", "/some/path/config") + } +} + +// TestPriority verifies the ability to carry forward priorityClassName and priority. +func TestPriority(t *testing.T) { + priority := int32(15) + woc := newWoc() + woc.wf.Spec.Templates[0].PriorityClassName = "foo" + woc.wf.Spec.Templates[0].Priority = &priority + woc.executeContainer(woc.wf.Spec.Entrypoint, &woc.wf.Spec.Templates[0], "") + podName := getPodName(woc.wf) + pod, err := woc.controller.kubeclientset.CoreV1().Pods("").Get(podName, metav1.GetOptions{}) + assert.Nil(t, err) + assert.Equal(t, pod.Spec.PriorityClassName, "foo") + assert.Equal(t, pod.Spec.Priority, &priority) +} + +// TestSchedulerName verifies the ability to carry forward schedulerName. +func TestSchedulerName(t *testing.T) { + woc := newWoc() + woc.wf.Spec.Templates[0].SchedulerName = "foo" + woc.executeContainer(woc.wf.Spec.Entrypoint, &woc.wf.Spec.Templates[0], "") + podName := getPodName(woc.wf) + pod, err := woc.controller.kubeclientset.CoreV1().Pods("").Get(podName, metav1.GetOptions{}) + assert.Nil(t, err) + assert.Equal(t, pod.Spec.SchedulerName, "foo") +} + +// TestInitContainers verifies the ability to set up initContainers +func TestInitContainers(t *testing.T) { + volumes := []apiv1.Volume{ + { + Name: "volume-name", + VolumeSource: apiv1.VolumeSource{ + EmptyDir: &apiv1.EmptyDirVolumeSource{}, + }, + }, + } + volumeMounts := []apiv1.VolumeMount{ + { + Name: "volume-name", + MountPath: "/test", + }, + } + mirrorVolumeMounts := true + + woc := newWoc() + woc.volumes = volumes + woc.wf.Spec.Templates[0].Container.VolumeMounts = volumeMounts + woc.wf.Spec.Templates[0].InitContainers = []wfv1.UserContainer{ + { + MirrorVolumeMounts: &mirrorVolumeMounts, + Container: apiv1.Container{ + Name: "init-foo", + }, + }, + } + + woc.executeContainer(woc.wf.Spec.Entrypoint, &woc.wf.Spec.Templates[0], "") + podName := getPodName(woc.wf) + pod, err := woc.controller.kubeclientset.CoreV1().Pods("").Get(podName, metav1.GetOptions{}) + assert.Nil(t, err) + assert.Equal(t, 1, len(pod.Spec.InitContainers)) + assert.Equal(t, "init-foo", pod.Spec.InitContainers[0].Name) +} + +// TestSidecars verifies the ability to set up sidecars +func TestSidecars(t *testing.T) { + volumes := []apiv1.Volume{ + { + Name: "volume-name", + VolumeSource: apiv1.VolumeSource{ + EmptyDir: &apiv1.EmptyDirVolumeSource{}, + }, + }, + } + volumeMounts := []apiv1.VolumeMount{ + { + Name: "volume-name", + MountPath: "/test", + }, + } + mirrorVolumeMounts := true + + woc := newWoc() + woc.volumes = volumes + woc.wf.Spec.Templates[0].Container.VolumeMounts = volumeMounts + woc.wf.Spec.Templates[0].Sidecars = []wfv1.UserContainer{ + { + MirrorVolumeMounts: &mirrorVolumeMounts, + Container: apiv1.Container{ + Name: "side-foo", + }, + }, + } + + woc.executeContainer(woc.wf.Spec.Entrypoint, &woc.wf.Spec.Templates[0], "") + podName := getPodName(woc.wf) + pod, err := woc.controller.kubeclientset.CoreV1().Pods("").Get(podName, metav1.GetOptions{}) + assert.Nil(t, err) + assert.Equal(t, 3, len(pod.Spec.Containers)) + assert.Equal(t, "wait", pod.Spec.Containers[0].Name) + assert.Equal(t, "main", pod.Spec.Containers[1].Name) + 
assert.Equal(t, "side-foo", pod.Spec.Containers[2].Name) +} diff --git a/workflow/executor/common/common.go b/workflow/executor/common/common.go index e5b94cc38f4b..dd218c447305 100644 --- a/workflow/executor/common/common.go +++ b/workflow/executor/common/common.go @@ -19,7 +19,7 @@ const ( // killGracePeriod is the time in seconds after sending SIGTERM before // forcefully killing the sidecar with SIGKILL (value matches k8s) -const killGracePeriod = 10 +const KillGracePeriod = 10 // GetContainerID returns container ID of a ContainerStatus resource func GetContainerID(container *v1.ContainerStatus) string { @@ -32,9 +32,9 @@ func GetContainerID(container *v1.ContainerStatus) string { // KubernetesClientInterface is the interface to implement getContainerStatus method type KubernetesClientInterface interface { - getContainerStatus(containerID string) (*v1.Pod, *v1.ContainerStatus, error) - killContainer(pod *v1.Pod, container *v1.ContainerStatus, sig syscall.Signal) error - createArchive(containerID, sourcePath string) (*bytes.Buffer, error) + GetContainerStatus(containerID string) (*v1.Pod, *v1.ContainerStatus, error) + KillContainer(pod *v1.Pod, container *v1.ContainerStatus, sig syscall.Signal) error + CreateArchive(containerID, sourcePath string) (*bytes.Buffer, error) } // WaitForTermination of the given containerID, set the timeout to 0 to discard it @@ -52,7 +52,7 @@ func WaitForTermination(c KubernetesClientInterface, containerID string, timeout for { select { case <-ticker.C: - _, containerStatus, err := c.getContainerStatus(containerID) + _, containerStatus, err := c.GetContainerStatus(containerID) if err != nil { return err } @@ -70,7 +70,7 @@ func WaitForTermination(c KubernetesClientInterface, containerID string, timeout // TerminatePodWithContainerID invoke the given SIG against the PID1 of the container. // No-op if the container is on the hostPID func TerminatePodWithContainerID(c KubernetesClientInterface, containerID string, sig syscall.Signal) error { - pod, container, err := c.getContainerStatus(containerID) + pod, container, err := c.GetContainerStatus(containerID) if err != nil { return err } @@ -84,7 +84,7 @@ func TerminatePodWithContainerID(c KubernetesClientInterface, containerID string if pod.Spec.RestartPolicy != "Never" { return fmt.Errorf("cannot terminate pod with a %q restart policy", pod.Spec.RestartPolicy) } - return c.killContainer(pod, container, sig) + return c.KillContainer(pod, container, sig) } // KillGracefully kills a container gracefully. @@ -94,7 +94,7 @@ func KillGracefully(c KubernetesClientInterface, containerID string) error { if err != nil { return err } - err = WaitForTermination(c, containerID, time.Second*killGracePeriod) + err = WaitForTermination(c, containerID, time.Second*KillGracePeriod) if err == nil { log.Infof("ContainerID %q successfully killed", containerID) return nil @@ -104,7 +104,7 @@ func KillGracefully(c KubernetesClientInterface, containerID string) error { if err != nil { return err } - err = WaitForTermination(c, containerID, time.Second*killGracePeriod) + err = WaitForTermination(c, containerID, time.Second*KillGracePeriod) if err != nil { return err } @@ -115,7 +115,7 @@ func KillGracefully(c KubernetesClientInterface, containerID string) error { // CopyArchive downloads files and directories as a tarball and saves it to a specified path. 
func CopyArchive(c KubernetesClientInterface, containerID, sourcePath, destPath string) error { log.Infof("Archiving %s:%s to %s", containerID, sourcePath, destPath) - b, err := c.createArchive(containerID, sourcePath) + b, err := c.CreateArchive(containerID, sourcePath) if err != nil { return err } diff --git a/workflow/executor/docker/docker.go b/workflow/executor/docker/docker.go index 64ed6c87f734..bee8550bdf24 100644 --- a/workflow/executor/docker/docker.go +++ b/workflow/executor/docker/docker.go @@ -1,22 +1,22 @@ package docker import ( + "archive/tar" + "compress/gzip" "fmt" + "io" "os" "os/exec" - "strings" "time" - "github.com/argoproj/argo/util" - - "github.com/argoproj/argo/errors" - "github.com/argoproj/argo/workflow/common" log "github.com/sirupsen/logrus" -) -// killGracePeriod is the time in seconds after sending SIGTERM before -// forcefully killing the sidecar with SIGKILL (value matches k8s) -const killGracePeriod = 10 + "github.com/cyrusbiotechnology/argo/errors" + "github.com/cyrusbiotechnology/argo/util" + "github.com/cyrusbiotechnology/argo/util/file" + "github.com/cyrusbiotechnology/argo/workflow/common" + execcommon "github.com/cyrusbiotechnology/argo/workflow/executor/common" +) type DockerExecutor struct{} @@ -51,38 +51,48 @@ func (d *DockerExecutor) CopyFile(containerID string, sourcePath string, destPat if err != nil { return err } + copiedFile, err := os.Open(destPath) + if err != nil { + return err + } + defer util.Close(copiedFile) + gzipReader, err := gzip.NewReader(copiedFile) + if err != nil { + return err + } + if !file.ExistsInTar(sourcePath, tar.NewReader(gzipReader)) { + errMsg := fmt.Sprintf("path %s does not exist (or %s is empty) in archive %s", sourcePath, sourcePath, destPath) + log.Warn(errMsg) + return errors.Errorf(errors.CodeNotFound, errMsg) + } log.Infof("Archiving completed") return nil } -// GetOutput returns the entirety of the container output as a string -// Used to capturing script results as an output parameter -func (d *DockerExecutor) GetOutput(containerID string) (string, error) { +func (d *DockerExecutor) GetOutputStream(containerID string, combinedOutput bool) (io.ReadCloser, error) { cmd := exec.Command("docker", "logs", containerID) log.Info(cmd.Args) - outBytes, _ := cmd.Output() - return strings.TrimSpace(string(outBytes)), nil -} - -// Wait for the container to complete -func (d *DockerExecutor) Wait(containerID string) error { - return common.RunCommand("docker", "wait", containerID) -} - -// Logs captures the logs of a container to a file -func (d *DockerExecutor) Logs(containerID string, path string) error { - cmd := exec.Command("docker", "logs", containerID) - outfile, err := os.Create(path) + if combinedOutput { + cmd.Stderr = cmd.Stdout + } + reader, err := cmd.StdoutPipe() if err != nil { - return errors.InternalWrapError(err) + return nil, errors.InternalWrapError(err) } - defer util.Close(outfile) - cmd.Stdout = outfile err = cmd.Start() if err != nil { - return errors.InternalWrapError(err) + return nil, errors.InternalWrapError(err) } - return cmd.Wait() + return reader, nil +} + +func (d *DockerExecutor) WaitInit() error { + return nil +} + +// Wait for the container to complete +func (d *DockerExecutor) Wait(containerID string) error { + return common.RunCommand("docker", "wait", containerID) } // killContainers kills a list of containerIDs first with a SIGTERM then with a SIGKILL after a grace period @@ -101,8 +111,8 @@ func (d *DockerExecutor) Kill(containerIDs []string) error { // waitCmd.Wait() might 
return error "signal: killed" when we SIGKILL the process // We ignore errors in this case //ignoreWaitError := false - timer := time.AfterFunc(killGracePeriod*time.Second, func() { - log.Infof("Timed out (%ds) for containers to terminate gracefully. Killing forcefully", killGracePeriod) + timer := time.AfterFunc(execcommon.KillGracePeriod*time.Second, func() { + log.Infof("Timed out (%ds) for containers to terminate gracefully. Killing forcefully", execcommon.KillGracePeriod) forceKillArgs := append([]string{"kill", "--signal", "KILL"}, containerIDs...) forceKillCmd := exec.Command("docker", forceKillArgs...) log.Info(forceKillCmd.Args) diff --git a/workflow/executor/executor.go b/workflow/executor/executor.go index 017d89dfad09..9c13caf139ec 100644 --- a/workflow/executor/executor.go +++ b/workflow/executor/executor.go @@ -3,6 +3,7 @@ package executor import ( "bufio" "bytes" + "compress/gzip" "context" "encoding/json" "fmt" @@ -14,32 +15,45 @@ import ( "os/signal" "path" "path/filepath" + "regexp" "runtime/debug" "strings" "syscall" "time" - "github.com/argoproj/argo/errors" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" - "github.com/argoproj/argo/util/retry" - artifact "github.com/argoproj/argo/workflow/artifacts" - "github.com/argoproj/argo/workflow/artifacts/artifactory" - "github.com/argoproj/argo/workflow/artifacts/git" - "github.com/argoproj/argo/workflow/artifacts/http" - "github.com/argoproj/argo/workflow/artifacts/raw" - "github.com/argoproj/argo/workflow/artifacts/s3" - "github.com/argoproj/argo/workflow/common" argofile "github.com/argoproj/pkg/file" - "github.com/fsnotify/fsnotify" log "github.com/sirupsen/logrus" apiv1 "k8s.io/api/core/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/fields" "k8s.io/apimachinery/pkg/util/wait" + "k8s.io/apimachinery/pkg/watch" "k8s.io/client-go/kubernetes" + + "github.com/cyrusbiotechnology/argo/errors" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/util/archive" + "github.com/cyrusbiotechnology/argo/util/retry" + artifact "github.com/cyrusbiotechnology/argo/workflow/artifacts" + "github.com/cyrusbiotechnology/argo/workflow/artifacts/artifactory" + "github.com/cyrusbiotechnology/argo/workflow/artifacts/gcs" + "github.com/cyrusbiotechnology/argo/workflow/artifacts/git" + "github.com/cyrusbiotechnology/argo/workflow/artifacts/hdfs" + "github.com/cyrusbiotechnology/argo/workflow/artifacts/http" + "github.com/cyrusbiotechnology/argo/workflow/artifacts/raw" + "github.com/cyrusbiotechnology/argo/workflow/artifacts/s3" + "github.com/cyrusbiotechnology/argo/workflow/common" +) + +const ( + // This directory temporarily stores the tarballs of the artifacts before uploading + tempOutArtDir = "/argo/outputs/artifacts" ) // WorkflowExecutor is program which runs as the init/wait container type WorkflowExecutor struct { + common.ResourceInterface + PodName string Template wfv1.Template ClientSet kubernetes.Interface @@ -50,8 +64,10 @@ type WorkflowExecutor struct { // memoized container ID to prevent multiple lookups mainContainerID string + // memoized configmaps + memoizedConfigMaps map[string]string // memoized secrets - memoizedSecrets map[string]string + memoizedSecrets map[string][]byte // list of errors that occurred during execution. 
// the first of these is used as the overall message of the node errors []error @@ -65,29 +81,32 @@ type ContainerRuntimeExecutor interface { // CopyFile copies a source file in a container to a local path CopyFile(containerID string, sourcePath string, destPath string) error - // GetOutput returns the entirety of the container output as a string - // Used to capturing script results as an output parameter - GetOutput(containerID string) (string, error) + // GetOutputStream returns the entirety of the container output as a io.Reader + // Used to capture script results as an output parameter, and to archive container logs + GetOutputStream(containerID string, combinedOutput bool) (io.ReadCloser, error) - // Wait for the container to complete - Wait(containerID string) error + // WaitInit is called before Wait() to signal the executor about an impending Wait call. + // For most executors this is a noop, and is only used by the the PNS executor + WaitInit() error - // Copy logs to a given path - Logs(containerID string, path string) error + // Wait waits for the container to complete + Wait(containerID string) error // Kill a list of containerIDs first with a SIGTERM then with a SIGKILL after a grace period Kill(containerIDs []string) error } // NewExecutor instantiates a new workflow executor -func NewExecutor(clientset kubernetes.Interface, podName, namespace, podAnnotationsPath string, cre ContainerRuntimeExecutor) WorkflowExecutor { +func NewExecutor(clientset kubernetes.Interface, podName, namespace, podAnnotationsPath string, cre ContainerRuntimeExecutor, template wfv1.Template) WorkflowExecutor { return WorkflowExecutor{ PodName: podName, ClientSet: clientset, Namespace: namespace, PodAnnotationsPath: podAnnotationsPath, RuntimeExecutor: cre, - memoizedSecrets: map[string]string{}, + Template: template, + memoizedConfigMaps: map[string]string{}, + memoizedSecrets: map[string][]byte{}, errors: []error{}, } } @@ -104,12 +123,22 @@ func (we *WorkflowExecutor) HandleError() { } } -// LoadArtifacts loads aftifacts from location to a container path +// LoadArtifacts loads artifacts from location to a container path func (we *WorkflowExecutor) LoadArtifacts() error { log.Infof("Start loading input artifacts...") for _, art := range we.Template.Inputs.Artifacts { + log.Infof("Downloading artifact: %s", art.Name) + + if !art.HasLocation() { + if art.Optional { + log.Warnf("Ignoring optional artifact '%s' which was not supplied", art.Name) + continue + } else { + return errors.New("required artifact %s not supplied", art.Name) + } + } artDriver, err := we.InitDriver(art) if err != nil { return err @@ -129,7 +158,7 @@ func (we *WorkflowExecutor) LoadArtifacts() error { // as opposed to the `input-artifacts` volume that is an implementation detail // unbeknownst to the user. log.Infof("Specified artifact path %s overlaps with volume mount at %s. 
Extracting to volume mount", art.Path, mnt.MountPath) - artPath = path.Join(common.InitContainerMainFilesystemDir, art.Path) + artPath = path.Join(common.ExecutorMainFilesystemDir, art.Path) } // The artifact is downloaded to a temporary location, after which we determine if @@ -196,15 +225,13 @@ func (we *WorkflowExecutor) SaveArtifacts() error { return err } - // This directory temporarily stores the tarballs of the artifacts before uploading - tempOutArtDir := "/argo/outputs/artifacts" err = os.MkdirAll(tempOutArtDir, os.ModePerm) if err != nil { return errors.InternalWrapError(err) } for i, art := range we.Template.Outputs.Artifacts { - err := we.saveArtifact(tempOutArtDir, mainCtrID, &art) + err := we.saveArtifact(mainCtrID, &art) if err != nil { return err } @@ -213,27 +240,19 @@ func (we *WorkflowExecutor) SaveArtifacts() error { return nil } -func (we *WorkflowExecutor) saveArtifact(tempOutArtDir string, mainCtrID string, art *wfv1.Artifact) error { - log.Infof("Saving artifact: %s", art.Name) +func (we *WorkflowExecutor) saveArtifact(mainCtrID string, art *wfv1.Artifact) error { // Determine the file path of where to find the artifact if art.Path == "" { return errors.InternalErrorf("Artifact %s did not specify a path", art.Name) } - - // fileName is incorporated into the final path when uploading it to the artifact repo - fileName := fmt.Sprintf("%s.tgz", art.Name) - // localArtPath is the final staging location of the file (or directory) which we will pass - // to the SaveArtifacts call - localArtPath := path.Join(tempOutArtDir, fileName) - err := we.RuntimeExecutor.CopyFile(mainCtrID, art.Path, localArtPath) - if err != nil { - return err - } - fileName, localArtPath, err = stageArchiveFile(fileName, localArtPath, art) + fileName, localArtPath, err := we.stageArchiveFile(mainCtrID, art) if err != nil { + if art.Optional && errors.IsCode(errors.CodeNotFound, err) { + log.Warnf("Ignoring optional artifact '%s' which does not exist in path '%s': %v", art.Name, art.Path, err) + return nil + } return err } - if !art.HasLocation() { // If user did not explicitly set an artifact destination location in the template, // use the default archive location (appended with the filename). @@ -253,6 +272,14 @@ func (we *WorkflowExecutor) saveArtifact(tempOutArtDir string, mainCtrID string, } artifactoryURL.Path = path.Join(artifactoryURL.Path, fileName) art.Artifactory.URL = artifactoryURL.String() + } else if we.Template.ArchiveLocation.HDFS != nil { + shallowCopy := *we.Template.ArchiveLocation.HDFS + art.HDFS = &shallowCopy + art.HDFS.Path = path.Join(art.HDFS.Path, fileName) + } else if we.Template.ArchiveLocation.GCS != nil { + shallowCopy := *we.Template.ArchiveLocation.GCS + art.GCS = &shallowCopy + art.GCS.Key = path.Join(art.GCS.Key, fileName) } else { return errors.Errorf(errors.CodeBadRequest, "Unable to determine path to store %s. Archive location provided no information", art.Name) } @@ -276,7 +303,13 @@ func (we *WorkflowExecutor) saveArtifact(tempOutArtDir string, mainCtrID string, return nil } -func stageArchiveFile(fileName, localArtPath string, art *wfv1.Artifact) (string, string, error) { +// stageArchiveFile stages a path in a container for archiving from the wait sidecar. +// Returns a filename and a local path for the upload. +// The filename is incorporated into the final path when uploading it to the artifact repo. +// The local path is the final staging location of the file (or directory) which we will pass +// to the SaveArtifacts call and may be a directory or file. 
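// Aside (not part of the patch): staging a file or directory as <name>.tgz, roughly what
// the executor's tar-gzip step does when it can read outputs from a mirrored volume
// mount. This is a simplified stand-in (no symlink or ownership handling), and the paths
// in main are hypothetical.
package main

import (
	"archive/tar"
	"compress/gzip"
	"io"
	"os"
	"path/filepath"
)

func tarGzToFile(sourcePath, destPath string) error {
	out, err := os.Create(destPath)
	if err != nil {
		return err
	}
	defer out.Close()
	gz := gzip.NewWriter(out)
	defer gz.Close()
	tw := tar.NewWriter(gz)
	defer tw.Close()

	// Walk the source and write every entry relative to the source's parent directory.
	return filepath.Walk(sourcePath, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		rel, err := filepath.Rel(filepath.Dir(sourcePath), path)
		if err != nil {
			return err
		}
		hdr.Name = rel
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		if info.IsDir() {
			return nil
		}
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(tw, f)
		return err
	})
}

func main() {
	if err := tarGzToFile("/tmp/outputs", "/tmp/outputs.tgz"); err != nil {
		panic(err)
	}
}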
+func (we *WorkflowExecutor) stageArchiveFile(mainCtrID string, art *wfv1.Artifact) (string, string, error) { + log.Infof("Staging artifact: %s", art.Name) strategy := art.Archive if strategy == nil { // If no strategy is specified, default to the tar strategy @@ -284,44 +317,83 @@ func stageArchiveFile(fileName, localArtPath string, art *wfv1.Artifact) (string Tar: &wfv1.TarStrategy{}, } } - tempOutArtDir := filepath.Dir(localArtPath) - if strategy.None != nil { - log.Info("Disabling archive before upload") - unarchivedArtPath := path.Join(tempOutArtDir, art.Name) - err := untar(localArtPath, unarchivedArtPath) - if err != nil { - return "", "", err + + if !we.isBaseImagePath(art.Path) { + // If we get here, we are uploading an artifact from a mirrored volume mount which the wait + // sidecar has direct access to. We can upload directly from the shared volume mount, + // instead of copying it from the container. + mountedArtPath := filepath.Join(common.ExecutorMainFilesystemDir, art.Path) + log.Infof("Staging %s from mirrored volume mount %s", art.Path, mountedArtPath) + if strategy.None != nil { + fileName := filepath.Base(art.Path) + log.Infof("No compression strategy needed. Staging skipped") + return fileName, mountedArtPath, nil } - // Delete the tarball - err = os.Remove(localArtPath) + fileName := fmt.Sprintf("%s.tgz", art.Name) + localArtPath := filepath.Join(tempOutArtDir, fileName) + f, err := os.Create(localArtPath) if err != nil { return "", "", errors.InternalWrapError(err) } - isDir, err := argofile.IsDirectory(unarchivedArtPath) + w := bufio.NewWriter(f) + err = archive.TarGzToWriter(mountedArtPath, w) if err != nil { - return "", "", errors.InternalWrapError(err) + return "", "", err } - fileName = filepath.Base(art.Path) - if isDir { - localArtPath = unarchivedArtPath - } else { - // If we are uploading a single file, we need to preserve original filename so that - // 1. minio client can infer its mime-type, based on file extension - // 2. the original filename is incorporated into the final path - localArtPath = path.Join(tempOutArtDir, fileName) - err = os.Rename(unarchivedArtPath, localArtPath) - if err != nil { - return "", "", errors.InternalWrapError(err) - } + log.Infof("Successfully staged %s from mirrored volume mount %s", art.Path, mountedArtPath) + return fileName, localArtPath, nil + } + + fileName := fmt.Sprintf("%s.tgz", art.Name) + localArtPath := filepath.Join(tempOutArtDir, fileName) + log.Infof("Copying %s from container base image layer to %s", art.Path, localArtPath) + + err := we.RuntimeExecutor.CopyFile(mainCtrID, art.Path, localArtPath) + if err != nil { + return "", "", err + } + if strategy.Tar != nil { + // NOTE we already tar gzip the file in the executor. So this is a noop. + return fileName, localArtPath, nil + } + // localArtPath now points to a .tgz file, and the archive strategy is *not* tar. 
We need to untar it + log.Infof("Untaring %s archive before upload", localArtPath) + unarchivedArtPath := path.Join(filepath.Dir(localArtPath), art.Name) + err = untar(localArtPath, unarchivedArtPath) + if err != nil { + return "", "", err + } + // Delete the tarball + err = os.Remove(localArtPath) + if err != nil { + return "", "", errors.InternalWrapError(err) + } + isDir, err := argofile.IsDirectory(unarchivedArtPath) + if err != nil { + return "", "", errors.InternalWrapError(err) + } + fileName = filepath.Base(art.Path) + if isDir { + localArtPath = unarchivedArtPath + } else { + // If we are uploading a single file, we need to preserve original filename so that + // 1. minio client can infer its mime-type, based on file extension + // 2. the original filename is incorporated into the final path + localArtPath = path.Join(tempOutArtDir, fileName) + err = os.Rename(unarchivedArtPath, localArtPath) + if err != nil { + return "", "", errors.InternalWrapError(err) } - } else if strategy.Tar != nil { - // NOTE we already tar gzip the file in the executor. So this is a noop. In the future, if - // we were to support other compression formats (e.g. bzip2) or options, the logic would go - // here, and compression would be moved out of the executors. } + // In the future, if we were to support other compression formats (e.g. bzip2) or options + // the logic would go here, and compression would be moved out of the executors return fileName, localArtPath, nil } +func (we *WorkflowExecutor) isBaseImagePath(path string) bool { + return common.FindOverlappingVolume(&we.Template, path) == nil +} + // SaveParameters will save the content in the specified file path as output parameter value func (we *WorkflowExecutor) SaveParameters() error { if len(we.Template.Outputs.Parameters) == 0 { @@ -340,10 +412,24 @@ func (we *WorkflowExecutor) SaveParameters() error { if param.ValueFrom == nil || param.ValueFrom.Path == "" { continue } - output, err := we.RuntimeExecutor.GetFileContents(mainCtrID, param.ValueFrom.Path) - if err != nil { - return err + + var output string + if we.isBaseImagePath(param.ValueFrom.Path) { + log.Infof("Copying %s from base image layer", param.ValueFrom.Path) + output, err = we.RuntimeExecutor.GetFileContents(mainCtrID, param.ValueFrom.Path) + if err != nil { + return err + } + } else { + log.Infof("Copying %s from from volume mount", param.ValueFrom.Path) + mountedPath := filepath.Join(common.ExecutorMainFilesystemDir, param.ValueFrom.Path) + out, err := ioutil.ReadFile(mountedPath) + if err != nil { + return err + } + output = string(out) } + outputLen := len(output) // Trims off a single newline for user convenience if outputLen > 0 && output[outputLen-1] == '\n' { @@ -355,27 +441,38 @@ func (we *WorkflowExecutor) SaveParameters() error { return nil } +func (we *WorkflowExecutor) saveLogsToPath(logDir string, fileName string) (outputPath string, err error) { + mainCtrID, err := we.GetMainContainerID() + if err != nil { + return + } + err = os.MkdirAll(logDir, os.ModePerm) + if err != nil { + err = errors.InternalWrapError(err) + return + } + outputPath = path.Join(logDir, fileName) + err = we.saveLogToFile(mainCtrID, outputPath) + if err != nil { + return + } + return +} + // SaveLogs saves logs func (we *WorkflowExecutor) SaveLogs() (*wfv1.Artifact, error) { if we.Template.ArchiveLocation == nil || we.Template.ArchiveLocation.ArchiveLogs == nil || !*we.Template.ArchiveLocation.ArchiveLogs { return nil, nil } log.Infof("Saving logs") - mainCtrID, err := we.GetMainContainerID() 
- if err != nil { - return nil, err - } - tempLogsDir := "/argo/outputs/logs" - err = os.MkdirAll(tempLogsDir, os.ModePerm) - if err != nil { - return nil, errors.InternalWrapError(err) - } fileName := "main.log" - mainLog := path.Join(tempLogsDir, fileName) - err = we.RuntimeExecutor.Logs(mainCtrID, mainLog) + tempLogsDir := "/argo/outputs/logs" + + mainLog, err := we.saveLogsToPath(tempLogsDir, fileName) if err != nil { return nil, err } + art := wfv1.Artifact{ Name: "main-logs", ArtifactLocation: *we.Template.ArchiveLocation, @@ -393,6 +490,10 @@ func (we *WorkflowExecutor) SaveLogs() (*wfv1.Artifact, error) { } artifactoryURL.Path = path.Join(artifactoryURL.Path, fileName) art.Artifactory.URL = artifactoryURL.String() + } else if we.Template.ArchiveLocation.HDFS != nil { + shallowCopy := *we.Template.ArchiveLocation.HDFS + art.HDFS = &shallowCopy + art.HDFS.Path = path.Join(art.HDFS.Path, fileName) } else { return nil, errors.Errorf(errors.CodeBadRequest, "Unable to determine path to store %s. Archive location provided no information", art.Name) } @@ -408,6 +509,30 @@ func (we *WorkflowExecutor) SaveLogs() (*wfv1.Artifact, error) { return &art, nil } +// GetSecretFromVolMount will retrive the Secrets from VolumeMount +func (we *WorkflowExecutor) GetSecretFromVolMount(accessKeyName string, accessKey string) ([]byte, error) { + return ioutil.ReadFile(filepath.Join(common.SecretVolMountPath, accessKeyName, accessKey)) +} + +// saveLogToFile saves the entire log output of a container to a local file +func (we *WorkflowExecutor) saveLogToFile(mainCtrID, path string) error { + outFile, err := os.Create(path) + if err != nil { + return errors.InternalWrapError(err) + } + defer func() { _ = outFile.Close() }() + reader, err := we.RuntimeExecutor.GetOutputStream(mainCtrID, true) + if err != nil { + return err + } + defer func() { _ = reader.Close() }() + _, err = io.Copy(outFile, reader) + if err != nil { + return errors.InternalWrapError(err) + } + return nil +} + // InitDriver initializes an instance of an artifact driver func (we *WorkflowExecutor) InitDriver(art wfv1.Artifact) (artifact.ArtifactDriver, error) { if art.S3 != nil { @@ -415,15 +540,16 @@ func (we *WorkflowExecutor) InitDriver(art wfv1.Artifact) (artifact.ArtifactDriv var secretKey string if art.S3.AccessKeySecret.Name != "" { - var err error - accessKey, err = we.getSecrets(we.Namespace, art.S3.AccessKeySecret.Name, art.S3.AccessKeySecret.Key) + accessKeyBytes, err := we.GetSecretFromVolMount(art.S3.AccessKeySecret.Name, art.S3.AccessKeySecret.Key) if err != nil { return nil, err } - secretKey, err = we.getSecrets(we.Namespace, art.S3.SecretKeySecret.Name, art.S3.SecretKeySecret.Key) + accessKey = string(accessKeyBytes) + secretKeyBytes, err := we.GetSecretFromVolMount(art.S3.SecretKeySecret.Name, art.S3.SecretKeySecret.Key) if err != nil { return nil, err } + secretKey = string(secretKeyBytes) } driver := s3.S3ArtifactDriver{ @@ -435,54 +561,74 @@ func (we *WorkflowExecutor) InitDriver(art wfv1.Artifact) (artifact.ArtifactDriv } return &driver, nil } + if art.GCS != nil { + credsJSONData, err := we.GetSecretFromVolMount(art.GCS.CredentialsSecret.Name, art.GCS.CredentialsSecret.Key) + if err != nil { + return nil, err + } + driver := gcs.GCSArtifactDriver{ + CredsJSONData: credsJSONData, + } + return &driver, nil + } + if art.GCS != nil { + driver := gcs.GCSArtifactDriver{} + return &driver, nil + } if art.HTTP != nil { return &http.HTTPArtifactDriver{}, nil } if art.Git != nil { - gitDriver := git.GitArtifactDriver{} + 
gitDriver := git.GitArtifactDriver{ + InsecureIgnoreHostKey: art.Git.InsecureIgnoreHostKey, + } if art.Git.UsernameSecret != nil { - username, err := we.getSecrets(we.Namespace, art.Git.UsernameSecret.Name, art.Git.UsernameSecret.Key) + usernameBytes, err := we.GetSecretFromVolMount(art.Git.UsernameSecret.Name, art.Git.UsernameSecret.Key) if err != nil { return nil, err } - gitDriver.Username = username + gitDriver.Username = string(usernameBytes) } if art.Git.PasswordSecret != nil { - password, err := we.getSecrets(we.Namespace, art.Git.PasswordSecret.Name, art.Git.PasswordSecret.Key) + passwordBytes, err := we.GetSecretFromVolMount(art.Git.PasswordSecret.Name, art.Git.PasswordSecret.Key) if err != nil { return nil, err } - gitDriver.Password = password + gitDriver.Password = string(passwordBytes) } if art.Git.SSHPrivateKeySecret != nil { - sshPrivateKey, err := we.getSecrets(we.Namespace, art.Git.SSHPrivateKeySecret.Name, art.Git.SSHPrivateKeySecret.Key) + sshPrivateKeyBytes, err := we.GetSecretFromVolMount(art.Git.SSHPrivateKeySecret.Name, art.Git.SSHPrivateKeySecret.Key) if err != nil { return nil, err } - gitDriver.SSHPrivateKey = sshPrivateKey + gitDriver.SSHPrivateKey = string(sshPrivateKeyBytes) } return &gitDriver, nil } if art.Artifactory != nil { - username, err := we.getSecrets(we.Namespace, art.Artifactory.UsernameSecret.Name, art.Artifactory.UsernameSecret.Key) + usernameBytes, err := we.GetSecretFromVolMount(art.Artifactory.UsernameSecret.Name, art.Artifactory.UsernameSecret.Key) if err != nil { return nil, err } - password, err := we.getSecrets(we.Namespace, art.Artifactory.PasswordSecret.Name, art.Artifactory.PasswordSecret.Key) + passwordBytes, err := we.GetSecretFromVolMount(art.Artifactory.PasswordSecret.Name, art.Artifactory.PasswordSecret.Key) if err != nil { return nil, err } driver := artifactory.ArtifactoryArtifactDriver{ - Username: username, - Password: password, + Username: string(usernameBytes), + Password: string(passwordBytes), } return &driver, nil } + if art.HDFS != nil { + return hdfs.CreateDriver(we, art.HDFS) + } if art.Raw != nil { return &raw.RawArtifactDriver{}, nil } + return nil, errors.Errorf(errors.CodeBadRequest, "Unsupported artifact driver for %s", art.Name) } @@ -508,8 +654,48 @@ func (we *WorkflowExecutor) getPod() (*apiv1.Pod, error) { return pod, nil } -// getSecrets retrieves a secret value and memoizes the result -func (we *WorkflowExecutor) getSecrets(namespace, name, key string) (string, error) { +// GetNamespace returns the namespace +func (we *WorkflowExecutor) GetNamespace() string { + return we.Namespace +} + +// GetConfigMapKey retrieves a configmap value and memoizes the result +func (we *WorkflowExecutor) GetConfigMapKey(namespace, name, key string) (string, error) { + cachedKey := fmt.Sprintf("%s/%s/%s", namespace, name, key) + if val, ok := we.memoizedConfigMaps[cachedKey]; ok { + return val, nil + } + configmapsIf := we.ClientSet.CoreV1().ConfigMaps(namespace) + var configmap *apiv1.ConfigMap + var err error + _ = wait.ExponentialBackoff(retry.DefaultRetry, func() (bool, error) { + configmap, err = configmapsIf.Get(name, metav1.GetOptions{}) + if err != nil { + log.Warnf("Failed to get configmap '%s': %v", name, err) + if !retry.IsRetryableKubeAPIError(err) { + return false, err + } + return false, nil + } + return true, nil + }) + if err != nil { + return "", errors.InternalWrapError(err) + } + // memoize all keys in the configmap since it's highly likely we will need to get a + // subsequent key in the configmap (e.g. 
username + password) and we can save an API call + for k, v := range configmap.Data { + we.memoizedConfigMaps[fmt.Sprintf("%s/%s/%s", namespace, name, k)] = v + } + val, ok := we.memoizedConfigMaps[cachedKey] + if !ok { + return "", errors.Errorf(errors.CodeBadRequest, "configmap '%s' does not have the key '%s'", name, key) + } + return val, nil +} + +// GetSecrets retrieves a secret value and memoizes the result +func (we *WorkflowExecutor) GetSecrets(namespace, name, key string) ([]byte, error) { cachedKey := fmt.Sprintf("%s/%s/%s", namespace, name, key) if val, ok := we.memoizedSecrets[cachedKey]; ok { return val, nil @@ -529,16 +715,16 @@ func (we *WorkflowExecutor) getSecrets(namespace, name, key string) (string, err return true, nil }) if err != nil { - return "", errors.InternalWrapError(err) + return []byte{}, errors.InternalWrapError(err) } // memoize all keys in the secret since it's highly likely we will need to get a // subsequent key in the secret (e.g. username + password) and we can save an API call for k, v := range secret.Data { - we.memoizedSecrets[fmt.Sprintf("%s/%s/%s", namespace, name, k)] = string(v) + we.memoizedSecrets[fmt.Sprintf("%s/%s/%s", namespace, name, k)] = v } val, ok := we.memoizedSecrets[cachedKey] if !ok { - return "", errors.Errorf(errors.CodeBadRequest, "secret '%s' does not have the key '%s'", name, key) + return []byte{}, errors.Errorf(errors.CodeBadRequest, "secret '%s' does not have the key '%s'", name, key) } return val, nil } @@ -583,10 +769,21 @@ func (we *WorkflowExecutor) CaptureScriptResult() error { if err != nil { return err } - out, err := we.RuntimeExecutor.GetOutput(mainContainerID) + reader, err := we.RuntimeExecutor.GetOutputStream(mainContainerID, false) if err != nil { return err } + defer func() { _ = reader.Close() }() + bytes, err := ioutil.ReadAll(reader) + if err != nil { + return errors.InternalWrapError(err) + } + out := string(bytes) + // Trims off a single newline for user convenience + outputLen := len(out) + if outputLen > 0 && out[outputLen-1] == '\n' { + out = out[0 : outputLen-1] + } we.Template.Outputs.Result = &out return nil } @@ -611,6 +808,7 @@ func (we *WorkflowExecutor) AnnotateOutputs(logArt *wfv1.Artifact) error { // AddError adds an error to the list of encountered errors durign execution func (we *WorkflowExecutor) AddError(err error) { + log.Errorf("executor error: %+v", err) we.errors = append(we.errors, err) } @@ -619,6 +817,153 @@ func (we *WorkflowExecutor) AddAnnotation(key, value string) error { return common.AddPodAnnotation(we.ClientSet, we.PodName, we.Namespace, key, value) } +type ConditionType string + +const ( + ConditionTypeError ConditionType = "error" + ConditionTypeWarning ConditionType = "warning" +) + +func (we *WorkflowExecutor) EvaluateConditions(conditionMode ConditionType) error { + + var resultsLocation *[]wfv1.ExceptionCondition + var annotationKey string + + if conditionMode == ConditionTypeError { + resultsLocation = &we.Template.Errors + annotationKey = common.AnnotationKeyErrors + + } else if conditionMode == ConditionTypeWarning { + resultsLocation = &we.Template.Warnings + annotationKey = common.AnnotationKeyWarnings + } else { + return errors.InternalErrorf("The valid condition types are 'error' or 'warning', got %s instead", string(conditionMode)) + } + + results, err := we.evaluatePatternConditions(resultsLocation) + if err != nil { + return errors.InternalWrapError(err) + } + + if results != nil { + errorResultBytes, err := json.Marshal(results) + if err != nil { + return 
errors.InternalWrapError(err) + } + + return we.AddAnnotation(annotationKey, string(errorResultBytes)) + } + return nil +} + +func (we *WorkflowExecutor) fetchFileForErrorHandling(fileSource string) (logData []byte, err error) { + + var logPath string + + if fileSource[0] == '/' { + mainCtrID, err := we.GetMainContainerID() + if err != nil { + return nil, err + } + baseDir := "/argo/logs/" + uncompressedLogPath := baseDir + filepath.Base(fileSource) + + // RuntimeExecutor.CopyFile gzips the file + logPath = uncompressedLogPath + ".gz" + + if _, err := os.Stat(logPath); os.IsNotExist(err) { + err = os.MkdirAll(baseDir, os.ModePerm) + if err != nil { + err = errors.InternalWrapError(err) + return nil, err + } + err = we.RuntimeExecutor.CopyFile(mainCtrID, fileSource, logPath) + if err != nil { + return nil, err + } + } + + } else if fileSource == "stdout" { + + logPath = "/argo/logs/main.log" + if _, err := os.Stat(logPath); os.IsNotExist(err) { + _, err = we.saveLogsToPath("/argo/logs", "main.log") + if err != nil { + return nil, err + } + } + } else { + err = errors.InternalErrorf("fileSource must be an absolute path or 'stdout', got %s instead", fileSource) + return + + } + + logFile, err := os.Open(logPath) + if err != nil { + log.Errorf("Error attempting to open logfile %s", logPath) + return + } + defer logFile.Close() + + if strings.HasSuffix(logPath, ".gz") { + decompressed, err := gzip.NewReader(logFile) + if err != nil { + return nil, err + } + logData, err = ioutil.ReadAll(decompressed) + } else { + logData, err = ioutil.ReadAll(logFile) + } + + return +} + +func (we *WorkflowExecutor) evaluatePatternConditions(conditions *[]wfv1.ExceptionCondition) (results []wfv1.ExceptionResult, err error) { + + for _, condition := range *conditions { + if condition.PatternMatched != "" && condition.PatternUnmatched != "" { + errorMessage := fmt.Sprintf("Error condition %s cannot specify both match and unmatch simultaneously", condition.Name) + err = errors.InternalError(errorMessage) + return + } + + logData, err := we.fetchFileForErrorHandling(condition.Source) + if err != nil { + return nil, err + } + + result := wfv1.ExceptionResult{ + Name: condition.Name, + Message: condition.Message, + PodName: we.PodName, + StepName: we.Template.Name, + } + + if condition.PatternMatched != "" { + regex, err := regexp.Compile(condition.PatternMatched) + if err != nil { + return nil, err + } + regexMatch := regex.Find(logData) + if regexMatch != nil { + + results = append(results, result) + } + } else if condition.PatternUnmatched != "" { + regex, err := regexp.Compile(condition.PatternUnmatched) + if err != nil { + return nil, err + } + regexMatch := regex.Find(logData) + if regexMatch == nil { + results = append(results, result) + } + } + } + + return +} + // isTarball returns whether or not the file is a tarball func isTarball(filePath string) bool { cmd := exec.Command("tar", "-tf", filePath) @@ -681,20 +1026,13 @@ func containerID(ctrID string) string { // Wait is the sidecar container logic which waits for the main container to complete. // Also monitors for updates in the pod annotations which may change (e.g. terminate) // Upon completion, kills any sidecars after it finishes. 
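// The EvaluateConditions / evaluatePatternConditions changes above decide whether an
// error or warning condition fired by running its regex over the fetched log bytes:
// PatternMatched fires when the regex finds a match, PatternUnmatched fires when it
// does not, and specifying both is rejected. A minimal standalone sketch of that
// decision step (conditionFired is a hypothetical helper name, not the executor's API):
package main

import (
	"fmt"
	"regexp"
)

// conditionFired reports whether a single pattern condition is satisfied by logData.
// Exactly one of patternMatched / patternUnmatched is expected to be non-empty.
func conditionFired(patternMatched, patternUnmatched string, logData []byte) (bool, error) {
	if patternMatched != "" && patternUnmatched != "" {
		return false, fmt.Errorf("cannot specify both match and unmatch patterns")
	}
	if patternMatched != "" {
		re, err := regexp.Compile(patternMatched)
		if err != nil {
			return false, err
		}
		return re.Find(logData) != nil, nil
	}
	if patternUnmatched != "" {
		re, err := regexp.Compile(patternUnmatched)
		if err != nil {
			return false, err
		}
		return re.Find(logData) == nil, nil
	}
	return false, nil
}

func main() {
	fired, _ := conditionFired(`(?i)out of memory`, "", []byte("step failed: Out of memory"))
	fmt.Println(fired) // prints: true
}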
-func (we *WorkflowExecutor) Wait() (err error) { - defer func() { - killSidecarsErr := we.killSidecars() - if killSidecarsErr != nil { - log.Errorf("Failed to kill sidecars: %v", killSidecarsErr) - if err == nil { - // set error only if not already set - err = killSidecarsErr - } - } - }() +func (we *WorkflowExecutor) Wait() error { + err := we.RuntimeExecutor.WaitInit() + if err != nil { + return err + } log.Infof("Waiting on main container") - var mainContainerID string - mainContainerID, err = we.waitMainContainerStart() + mainContainerID, err := we.waitMainContainerStart() if err != nil { return err } @@ -706,49 +1044,87 @@ func (we *WorkflowExecutor) Wait() (err error) { go we.monitorDeadline(ctx, annotationUpdatesCh) err = we.RuntimeExecutor.Wait(mainContainerID) + if err != nil { + return err + } log.Infof("Main container completed") - return + return nil } // waitMainContainerStart waits for the main container to start and returns its container ID. func (we *WorkflowExecutor) waitMainContainerStart() (string, error) { for { - ctrStatus, err := we.GetMainContainerStatus() + podsIf := we.ClientSet.CoreV1().Pods(we.Namespace) + fieldSelector := fields.ParseSelectorOrDie(fmt.Sprintf("metadata.name=%s", we.PodName)) + opts := metav1.ListOptions{ + FieldSelector: fieldSelector.String(), + } + watchIf, err := podsIf.Watch(opts) if err != nil { - return "", err - } - if ctrStatus != nil { - log.Debug(ctrStatus) - if ctrStatus.ContainerID != "" { - we.mainContainerID = containerID(ctrStatus.ContainerID) - return containerID(ctrStatus.ContainerID), nil - } else if ctrStatus.State.Waiting == nil && ctrStatus.State.Running == nil && ctrStatus.State.Terminated == nil { - // status still not ready, wait - time.Sleep(1 * time.Second) - } else if ctrStatus.State.Waiting != nil { - // main container is still in waiting status - time.Sleep(1 * time.Second) - } else { - // main container in running or terminated state but missing container ID - return "", errors.InternalError("Main container ID cannot be found") + return "", errors.InternalWrapErrorf(err, "Failed to establish pod watch: %v", err) + } + for watchEv := range watchIf.ResultChan() { + if watchEv.Type == watch.Error { + return "", errors.InternalErrorf("Pod watch error waiting for main to start: %v", watchEv.Object) + } + pod, ok := watchEv.Object.(*apiv1.Pod) + if !ok { + log.Warnf("Pod watch returned non pod object: %v", watchEv.Object) + continue + } + for _, ctrStatus := range pod.Status.ContainerStatuses { + if ctrStatus.Name == common.MainContainerName { + log.Debug(ctrStatus) + if ctrStatus.ContainerID != "" { + we.mainContainerID = containerID(ctrStatus.ContainerID) + return containerID(ctrStatus.ContainerID), nil + } else if ctrStatus.State.Waiting == nil && ctrStatus.State.Running == nil && ctrStatus.State.Terminated == nil { + // status still not ready, wait + } else if ctrStatus.State.Waiting != nil { + // main container is still in waiting status + } else { + // main container in running or terminated state but missing container ID + return "", errors.InternalError("Main container ID cannot be found") + } + } } } + log.Warnf("Pod watch closed unexpectedly") } } +func watchFileChanges(ctx context.Context, pollInterval time.Duration, filePath string) <-chan struct{} { + res := make(chan struct{}) + go func() { + defer close(res) + + var modTime *time.Time + for { + select { + case <-ctx.Done(): + return + default: + } + + file, err := os.Stat(filePath) + if err != nil { + log.Fatal(err) + } + newModTime := file.ModTime() + if 
modTime != nil && !modTime.Equal(file.ModTime()) { + res <- struct{}{} + } + modTime = &newModTime + time.Sleep(pollInterval) + } + }() + return res +} + // monitorAnnotations starts a goroutine which monitors for any changes to the pod annotations. // Emits an event on the returned channel upon any updates func (we *WorkflowExecutor) monitorAnnotations(ctx context.Context) <-chan struct{} { log.Infof("Starting annotations monitor") - // Create a fsnotify watcher on the local annotations file to listen for updates from the Downward API - watcher, err := fsnotify.NewWatcher() - if err != nil { - log.Fatal(err) - } - err = watcher.Add(we.PodAnnotationsPath) - if err != nil { - log.Fatal(err) - } // Create a channel to listen for a SIGUSR2. Upon receiving of the signal, we force reload our annotations // directly from kubernetes API. The controller uses this to fast-track notification of annotations @@ -761,12 +1137,12 @@ func (we *WorkflowExecutor) monitorAnnotations(ctx context.Context) <-chan struc // Create a channel which will notify a listener on new updates to the annotations annotationUpdateCh := make(chan struct{}) + annotationChanges := watchFileChanges(ctx, 10*time.Second, we.PodAnnotationsPath) go func() { for { select { case <-ctx.Done(): log.Infof("Annotations monitor stopped") - _ = watcher.Close() signal.Stop(sigs) close(sigs) close(annotationUpdateCh) @@ -775,7 +1151,7 @@ func (we *WorkflowExecutor) monitorAnnotations(ctx context.Context) <-chan struc log.Infof("Received update signal. Reloading annotations from API") annotationUpdateCh <- struct{}{} we.setExecutionControl() - case <-watcher.Events: + case <-annotationChanges: log.Infof("%s updated", we.PodAnnotationsPath) err := we.LoadExecutionControl() if err != nil { @@ -854,8 +1230,8 @@ func (we *WorkflowExecutor) monitorDeadline(ctx context.Context, annotationsUpda } } -// killSidecars kills any sidecars to the main container -func (we *WorkflowExecutor) killSidecars() error { +// KillSidecars kills any sidecars to the main container +func (we *WorkflowExecutor) KillSidecars() error { if len(we.Template.Sidecars) == 0 { log.Infof("No sidecars") return nil @@ -883,15 +1259,6 @@ func (we *WorkflowExecutor) killSidecars() error { return we.RuntimeExecutor.Kill(sidecarIDs) } -// LoadTemplate reads the template definition from the the Kubernetes downward api annotations volume file -func (we *WorkflowExecutor) LoadTemplate() error { - err := unmarshalAnnotationField(we.PodAnnotationsPath, common.AnnotationKeyTemplate, &we.Template) - if err != nil { - return err - } - return nil -} - // LoadExecutionControl reads the execution control definition from the the Kubernetes downward api annotations volume file func (we *WorkflowExecutor) LoadExecutionControl() error { err := unmarshalAnnotationField(we.PodAnnotationsPath, common.AnnotationKeyExecutionControl, &we.ExecutionControl) @@ -904,6 +1271,16 @@ func (we *WorkflowExecutor) LoadExecutionControl() error { return nil } +// LoadTemplate reads the template definition from the the Kubernetes downward api annotations volume file +func LoadTemplate(path string) (*wfv1.Template, error) { + var tmpl wfv1.Template + err := unmarshalAnnotationField(path, common.AnnotationKeyTemplate, &tmpl) + if err != nil { + return nil, err + } + return &tmpl, nil +} + // unmarshalAnnotationField unmarshals the value of an annotation key into the supplied interface // from the downward api annotation volume file func unmarshalAnnotationField(filePath string, key string, into interface{}) error { 
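// The executor.go changes above drop the fsnotify watcher on the downward API
// annotations file in favor of simple modification-time polling (watchFileChanges).
// A minimal standalone sketch of that polling loop, with hypothetical names and a
// placeholder file path:
package main

import (
	"context"
	"fmt"
	"os"
	"time"
)

// pollFileChanges emits an event on the returned channel whenever the file's
// modification time changes, checking every pollInterval until ctx is cancelled.
func pollFileChanges(ctx context.Context, path string, pollInterval time.Duration) <-chan struct{} {
	events := make(chan struct{})
	go func() {
		defer close(events)
		var lastMod time.Time
		for {
			select {
			case <-ctx.Done():
				return
			case <-time.After(pollInterval):
			}
			info, err := os.Stat(path)
			if err != nil {
				continue // transient stat errors are simply skipped in this sketch
			}
			if !lastMod.IsZero() && !info.ModTime().Equal(lastMod) {
				events <- struct{}{}
			}
			lastMod = info.ModTime()
		}
	}()
	return events
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	for range pollFileChanges(ctx, "/tmp/podannotations", time.Second) {
		fmt.Println("annotations file changed")
	}
}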
diff --git a/workflow/executor/executor_test.go b/workflow/executor/executor_test.go index b6c7350c6265..fbcaf5e4e3b9 100644 --- a/workflow/executor/executor_test.go +++ b/workflow/executor/executor_test.go @@ -3,8 +3,8 @@ package executor import ( "testing" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" - "github.com/argoproj/argo/workflow/executor/mocks" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/workflow/executor/mocks" "github.com/stretchr/testify/assert" "k8s.io/client-go/kubernetes/fake" ) diff --git a/workflow/executor/k8sapi/client.go b/workflow/executor/k8sapi/client.go index 5a949595ac2f..fcc8ab9ae14a 100644 --- a/workflow/executor/k8sapi/client.go +++ b/workflow/executor/k8sapi/client.go @@ -9,9 +9,11 @@ import ( "syscall" "time" - "github.com/argoproj/argo/errors" - "github.com/argoproj/argo/workflow/common" - execcommon "github.com/argoproj/argo/workflow/executor/common" + "github.com/cyrusbiotechnology/argo/util" + + "github.com/cyrusbiotechnology/argo/errors" + "github.com/cyrusbiotechnology/argo/workflow/common" + execcommon "github.com/cyrusbiotechnology/argo/workflow/executor/common" "k8s.io/api/core/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/client-go/kubernetes" @@ -19,14 +21,14 @@ import ( ) type k8sAPIClient struct { - execcommon.KubernetesClientInterface - clientset *kubernetes.Clientset config *restclient.Config podName string namespace string } +var _ execcommon.KubernetesClientInterface = &k8sAPIClient{} + func newK8sAPIClient(clientset *kubernetes.Clientset, config *restclient.Config, podName, namespace string) (*k8sAPIClient, error) { return &k8sAPIClient{ clientset: clientset, @@ -37,7 +39,7 @@ func newK8sAPIClient(clientset *kubernetes.Clientset, config *restclient.Config, } func (c *k8sAPIClient) getFileContents(containerID, sourcePath string) (string, error) { - _, containerStatus, err := c.getContainerStatus(containerID) + _, containerStatus, err := c.GetContainerStatus(containerID) if err != nil { return "", err } @@ -53,8 +55,8 @@ func (c *k8sAPIClient) getFileContents(containerID, sourcePath string) (string, return stdOut.String(), nil } -func (c *k8sAPIClient) createArchive(containerID, sourcePath string) (*bytes.Buffer, error) { - _, containerStatus, err := c.getContainerStatus(containerID) +func (c *k8sAPIClient) CreateArchive(containerID, sourcePath string) (*bytes.Buffer, error) { + _, containerStatus, err := c.GetContainerStatus(containerID) if err != nil { return nil, err } @@ -71,7 +73,7 @@ func (c *k8sAPIClient) createArchive(containerID, sourcePath string) (*bytes.Buf } func (c *k8sAPIClient) getLogsAsStream(containerID string) (io.ReadCloser, error) { - _, containerStatus, err := c.getContainerStatus(containerID) + _, containerStatus, err := c.GetContainerStatus(containerID) if err != nil { return nil, err } @@ -100,7 +102,7 @@ func (c *k8sAPIClient) saveLogs(containerID, path string) error { if err != nil { return errors.InternalWrapError(err) } - defer outFile.Close() + defer util.Close(outFile) _, err = io.Copy(outFile, reader) if err != nil { return errors.InternalWrapError(err) @@ -112,7 +114,7 @@ func (c *k8sAPIClient) getPod() (*v1.Pod, error) { return c.clientset.CoreV1().Pods(c.namespace).Get(c.podName, metav1.GetOptions{}) } -func (c *k8sAPIClient) getContainerStatus(containerID string) (*v1.Pod, *v1.ContainerStatus, error) { +func (c *k8sAPIClient) GetContainerStatus(containerID string) (*v1.Pod, *v1.ContainerStatus, error) { pod, err := 
c.getPod() if err != nil { return nil, nil, err @@ -130,7 +132,7 @@ func (c *k8sAPIClient) waitForTermination(containerID string, timeout time.Durat return execcommon.WaitForTermination(c, containerID, timeout) } -func (c *k8sAPIClient) killContainer(pod *v1.Pod, container *v1.ContainerStatus, sig syscall.Signal) error { +func (c *k8sAPIClient) KillContainer(pod *v1.Pod, container *v1.ContainerStatus, sig syscall.Signal) error { command := []string{"/bin/sh", "-c", fmt.Sprintf("kill -%d 1", sig)} exec, err := common.ExecPodContainer(c.config, c.namespace, c.podName, container.Name, false, false, command...) if err != nil { diff --git a/workflow/executor/k8sapi/k8sapi.go b/workflow/executor/k8sapi/k8sapi.go index 6f3fd932f705..51832f815529 100644 --- a/workflow/executor/k8sapi/k8sapi.go +++ b/workflow/executor/k8sapi/k8sapi.go @@ -1,10 +1,13 @@ package k8sapi import ( - "github.com/argoproj/argo/errors" + "io" + log "github.com/sirupsen/logrus" "k8s.io/client-go/kubernetes" restclient "k8s.io/client-go/rest" + + "github.com/cyrusbiotechnology/argo/errors" ) type K8sAPIExecutor struct { @@ -30,17 +33,16 @@ func (k *K8sAPIExecutor) CopyFile(containerID string, sourcePath string, destPat return errors.Errorf(errors.CodeNotImplemented, "CopyFile() is not implemented in the k8sapi executor.") } -// GetOutput returns the entirety of the container output as a string -// Used to capturing script results as an output parameter -func (k *K8sAPIExecutor) GetOutput(containerID string) (string, error) { +func (k *K8sAPIExecutor) GetOutputStream(containerID string, combinedOutput bool) (io.ReadCloser, error) { log.Infof("Getting output of %s", containerID) - return k.client.getLogs(containerID) + if !combinedOutput { + log.Warn("non combined output unsupported") + } + return k.client.getLogsAsStream(containerID) } -// Logs copies logs to a given path -func (k *K8sAPIExecutor) Logs(containerID, path string) error { - log.Infof("Saving output of %s to %s", containerID, path) - return k.client.saveLogs(containerID, path) +func (k *K8sAPIExecutor) WaitInit() error { + return nil } // Wait for the container to complete diff --git a/workflow/executor/kubelet/client.go b/workflow/executor/kubelet/client.go index 49730c187535..e708acdff71f 100644 --- a/workflow/executor/kubelet/client.go +++ b/workflow/executor/kubelet/client.go @@ -15,11 +15,9 @@ import ( "syscall" "time" - "github.com/argoproj/argo/util" - - "github.com/argoproj/argo/errors" - "github.com/argoproj/argo/workflow/common" - execcommon "github.com/argoproj/argo/workflow/executor/common" + "github.com/cyrusbiotechnology/argo/errors" + "github.com/cyrusbiotechnology/argo/workflow/common" + execcommon "github.com/cyrusbiotechnology/argo/workflow/executor/common" "github.com/gorilla/websocket" log "github.com/sirupsen/logrus" "k8s.io/api/core/v1" @@ -30,8 +28,6 @@ const ( ) type kubeletClient struct { - execcommon.KubernetesClientInterface - httpClient *http.Client httpHeader http.Header websocketDialer *websocket.Dialer @@ -42,6 +38,8 @@ type kubeletClient struct { kubeletEndpoint string } +var _ execcommon.KubernetesClientInterface = &kubeletClient{} + func newKubeletClient() (*kubeletClient, error) { kubeletHost := os.Getenv(common.EnvVarDownwardAPINodeIP) if kubeletHost == "" { @@ -127,6 +125,26 @@ func (k *kubeletClient) getPodList() (*v1.PodList, error) { return podList, resp.Body.Close() } +func (k *kubeletClient) GetLogStream(containerID string) (io.ReadCloser, error) { + podList, err := k.getPodList() + if err != nil { + return nil, err + } + 
for _, pod := range podList.Items { + for _, container := range pod.Status.ContainerStatuses { + if execcommon.GetContainerID(&container) != containerID { + continue + } + resp, err := k.doRequestLogs(pod.Namespace, pod.Name, container.Name) + if err != nil { + return nil, err + } + return resp.Body, nil + } + } + return nil, errors.New(errors.CodeNotFound, fmt.Sprintf("containerID %q is not found in the pod list", containerID)) +} + func (k *kubeletClient) doRequestLogs(namespace, podName, containerName string) (*http.Response, error) { u, err := url.ParseRequestURI(fmt.Sprintf("https://%s/containerLogs/%s/%s/%s", k.kubeletEndpoint, namespace, podName, containerName)) if err != nil { @@ -147,39 +165,7 @@ func (k *kubeletClient) doRequestLogs(namespace, podName, containerName string) return resp, nil } -func (k *kubeletClient) getLogs(namespace, podName, containerName string) (string, error) { - resp, err := k.doRequestLogs(namespace, podName, containerName) - if resp != nil { - defer func() { _ = resp.Body.Close() }() - } - if err != nil { - return "", err - } - b, err := ioutil.ReadAll(resp.Body) - if err != nil { - return "", errors.InternalWrapError(err) - } - return string(b), resp.Body.Close() -} - -func (k *kubeletClient) saveLogsToFile(namespace, podName, containerName, path string) error { - resp, err := k.doRequestLogs(namespace, podName, containerName) - if resp != nil { - defer func() { _ = resp.Body.Close() }() - } - if err != nil { - return err - } - outFile, err := os.Create(path) - if err != nil { - return errors.InternalWrapError(err) - } - defer util.Close(outFile) - _, err = io.Copy(outFile, resp.Body) - return err -} - -func (k *kubeletClient) getContainerStatus(containerID string) (*v1.Pod, *v1.ContainerStatus, error) { +func (k *kubeletClient) GetContainerStatus(containerID string) (*v1.Pod, *v1.ContainerStatus, error) { podList, err := k.getPodList() if err != nil { return nil, nil, errors.InternalWrapError(err) @@ -195,38 +181,6 @@ func (k *kubeletClient) getContainerStatus(containerID string) (*v1.Pod, *v1.Con return nil, nil, errors.New(errors.CodeNotFound, fmt.Sprintf("containerID %q is not found in the pod list", containerID)) } -func (k *kubeletClient) GetContainerLogs(containerID string) (string, error) { - podList, err := k.getPodList() - if err != nil { - return "", errors.InternalWrapError(err) - } - for _, pod := range podList.Items { - for _, container := range pod.Status.ContainerStatuses { - if execcommon.GetContainerID(&container) != containerID { - continue - } - return k.getLogs(pod.Namespace, pod.Name, container.Name) - } - } - return "", errors.New(errors.CodeNotFound, fmt.Sprintf("containerID %q is not found in the pod list", containerID)) -} - -func (k *kubeletClient) SaveLogsToFile(containerID, path string) error { - podList, err := k.getPodList() - if err != nil { - return errors.InternalWrapError(err) - } - for _, pod := range podList.Items { - for _, container := range pod.Status.ContainerStatuses { - if execcommon.GetContainerID(&container) != containerID { - continue - } - return k.saveLogsToFile(pod.Namespace, pod.Name, container.Name, path) - } - } - return errors.New(errors.CodeNotFound, fmt.Sprintf("containerID %q is not found in the pod list", containerID)) -} - func (k *kubeletClient) exec(u *url.URL) (*url.URL, error) { _, resp, err := k.websocketDialer.Dial(u.String(), k.httpHeader) if resp == nil { @@ -288,7 +242,7 @@ func (k *kubeletClient) readFileContents(u *url.URL) (*bytes.Buffer, error) { } // createArchive exec in the given 
containerID and create a tarball of the given sourcePath. Works with directory -func (k *kubeletClient) createArchive(containerID, sourcePath string) (*bytes.Buffer, error) { +func (k *kubeletClient) CreateArchive(containerID, sourcePath string) (*bytes.Buffer, error) { return k.getCommandOutput(containerID, fmt.Sprintf("command=tar&command=-cf&command=-&command=%s&output=1", sourcePath)) } @@ -330,7 +284,7 @@ func (k *kubeletClient) WaitForTermination(containerID string, timeout time.Dura return execcommon.WaitForTermination(k, containerID, timeout) } -func (k *kubeletClient) killContainer(pod *v1.Pod, container *v1.ContainerStatus, sig syscall.Signal) error { +func (k *kubeletClient) KillContainer(pod *v1.Pod, container *v1.ContainerStatus, sig syscall.Signal) error { u, err := url.ParseRequestURI(fmt.Sprintf("wss://%s/exec/%s/%s/%s?command=/bin/sh&&command=-c&command=kill+-%d+1&output=1&error=1", k.kubeletEndpoint, pod.Namespace, pod.Name, container.Name, sig)) if err != nil { return errors.InternalWrapError(err) diff --git a/workflow/executor/kubelet/kubelet.go b/workflow/executor/kubelet/kubelet.go index 6cd8f9a482f0..41b9d6689fd0 100644 --- a/workflow/executor/kubelet/kubelet.go +++ b/workflow/executor/kubelet/kubelet.go @@ -1,7 +1,9 @@ package kubelet import ( - "github.com/argoproj/argo/errors" + "io" + + "github.com/cyrusbiotechnology/argo/errors" log "github.com/sirupsen/logrus" ) @@ -28,15 +30,15 @@ func (k *KubeletExecutor) CopyFile(containerID string, sourcePath string, destPa return errors.Errorf(errors.CodeNotImplemented, "CopyFile() is not implemented in the kubelet executor.") } -// GetOutput returns the entirety of the container output as a string -// Used to capturing script results as an output parameter -func (k *KubeletExecutor) GetOutput(containerID string) (string, error) { - return k.cli.GetContainerLogs(containerID) +func (k *KubeletExecutor) GetOutputStream(containerID string, combinedOutput bool) (io.ReadCloser, error) { + if !combinedOutput { + log.Warn("non combined output unsupported") + } + return k.cli.GetLogStream(containerID) } -// Logs copies logs to a given path -func (k *KubeletExecutor) Logs(containerID, path string) error { - return k.cli.SaveLogsToFile(containerID, path) +func (k *KubeletExecutor) WaitInit() error { + return nil } // Wait for the container to complete diff --git a/workflow/executor/mocks/ContainerRuntimeExecutor.go b/workflow/executor/mocks/ContainerRuntimeExecutor.go index df574d2da817..55046f8fe877 100644 --- a/workflow/executor/mocks/ContainerRuntimeExecutor.go +++ b/workflow/executor/mocks/ContainerRuntimeExecutor.go @@ -1,6 +1,8 @@ -// Code generated by mockery v1.0.0 +// Code generated by mockery v1.0.0. DO NOT EDIT. 
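// The regenerated mock below reflects the ContainerRuntimeExecutor interface change:
// GetOutput and Logs are replaced by GetOutputStream(containerID, combinedOutput),
// which returns an io.ReadCloser. A hedged sketch of how a test could stub the new
// method with testify (the test body and values are assumptions, not taken from this patch):
package executor_test

import (
	"io/ioutil"
	"strings"
	"testing"

	"github.com/stretchr/testify/assert"

	"github.com/cyrusbiotechnology/argo/workflow/executor/mocks"
)

func TestGetOutputStreamStub(t *testing.T) {
	mockRuntime := &mocks.ContainerRuntimeExecutor{}
	mockRuntime.On("GetOutputStream", "main-ctr-id", false).
		Return(ioutil.NopCloser(strings.NewReader("hello\n")), nil)

	rc, err := mockRuntime.GetOutputStream("main-ctr-id", false)
	assert.NoError(t, err)
	out, err := ioutil.ReadAll(rc)
	assert.NoError(t, err)
	assert.Equal(t, "hello\n", string(out))
}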
+ package mocks +import io "io" import mock "github.com/stretchr/testify/mock" // ContainerRuntimeExecutor is an autogenerated mock type for the ContainerRuntimeExecutor type @@ -43,20 +45,22 @@ func (_m *ContainerRuntimeExecutor) GetFileContents(containerID string, sourcePa return r0, r1 } -// GetOutput provides a mock function with given fields: containerID -func (_m *ContainerRuntimeExecutor) GetOutput(containerID string) (string, error) { - ret := _m.Called(containerID) +// GetOutputStream provides a mock function with given fields: containerID, combinedOutput +func (_m *ContainerRuntimeExecutor) GetOutputStream(containerID string, combinedOutput bool) (io.ReadCloser, error) { + ret := _m.Called(containerID, combinedOutput) - var r0 string - if rf, ok := ret.Get(0).(func(string) string); ok { - r0 = rf(containerID) + var r0 io.ReadCloser + if rf, ok := ret.Get(0).(func(string, bool) io.ReadCloser); ok { + r0 = rf(containerID, combinedOutput) } else { - r0 = ret.Get(0).(string) + if ret.Get(0) != nil { + r0 = ret.Get(0).(io.ReadCloser) + } } var r1 error - if rf, ok := ret.Get(1).(func(string) error); ok { - r1 = rf(containerID) + if rf, ok := ret.Get(1).(func(string, bool) error); ok { + r1 = rf(containerID, combinedOutput) } else { r1 = ret.Error(1) } @@ -78,13 +82,13 @@ func (_m *ContainerRuntimeExecutor) Kill(containerIDs []string) error { return r0 } -// Logs provides a mock function with given fields: containerID, path -func (_m *ContainerRuntimeExecutor) Logs(containerID string, path string) error { - ret := _m.Called(containerID, path) +// Wait provides a mock function with given fields: containerID +func (_m *ContainerRuntimeExecutor) Wait(containerID string) error { + ret := _m.Called(containerID) var r0 error - if rf, ok := ret.Get(0).(func(string, string) error); ok { - r0 = rf(containerID, path) + if rf, ok := ret.Get(0).(func(string) error); ok { + r0 = rf(containerID) } else { r0 = ret.Error(0) } @@ -92,13 +96,13 @@ func (_m *ContainerRuntimeExecutor) Logs(containerID string, path string) error return r0 } -// Wait provides a mock function with given fields: containerID -func (_m *ContainerRuntimeExecutor) Wait(containerID string) error { - ret := _m.Called(containerID) +// WaitInit provides a mock function with given fields: +func (_m *ContainerRuntimeExecutor) WaitInit() error { + ret := _m.Called() var r0 error - if rf, ok := ret.Get(0).(func(string) error); ok { - r0 = rf(containerID) + if rf, ok := ret.Get(0).(func() error); ok { + r0 = rf() } else { r0 = ret.Error(0) } diff --git a/workflow/executor/pns/pns.go b/workflow/executor/pns/pns.go new file mode 100644 index 000000000000..a85b59e9a388 --- /dev/null +++ b/workflow/executor/pns/pns.go @@ -0,0 +1,385 @@ +package pns + +import ( + "bufio" + "fmt" + "io" + "io/ioutil" + "os" + "strings" + "sync" + "syscall" + "time" + + executil "github.com/argoproj/pkg/exec" + gops "github.com/mitchellh/go-ps" + log "github.com/sirupsen/logrus" + v1 "k8s.io/api/core/v1" + "k8s.io/client-go/kubernetes" + + "github.com/cyrusbiotechnology/argo/errors" + "github.com/cyrusbiotechnology/argo/util/archive" + "github.com/cyrusbiotechnology/argo/workflow/common" + execcommon "github.com/cyrusbiotechnology/argo/workflow/executor/common" +) + +type PNSExecutor struct { + clientset *kubernetes.Clientset + podName string + namespace string + + // ctrIDToPid maps a containerID to a process ID + ctrIDToPid map[string]int + // pidToCtrID maps a process ID to a container ID + pidToCtrID map[int]string + + // pidFileHandles holds file handles to 
all root containers + pidFileHandles map[int]*fileInfo + + // thisPID is the pid of this process + thisPID int + // mainPID holds the main container's pid + mainPID int + // mainFS holds a file descriptor to the main filesystem, allowing the executor to access the + // filesystem after the main process exited + mainFS *os.File + // rootFS holds a file descriptor to the root filesystem, allowing the executor to exit out of a chroot + rootFS *os.File + // debug enables additional debugging + debug bool + // hasOutputs indicates if the template has outputs. determines if we need to + hasOutputs bool +} + +type fileInfo struct { + file os.File + info os.FileInfo +} + +func NewPNSExecutor(clientset *kubernetes.Clientset, podName, namespace string, hasOutputs bool) (*PNSExecutor, error) { + thisPID := os.Getpid() + log.Infof("Creating PNS executor (namespace: %s, pod: %s, pid: %d, hasOutputs: %v)", namespace, podName, thisPID, hasOutputs) + if thisPID == 1 { + return nil, errors.New(errors.CodeBadRequest, "process namespace sharing is not enabled on pod") + } + return &PNSExecutor{ + clientset: clientset, + podName: podName, + namespace: namespace, + ctrIDToPid: make(map[string]int), + pidToCtrID: make(map[int]string), + pidFileHandles: make(map[int]*fileInfo), + thisPID: thisPID, + debug: log.GetLevel() == log.DebugLevel, + hasOutputs: hasOutputs, + }, nil +} + +func (p *PNSExecutor) GetFileContents(containerID string, sourcePath string) (string, error) { + err := p.enterChroot() + if err != nil { + return "", err + } + defer func() { _ = p.exitChroot() }() + out, err := ioutil.ReadFile(sourcePath) + if err != nil { + return "", err + } + return string(out), nil +} + +// enterChroot enters chroot of the main container +func (p *PNSExecutor) enterChroot() error { + if p.mainFS == nil { + return errors.InternalErrorf("could not chroot into main for artifact collection: container may have exited too quickly") + } + if err := p.mainFS.Chdir(); err != nil { + return errors.InternalWrapErrorf(err, "failed to chdir to main filesystem: %v", err) + } + err := syscall.Chroot(".") + if err != nil { + return errors.InternalWrapErrorf(err, "failed to chroot to main filesystem: %v", err) + } + return nil +} + +// exitChroot exits chroot +func (p *PNSExecutor) exitChroot() error { + if err := p.rootFS.Chdir(); err != nil { + return errors.InternalWrapError(err) + } + err := syscall.Chroot(".") + if err != nil { + return errors.InternalWrapError(err) + } + return nil +} + +// CopyFile copies a source file in a container to a local path +func (p *PNSExecutor) CopyFile(containerID string, sourcePath string, destPath string) (err error) { + destFile, err := os.Create(destPath) + if err != nil { + return err + } + defer func() { + // exit chroot and close the file. preserve the original error + deferErr := p.exitChroot() + if err == nil && deferErr != nil { + err = errors.InternalWrapError(deferErr) + } + deferErr = destFile.Close() + if err == nil && deferErr != nil { + err = errors.InternalWrapError(deferErr) + } + }() + w := bufio.NewWriter(destFile) + err = p.enterChroot() + if err != nil { + return err + } + + err = archive.TarGzToWriter(sourcePath, w) + if err != nil { + return err + } + + return nil +} + +func (p *PNSExecutor) WaitInit() error { + if !p.hasOutputs { + return nil + } + go p.pollRootProcesses(time.Minute) + // Secure a filehandle on our own root. This is because we will chroot back and forth from + // the main container's filesystem, to our own. 
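// The file handle secured on "/" here is what makes the chroot round-trips above
// reversible: calling Chdir on a saved *os.File followed by syscall.Chroot(".")
// enters a root such as /proc/<pid>/root, and the same trick on the handle for the
// original root exits it again. A minimal sketch of that round-trip in isolation
// (requires root privileges; withRoot and the target path are hypothetical):
package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"syscall"
)

// withRoot runs fn with the process root temporarily switched to newRoot
// (e.g. an open handle on "/proc/<pid>/root"), then restores the original root.
func withRoot(newRoot, origRoot *os.File, fn func() error) error {
	if err := newRoot.Chdir(); err != nil {
		return err
	}
	if err := syscall.Chroot("."); err != nil {
		return err
	}
	fnErr := fn()
	// Escape back out using the handle opened before the first chroot.
	if err := origRoot.Chdir(); err != nil {
		return err
	}
	if err := syscall.Chroot("."); err != nil {
		return err
	}
	return fnErr
}

func main() {
	origRoot, err := os.Open("/") // must be opened before the first chroot
	if err != nil {
		panic(err)
	}
	defer origRoot.Close()
	target, err := os.Open("/proc/1/root") // placeholder target root
	if err != nil {
		panic(err)
	}
	defer target.Close()
	err = withRoot(target, origRoot, func() error {
		data, err := ioutil.ReadFile("/etc/os-release")
		if err != nil {
			return err
		}
		fmt.Print(string(data))
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}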
+ rootFS, err := os.Open("/") + if err != nil { + return errors.InternalWrapError(err) + } + p.rootFS = rootFS + return nil +} + +// Wait for the container to complete +func (p *PNSExecutor) Wait(containerID string) error { + mainPID, err := p.getContainerPID(containerID) + if err != nil { + if !p.hasOutputs { + log.Warnf("Ignoring wait failure: %v. Process assumed to have completed", err) + return nil + } + return err + } + log.Infof("Main pid identified as %d", mainPID) + p.mainPID = mainPID + for pid, f := range p.pidFileHandles { + if pid == p.mainPID { + log.Info("Successfully secured file handle on main container root filesystem") + p.mainFS = &f.file + } else { + log.Infof("Closing root filehandle for non-main pid %d", pid) + _ = f.file.Close() + } + } + if p.mainFS == nil { + log.Warn("Failed to secure file handle on main container's root filesystem. Output artifacts from base image layer will fail") + } + + // wait for pid to complete + log.Infof("Waiting for main pid %d to complete", mainPID) + err = executil.WaitPID(mainPID) + if err != nil { + return err + } + log.Infof("Main pid %d completed", mainPID) + return nil +} + +// pollRootProcesses will poll /proc for root pids (pids without parents) in a tight loop, for the +// purpose of securing an open file handle against /proc//root as soon as possible. +// It opens file handles on all root pids because at this point, we do not yet know which pid is the +// "main" container. +// Polling is necessary because it is not possible to use something like fsnotify against procfs. +func (p *PNSExecutor) pollRootProcesses(timeout time.Duration) { + log.Warnf("Polling root processes (%v)", timeout) + deadline := time.Now().Add(timeout) + for { + p.updateCtrIDMap() + if p.mainFS != nil { + log.Info("Stopped root processes polling due to successful securing of main root fs") + break + } + if time.Now().After(deadline) { + log.Warnf("Polling root processes timed out (%v)", timeout) + break + } + time.Sleep(50 * time.Millisecond) + } +} + +func (p *PNSExecutor) GetOutputStream(containerID string, combinedOutput bool) (io.ReadCloser, error) { + if !combinedOutput { + log.Warn("non combined output unsupported") + } + opts := v1.PodLogOptions{ + Container: common.MainContainerName, + } + return p.clientset.CoreV1().Pods(p.namespace).GetLogs(p.podName, &opts).Stream() +} + +// Kill a list of containerIDs first with a SIGTERM then with a SIGKILL after a grace period +func (p *PNSExecutor) Kill(containerIDs []string) error { + var asyncErr error + wg := sync.WaitGroup{} + for _, cid := range containerIDs { + wg.Add(1) + go func(containerID string) { + err := p.killContainer(containerID) + if err != nil && asyncErr != nil { + asyncErr = err + } + wg.Done() + }(cid) + } + wg.Wait() + return asyncErr +} + +func (p *PNSExecutor) killContainer(containerID string) error { + pid, err := p.getContainerPID(containerID) + if err != nil { + log.Warnf("Ignoring kill container failure of %s: %v. Process assumed to have completed", containerID, err) + return nil + } + // On Unix systems, FindProcess always succeeds and returns a Process + // for the given pid, regardless of whether the process exists. 
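// Because FindProcess never fails on Unix, the only way to learn whether a pid is
// actually alive is to send it a signal; signal 0 performs the existence/permission
// check without delivering anything. A small sketch of that liveness probe
// (pidAlive is a hypothetical helper, not used by the patch):
package main

import (
	"fmt"
	"os"
	"syscall"
)

// pidAlive reports whether a process with the given pid currently exists.
func pidAlive(pid int) bool {
	proc, err := os.FindProcess(pid)
	if err != nil {
		return false // never happens on Unix; kept for portability
	}
	return proc.Signal(syscall.Signal(0)) == nil
}

func main() {
	fmt.Println(pidAlive(os.Getpid())) // true: our own pid always exists
}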
+ proc, _ := os.FindProcess(pid) + log.Infof("Sending SIGTERM to pid %d", pid) + err = proc.Signal(syscall.SIGTERM) + if err != nil { + log.Warnf("Failed to SIGTERM pid %d: %v", pid, err) + } + + waitPIDOpts := executil.WaitPIDOpts{Timeout: execcommon.KillGracePeriod * time.Second} + err = executil.WaitPID(pid, waitPIDOpts) + if err == nil { + log.Infof("PID %d completed", pid) + return nil + } + if err != executil.ErrWaitPIDTimeout { + return err + } + log.Warnf("Timed out (%v) waiting for pid %d to complete after SIGTERM. Issing SIGKILL", waitPIDOpts.Timeout, pid) + time.Sleep(30 * time.Minute) + err = proc.Signal(syscall.SIGKILL) + if err != nil { + log.Warnf("Failed to SIGKILL pid %d: %v", pid, err) + } + return err +} + +// getContainerPID returns the pid associated with the container id. Returns error if it was unable +// to be determined because no running root processes exist with that container ID +func (p *PNSExecutor) getContainerPID(containerID string) (int, error) { + pid, ok := p.ctrIDToPid[containerID] + if ok { + return pid, nil + } + p.updateCtrIDMap() + pid, ok = p.ctrIDToPid[containerID] + if !ok { + return -1, errors.InternalErrorf("Failed to determine pid for containerID %s: container may have exited too quickly", containerID) + } + return pid, nil +} + +// updateCtrIDMap updates the mapping between container IDs to PIDs +func (p *PNSExecutor) updateCtrIDMap() { + allProcs, err := gops.Processes() + if err != nil { + log.Warnf("Failed to list processes: %v", err) + return + } + for _, proc := range allProcs { + pid := proc.Pid() + if pid == 1 || pid == p.thisPID || proc.PPid() != 0 { + // ignore the pause container, our own pid, and non-root processes + continue + } + + // Useful code for debugging: + if p.debug { + if data, err := ioutil.ReadFile(fmt.Sprintf("/proc/%d/root", pid) + "/etc/os-release"); err == nil { + log.Infof("pid %d: %s", pid, string(data)) + _, _ = parseContainerID(pid) + } + } + + if p.hasOutputs && p.mainFS == nil { + rootPath := fmt.Sprintf("/proc/%d/root", pid) + currInfo, err := os.Stat(rootPath) + if err != nil { + log.Warnf("Failed to stat %s: %v", rootPath, err) + continue + } + log.Infof("pid %d: %v", pid, currInfo) + prevInfo := p.pidFileHandles[pid] + + // Secure the root filehandle of the process. NOTE if the file changed, it means that + // the main container may have switched (e.g. 
gone from busybox to the user's container) + if prevInfo == nil || !os.SameFile(prevInfo.info, currInfo) { + fs, err := os.Open(rootPath) + if err != nil { + log.Warnf("Failed to open %s: %v", rootPath, err) + continue + } + log.Infof("Secured filehandle on %s", rootPath) + p.pidFileHandles[pid] = &fileInfo{ + info: currInfo, + file: *fs, + } + if prevInfo != nil { + _ = prevInfo.file.Close() + } + } + } + + // Update maps of pids to container ids + if _, ok := p.pidToCtrID[pid]; !ok { + containerID, err := parseContainerID(pid) + if err != nil { + log.Warnf("Failed to identify containerID for process %d", pid) + continue + } + log.Infof("containerID %s mapped to pid %d", containerID, pid) + p.ctrIDToPid[containerID] = pid + p.pidToCtrID[pid] = containerID + } + } +} + +// parseContainerID parses the containerID of a pid +func parseContainerID(pid int) (string, error) { + cgroupPath := fmt.Sprintf("/proc/%d/cgroup", pid) + cgroupFile, err := os.OpenFile(cgroupPath, os.O_RDONLY, os.ModePerm) + if err != nil { + return "", errors.InternalWrapError(err) + } + defer func() { _ = cgroupFile.Close() }() + sc := bufio.NewScanner(cgroupFile) + for sc.Scan() { + // See https://www.systutorials.com/docs/linux/man/5-proc/ for /proc/XX/cgroup format. e.g.: + // 5:cpuacct,cpu,cpuset:/daemons + line := sc.Text() + log.Debugf("pid %d: %s", pid, line) + parts := strings.Split(line, "/") + if len(parts) > 1 { + if containerID := parts[len(parts)-1]; containerID != "" { + // need to check for empty string because the line may look like: 5:rdma:/ + return containerID, nil + } + } + } + return "", errors.InternalErrorf("Failed to parse container ID from %s", cgroupPath) +} diff --git a/workflow/executor/resource.go b/workflow/executor/resource.go index 086834509f70..7345f99d6cca 100644 --- a/workflow/executor/resource.go +++ b/workflow/executor/resource.go @@ -3,12 +3,16 @@ package executor import ( "bufio" "bytes" + "encoding/json" "fmt" + "io/ioutil" "os/exec" "strings" "time" - "github.com/argoproj/argo/errors" + "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" + + "github.com/cyrusbiotechnology/argo/errors" log "github.com/sirupsen/logrus" "github.com/tidwall/gjson" "k8s.io/apimachinery/pkg/labels" @@ -16,28 +20,58 @@ import ( ) // ExecResource will run kubectl action against a manifest -func (we *WorkflowExecutor) ExecResource(action string, manifestPath string) (string, error) { +func (we *WorkflowExecutor) ExecResource(action string, manifestPath string, isDelete bool) (string, string, error) { args := []string{ action, } - if action == "delete" { + output := "json" + if isDelete { args = append(args, "--ignore-not-found") + output = "name" + } + + if action == "patch" { + mergeStrategy := "strategic" + if we.Template.Resource.MergeStrategy != "" { + mergeStrategy = we.Template.Resource.MergeStrategy + } + + args = append(args, "--type") + args = append(args, mergeStrategy) + + args = append(args, "-p") + buff, err := ioutil.ReadFile(manifestPath) + + if err != nil { + return "", "", errors.New(errors.CodeBadRequest, err.Error()) + } + + args = append(args, string(buff)) } + args = append(args, "-f") args = append(args, manifestPath) args = append(args, "-o") - args = append(args, "name") + args = append(args, output) cmd := exec.Command("kubectl", args...) 
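// For the new "patch" action the argument list assembled above invokes kubectl with
// the manifest passed both as the patch body (-p) and as the target (-f), roughly
// `kubectl patch --type strategic -p <manifest contents> -f manifest.yaml -o json`.
// A small sketch of that assembly with hypothetical inputs (buildPatchArgs is not a
// function in this patch):
package main

import (
	"fmt"
	"io/ioutil"
	"strings"
)

// buildPatchArgs mirrors the kubectl argument construction for the patch action.
func buildPatchArgs(mergeStrategy, manifestPath string) ([]string, error) {
	if mergeStrategy == "" {
		mergeStrategy = "strategic"
	}
	body, err := ioutil.ReadFile(manifestPath)
	if err != nil {
		return nil, err
	}
	return []string{
		"patch",
		"--type", mergeStrategy,
		"-p", string(body),
		"-f", manifestPath,
		"-o", "json",
	}, nil
}

func main() {
	args, err := buildPatchArgs("merge", "/tmp/manifest.yaml")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kubectl " + strings.Join(args, " "))
}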
log.Info(strings.Join(cmd.Args, " ")) out, err := cmd.Output() if err != nil { exErr := err.(*exec.ExitError) errMsg := strings.TrimSpace(string(exErr.Stderr)) - return "", errors.New(errors.CodeBadRequest, errMsg) + return "", "", errors.New(errors.CodeBadRequest, errMsg) + } + if action == "delete" { + return "", "", nil + } + obj := unstructured.Unstructured{} + err = json.Unmarshal(out, &obj) + if err != nil { + return "", "", err } - resourceName := strings.TrimSpace(string(out)) - log.Infof(resourceName) - return resourceName, nil + resourceName := fmt.Sprintf("%s.%s/%s", obj.GroupVersionKind().Kind, obj.GroupVersionKind().Group, obj.GetName()) + log.Infof("%s/%s", obj.GetNamespace(), resourceName) + return obj.GetNamespace(), resourceName, nil } // gjsonLabels is an implementation of labels.Labels interface @@ -58,7 +92,7 @@ func (g gjsonLabels) Get(label string) string { } // WaitResource waits for a specific resource to satisfy either the success or failure condition -func (we *WorkflowExecutor) WaitResource(resourceName string) error { +func (we *WorkflowExecutor) WaitResource(resourceNamespace string, resourceName string) error { if we.Template.Resource.SuccessCondition == "" && we.Template.Resource.FailureCondition == "" { return nil } @@ -82,12 +116,11 @@ func (we *WorkflowExecutor) WaitResource(resourceName string) error { failReqs, _ = failSelector.Requirements() } - // Start the condition result reader using ExponentialBackoff - // Exponential backoff is for steps of 0, 5, 20, 80, 320 seconds since the first step is without - // delay in the ExponentialBackoff - err := wait.ExponentialBackoff(wait.Backoff{Duration: (time.Second * 5), Factor: 4.0, Steps: 5}, + // Start the condition result reader using PollImmediateInfinite + // Poll intervall of 5 seconds serves as a backoff intervall in case of immediate result reader failure + err := wait.PollImmediateInfinite(time.Second*5, func() (bool, error) { - isErrRetry, err := checkResourceState(resourceName, successReqs, failReqs) + isErrRetry, err := checkResourceState(resourceNamespace, resourceName, successReqs, failReqs) if err == nil { log.Infof("Returning from successful wait for resource %s", resourceName) @@ -115,9 +148,9 @@ func (we *WorkflowExecutor) WaitResource(resourceName string) error { } // Function to do the kubectl get -w command and then waiting on json reading. -func checkResourceState(resourceName string, successReqs labels.Requirements, failReqs labels.Requirements) (bool, error) { +func checkResourceState(resourceNamespace string, resourceName string, successReqs labels.Requirements, failReqs labels.Requirements) (bool, error) { - cmd, reader, err := startKubectlWaitCmd(resourceName) + cmd, reader, err := startKubectlWaitCmd(resourceNamespace, resourceName) if err != nil { return false, err } @@ -180,8 +213,12 @@ func checkResourceState(resourceName string, successReqs labels.Requirements, fa } // Start Kubectl command Get with -w return error if unable to start command -func startKubectlWaitCmd(resourceName string) (*exec.Cmd, *bufio.Reader, error) { - cmd := exec.Command("kubectl", "get", resourceName, "-w", "-o", "json") +func startKubectlWaitCmd(resourceNamespace string, resourceName string) (*exec.Cmd, *bufio.Reader, error) { + args := []string{"get", resourceName, "-w", "-o", "json"} + if resourceNamespace != "" { + args = append(args, "-n", resourceNamespace) + } + cmd := exec.Command("kubectl", args...) 
stdout, err := cmd.StdoutPipe() if err != nil { return nil, nil, errors.InternalWrapError(err) @@ -217,7 +254,7 @@ func readJSON(reader *bufio.Reader) ([]byte, error) { } // SaveResourceParameters will save any resource output parameters -func (we *WorkflowExecutor) SaveResourceParameters(resourceName string) error { +func (we *WorkflowExecutor) SaveResourceParameters(resourceNamespace string, resourceName string) error { if len(we.Template.Outputs.Parameters) == 0 { log.Infof("No output parameters") return nil @@ -229,9 +266,17 @@ func (we *WorkflowExecutor) SaveResourceParameters(resourceName string) error { } var cmd *exec.Cmd if param.ValueFrom.JSONPath != "" { - cmd = exec.Command("kubectl", "get", resourceName, "-o", fmt.Sprintf("jsonpath='%s'", param.ValueFrom.JSONPath)) + args := []string{"get", resourceName, "-o", fmt.Sprintf("jsonpath=%s", param.ValueFrom.JSONPath)} + if resourceNamespace != "" { + args = append(args, "-n", resourceNamespace) + } + cmd = exec.Command("kubectl", args...) } else if param.ValueFrom.JQFilter != "" { - cmdStr := fmt.Sprintf("kubectl get %s -o json | jq -c '%s'", resourceName, param.ValueFrom.JQFilter) + resArgs := []string{resourceName} + if resourceNamespace != "" { + resArgs = append(resArgs, "-n", resourceNamespace) + } + cmdStr := fmt.Sprintf("kubectl get %s -o json | jq -c '%s'", strings.Join(resArgs, " "), param.ValueFrom.JQFilter) cmd = exec.Command("sh", "-c", cmdStr) } else { continue diff --git a/workflow/metrics/collector.go b/workflow/metrics/collector.go index 960074d73cc4..52590b3051e4 100644 --- a/workflow/metrics/collector.go +++ b/workflow/metrics/collector.go @@ -7,8 +7,8 @@ import ( "github.com/prometheus/client_golang/prometheus" "k8s.io/client-go/tools/cache" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" - "github.com/argoproj/argo/workflow/util" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/workflow/util" ) var ( @@ -112,16 +112,12 @@ func (wc *workflowCollector) collectWorkflow(ch chan<- prometheus.Metric, wf wfv addGauge(descWorkflowInfo, 1, wf.Spec.Entrypoint, wf.Spec.ServiceAccountName, joinTemplates(wf.Spec.Templates)) - if phase := wf.Status.Phase; phase != "" { - // TODO: we do not have queuing feature yet so are not adding to a 'Pending' guague. - // Uncomment when we support queueing. 
- //addGauge(descWorkflowStatusPhase, boolFloat64(phase == wfv1.NodePending), string(wfv1.NodePending)) - addGauge(descWorkflowStatusPhase, boolFloat64(phase == wfv1.NodeRunning), string(wfv1.NodeRunning)) - addGauge(descWorkflowStatusPhase, boolFloat64(phase == wfv1.NodeSucceeded), string(wfv1.NodeSucceeded)) - addGauge(descWorkflowStatusPhase, boolFloat64(phase == wfv1.NodeSkipped), string(wfv1.NodeSkipped)) - addGauge(descWorkflowStatusPhase, boolFloat64(phase == wfv1.NodeFailed), string(wfv1.NodeFailed)) - addGauge(descWorkflowStatusPhase, boolFloat64(phase == wfv1.NodeError), string(wfv1.NodeError)) - } + addGauge(descWorkflowStatusPhase, boolFloat64(wf.Status.Phase == wfv1.NodePending || wf.Status.Phase == ""), string(wfv1.NodePending)) + addGauge(descWorkflowStatusPhase, boolFloat64(wf.Status.Phase == wfv1.NodeRunning), string(wfv1.NodeRunning)) + addGauge(descWorkflowStatusPhase, boolFloat64(wf.Status.Phase == wfv1.NodeSucceeded), string(wfv1.NodeSucceeded)) + addGauge(descWorkflowStatusPhase, boolFloat64(wf.Status.Phase == wfv1.NodeSkipped), string(wfv1.NodeSkipped)) + addGauge(descWorkflowStatusPhase, boolFloat64(wf.Status.Phase == wfv1.NodeFailed), string(wfv1.NodeFailed)) + addGauge(descWorkflowStatusPhase, boolFloat64(wf.Status.Phase == wfv1.NodeError), string(wfv1.NodeError)) if !wf.CreationTimestamp.IsZero() { addGauge(descWorkflowCreated, float64(wf.CreationTimestamp.Unix())) diff --git a/workflow/metrics/server.go b/workflow/metrics/server.go index 65153e23a044..f2ed0bf63d66 100644 --- a/workflow/metrics/server.go +++ b/workflow/metrics/server.go @@ -2,6 +2,7 @@ package metrics import ( "context" + "fmt" "net/http" "github.com/prometheus/client_golang/prometheus" @@ -20,7 +21,7 @@ type PrometheusConfig struct { func RunServer(ctx context.Context, config PrometheusConfig, registry *prometheus.Registry) { mux := http.NewServeMux() mux.Handle(config.Path, promhttp.HandlerFor(registry, promhttp.HandlerOpts{})) - srv := &http.Server{Addr: config.Port, Handler: mux} + srv := &http.Server{Addr: fmt.Sprintf(":%s", config.Port), Handler: mux} defer func() { if cerr := srv.Close(); cerr != nil { @@ -28,7 +29,7 @@ func RunServer(ctx context.Context, config PrometheusConfig, registry *prometheu } }() - log.Infof("Starting prometheus metrics server at 0.0.0.0%s%s", config.Port, config.Path) + log.Infof("Starting prometheus metrics server at 0.0.0.0:%s%s", config.Port, config.Path) if err := srv.ListenAndServe(); err != nil { panic(err) } diff --git a/workflow/ttlcontroller/ttlcontroller.go b/workflow/ttlcontroller/ttlcontroller.go index 91d2b992bd52..320da041872b 100644 --- a/workflow/ttlcontroller/ttlcontroller.go +++ b/workflow/ttlcontroller/ttlcontroller.go @@ -17,10 +17,10 @@ import ( "k8s.io/client-go/tools/cache" "k8s.io/client-go/util/workqueue" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" - wfclientset "github.com/argoproj/argo/pkg/client/clientset/versioned" - "github.com/argoproj/argo/workflow/common" - "github.com/argoproj/argo/workflow/util" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + wfclientset "github.com/cyrusbiotechnology/argo/pkg/client/clientset/versioned" + "github.com/cyrusbiotechnology/argo/workflow/common" + "github.com/cyrusbiotechnology/argo/workflow/util" ) const ( @@ -130,7 +130,12 @@ func (c *Controller) processNextWorkItem() bool { // enqueueWF conditionally queues a workflow to the ttl queue if it is within the deletion period func (c *Controller) enqueueWF(obj interface{}) { - wf, err := 
util.FromUnstructured(obj.(*unstructured.Unstructured)) + un, ok := obj.(*unstructured.Unstructured) + if !ok { + log.Warnf("'%v' is not an unstructured", obj) + return + } + wf, err := util.FromUnstructured(un) if err != nil { log.Warnf("Failed to unmarshal workflow %v object: %v", obj, err) return diff --git a/workflow/ttlcontroller/ttlcontroller_test.go b/workflow/ttlcontroller/ttlcontroller_test.go index 03d13ed3b2c2..ee3b1fdbfb23 100644 --- a/workflow/ttlcontroller/ttlcontroller_test.go +++ b/workflow/ttlcontroller/ttlcontroller_test.go @@ -4,9 +4,9 @@ import ( "testing" "time" - fakewfclientset "github.com/argoproj/argo/pkg/client/clientset/versioned/fake" - "github.com/argoproj/argo/test" - "github.com/argoproj/argo/workflow/util" + fakewfclientset "github.com/cyrusbiotechnology/argo/pkg/client/clientset/versioned/fake" + "github.com/cyrusbiotechnology/argo/test" + "github.com/cyrusbiotechnology/argo/workflow/util" "github.com/stretchr/testify/assert" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" diff --git a/workflow/util/util.go b/workflow/util/util.go index 40d0bb4ffe24..aaa2911472be 100644 --- a/workflow/util/util.go +++ b/workflow/util/util.go @@ -26,23 +26,25 @@ import ( "k8s.io/client-go/kubernetes" "k8s.io/client-go/rest" "k8s.io/client-go/tools/cache" - - "github.com/argoproj/argo/errors" - "github.com/argoproj/argo/pkg/apis/workflow" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" - "github.com/argoproj/argo/pkg/client/clientset/versioned/typed/workflow/v1alpha1" - cmdutil "github.com/argoproj/argo/util/cmd" - "github.com/argoproj/argo/util/retry" - unstructutil "github.com/argoproj/argo/util/unstructured" - "github.com/argoproj/argo/workflow/common" - "github.com/argoproj/argo/workflow/validate" + "k8s.io/utils/pointer" + + "github.com/cyrusbiotechnology/argo/errors" + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/pkg/client/clientset/versioned/typed/workflow/v1alpha1" + cmdutil "github.com/cyrusbiotechnology/argo/util/cmd" + "github.com/cyrusbiotechnology/argo/util/file" + "github.com/cyrusbiotechnology/argo/util/retry" + unstructutil "github.com/cyrusbiotechnology/argo/util/unstructured" + "github.com/cyrusbiotechnology/argo/workflow/common" + "github.com/cyrusbiotechnology/argo/workflow/validate" ) // NewWorkflowInformer returns the workflow informer used by the controller. This is actually // a custom built UnstructuredInformer which is in actuality returning unstructured.Unstructured // objects. 
We no longer return WorkflowInformer due to: // https://github.com/kubernetes/kubernetes/issues/57705 -// https://github.com/argoproj/argo/issues/632 +// https://github.com/cyrusbiotechnology/argo/issues/632 func NewWorkflowInformer(cfg *rest.Config, ns string, resyncPeriod time.Duration, tweakListOptions internalinterfaces.TweakListOptionsFunc) cache.SharedIndexInformer { dclient, err := dynamic.NewForConfig(cfg) if err != nil { @@ -132,13 +134,14 @@ func IsWorkflowCompleted(wf *wfv1.Workflow) bool { // SubmitOpts are workflow submission options type SubmitOpts struct { - Name string // --name - GenerateName string // --generate-name - InstanceID string // --instanceid - Entrypoint string // --entrypoint - Parameters []string // --parameter - ParameterFile string // --parameter-file - ServiceAccount string // --serviceaccount + Name string // --name + GenerateName string // --generate-name + InstanceID string // --instanceid + Entrypoint string // --entrypoint + Parameters []string // --parameter + ParameterFile string // --parameter-file + ServiceAccount string // --serviceaccount + OwnerReference *metav1.OwnerReference // useful if your custom controller creates argo workflow resources } // SubmitWorkflow validates and submit a single workflow and override some of the fields of the workflow @@ -233,7 +236,11 @@ func SubmitWorkflow(wfIf v1alpha1.WorkflowInterface, wf *wfv1.Workflow, opts *Su if opts.Name != "" { wf.ObjectMeta.Name = opts.Name } - err := validate.ValidateWorkflow(wf) + if opts.OwnerReference != nil { + wf.SetOwnerReferences(append(wf.GetOwnerReferences(), *opts.OwnerReference)) + } + + err := validate.ValidateWorkflow(wf, validate.ValidateOpts{}) if err != nil { return nil, err } @@ -251,8 +258,7 @@ func SuspendWorkflow(wfIf v1alpha1.WorkflowInterface, workflowName string) error return false, errSuspendedCompletedWorkflow } if wf.Spec.Suspend == nil || *wf.Spec.Suspend != true { - t := true - wf.Spec.Suspend = &t + wf.Spec.Suspend = pointer.BoolPtr(true) wf, err = wfIf.Update(wf) if err != nil { if apierr.IsConflict(err) { @@ -520,3 +526,19 @@ func TerminateWorkflow(wfClient v1alpha1.WorkflowInterface, name string) error { } return err } + +// DecompressWorkflow decompresses the compressed status of a workflow (if compressed) +func DecompressWorkflow(wf *wfv1.Workflow) error { + if wf.Status.CompressedNodes != "" { + nodeContent, err := file.DecodeDecompressString(wf.Status.CompressedNodes) + if err != nil { + return errors.InternalWrapError(err) + } + err = json.Unmarshal([]byte(nodeContent), &wf.Status.Nodes) + if err != nil { + return err + } + wf.Status.CompressedNodes = "" + } + return nil +} diff --git a/workflow/util/util_test.go b/workflow/util/util_test.go index 550affc03af2..c99544de2569 100644 --- a/workflow/util/util_test.go +++ b/workflow/util/util_test.go @@ -3,7 +3,7 @@ package util import ( "testing" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" "github.com/stretchr/testify/assert" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" ) diff --git a/workflow/validate/lint.go b/workflow/validate/lint.go index e14414e6cce6..4545c99be70a 100644 --- a/workflow/validate/lint.go +++ b/workflow/validate/lint.go @@ -5,11 +5,12 @@ import ( "os" "path/filepath" - "github.com/argoproj/argo/errors" - "github.com/argoproj/argo/pkg/apis/workflow" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" - "github.com/argoproj/argo/workflow/common" 
"github.com/argoproj/pkg/json" + + "github.com/cyrusbiotechnology/argo/errors" + "github.com/cyrusbiotechnology/argo/pkg/apis/workflow" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/workflow/common" ) // LintWorkflowDir validates all workflow manifests in a directory. Ignores non-workflow manifests @@ -60,7 +61,7 @@ func LintWorkflowFile(filePath string, strict bool) error { return errors.Errorf(errors.CodeBadRequest, "%s failed to parse: %v", filePath, err) } for _, wf := range workflows { - err = ValidateWorkflow(&wf, true) + err = ValidateWorkflow(&wf, ValidateOpts{Lint: true}) if err != nil { return errors.Errorf(errors.CodeBadRequest, "%s: %s", filePath, err.Error()) } diff --git a/workflow/validate/validate.go b/workflow/validate/validate.go index 83d813b7a485..8b73e1b98dfb 100644 --- a/workflow/validate/validate.go +++ b/workflow/validate/validate.go @@ -8,15 +8,31 @@ import ( "regexp" "strings" - "github.com/argoproj/argo/errors" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" - "github.com/argoproj/argo/workflow/common" "github.com/valyala/fasttemplate" apivalidation "k8s.io/apimachinery/pkg/util/validation" + + "github.com/cyrusbiotechnology/argo/errors" + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/workflow/artifacts/hdfs" + "github.com/cyrusbiotechnology/argo/workflow/common" ) +// ValidateOpts provides options when linting +type ValidateOpts struct { + // Lint indicates if this is performing validation in the context of linting. If true, will + // skip some validations which is permissible during linting but not submission (e.g. missing + // input parameters to the workflow) + Lint bool + // ContainerRuntimeExecutor will trigger additional validation checks specific to different + // types of executors. For example, the inability of kubelet/k8s executors to copy artifacts + // out of the base image layer. If unspecified, will use docker executor validation + ContainerRuntimeExecutor string +} + // wfValidationCtx is the context for validating a workflow spec type wfValidationCtx struct { + ValidateOpts + wf *wfv1.Workflow // globalParams keeps track of variables which are available the global // scope and can be referenced from anywhere. @@ -35,21 +51,19 @@ const ( anyItemMagicValue = "item.*" ) -// ValidateWorkflow accepts a workflow and performs validation against it. If lint is specified as -// true, will skip some validations which is permissible during linting but not submission -func ValidateWorkflow(wf *wfv1.Workflow, lint ...bool) error { +// ValidateWorkflow accepts a workflow and performs validation against it. +func ValidateWorkflow(wf *wfv1.Workflow, opts ValidateOpts) error { ctx := wfValidationCtx{ + ValidateOpts: opts, wf: wf, globalParams: make(map[string]string), results: make(map[string]bool), } - linting := len(lint) > 0 && lint[0] - err := validateWorkflowFieldNames(wf.Spec.Templates) if err != nil { return errors.Errorf(errors.CodeBadRequest, "spec.templates%s", err.Error()) } - if linting { + if ctx.Lint { // if we are just linting we don't care if spec.arguments.parameters.XXX doesn't have an // explicit value. 
workflows without a default value is a desired use case err = validateArgumentsFieldNames("spec.arguments.", wf.Spec.Arguments) @@ -65,6 +79,14 @@ func ValidateWorkflow(wf *wfv1.Workflow, lint ...bool) error { for _, param := range ctx.wf.Spec.Arguments.Parameters { ctx.globalParams["workflow.parameters."+param.Name] = placeholderValue } + + for k := range ctx.wf.ObjectMeta.Annotations { + ctx.globalParams["workflow.annotations."+k] = placeholderValue + } + for k := range ctx.wf.ObjectMeta.Labels { + ctx.globalParams["workflow.labels."+k] = placeholderValue + } + if ctx.wf.Spec.Entrypoint == "" { return errors.New(errors.CodeBadRequest, "spec.entrypoint is required") } @@ -110,6 +132,19 @@ func (ctx *wfValidationCtx) validateTemplate(tmpl *wfv1.Template, args wfv1.Argu localParams[common.LocalVarPodName] = placeholderValue scope[common.LocalVarPodName] = placeholderValue } + if tmpl.IsLeaf() { + for _, art := range tmpl.Outputs.Artifacts { + if art.Path != "" { + scope[fmt.Sprintf("outputs.artifacts.%s.path", art.Name)] = true + } + } + for _, param := range tmpl.Outputs.Parameters { + if param.ValueFrom != nil && param.ValueFrom.Path != "" { + scope[fmt.Sprintf("outputs.parameters.%s.path", param.Name)] = true + } + } + } + _, err = common.ProcessArgs(tmpl, args, ctx.globalParams, localParams, true) if err != nil { return errors.Errorf(errors.CodeBadRequest, "templates.%s %s", tmpl.Name, err) @@ -132,6 +167,16 @@ func (ctx *wfValidationCtx) validateTemplate(tmpl *wfv1.Template, args wfv1.Argu if err != nil { return err } + err = ctx.validateBaseImageOutputs(tmpl) + if err != nil { + return err + } + if tmpl.ArchiveLocation != nil { + err = validateArtifactLocation("templates.archiveLocation", *tmpl.ArchiveLocation) + if err != nil { + return err + } + } return nil } @@ -174,6 +219,7 @@ func validateInputs(tmpl *wfv1.Template) (map[string]interface{}, error) { if art.Path == "" { return nil, errors.Errorf(errors.CodeBadRequest, "templates.%s.%s.path not specified", tmpl.Name, artRef) } + scope[fmt.Sprintf("inputs.artifacts.%s.path", art.Name)] = true } else { if art.Path != "" { return nil, errors.Errorf(errors.CodeBadRequest, "templates.%s.%s.path only valid in container/script templates", tmpl.Name, artRef) @@ -183,7 +229,7 @@ func validateInputs(tmpl *wfv1.Template) (map[string]interface{}, error) { return nil, errors.Errorf(errors.CodeBadRequest, "templates.%s.%s.from not valid in inputs", tmpl.Name, artRef) } errPrefix := fmt.Sprintf("templates.%s.%s", tmpl.Name, artRef) - err = validateArtifactLocation(errPrefix, art) + err = validateArtifactLocation(errPrefix, art.ArtifactLocation) if err != nil { return nil, err } @@ -191,12 +237,18 @@ func validateInputs(tmpl *wfv1.Template) (map[string]interface{}, error) { return scope, nil } -func validateArtifactLocation(errPrefix string, art wfv1.Artifact) error { +func validateArtifactLocation(errPrefix string, art wfv1.ArtifactLocation) error { if art.Git != nil { if art.Git.Repo == "" { return errors.Errorf(errors.CodeBadRequest, "%s.git.repo is required", errPrefix) } } + if art.HDFS != nil { + err := hdfs.ValidateArtifact(fmt.Sprintf("%s.hdfs", errPrefix), art.HDFS) + if err != nil { + return err + } + } // TODO: validate other artifact locations return nil } @@ -208,6 +260,11 @@ func resolveAllVariables(scope map[string]interface{}, tmplStr string) error { fstTmpl := fasttemplate.New(tmplStr, "{{", "}}") fstTmpl.ExecuteFuncString(func(w io.Writer, tag string) (int, error) { + + // Skip the custom variable references + if 
!checkValidWorkflowVariablePrefix(tag) {
+ return 0, nil
+ }
_, ok := scope[tag]
if !ok && unresolvedErr == nil {
if (tag == "item" || strings.HasPrefix(tag, "item.")) && allowAllItemRefs {
@@ -223,6 +280,16 @@ func resolveAllVariables(scope map[string]interface{}, tmplStr string) error {
return unresolvedErr
}
+// checkValidWorkflowVariablePrefix is a helper method that checks whether a variable starts with one of the workflow root elements
+func checkValidWorkflowVariablePrefix(tag string) bool {
+ for _, rootTag := range common.GlobalVarValidWorkflowVariablePrefix {
+ if strings.HasPrefix(tag, rootTag) {
+ return true
+ }
+ }
+ return false
+}
+
func validateNonLeaf(tmpl *wfv1.Template) error {
if tmpl.ActiveDeadlineSeconds != nil {
return errors.Errorf(errors.CodeBadRequest, "templates.%s.activeDeadlineSeconds is only valid for leaf templates", tmpl.Name)
@@ -501,6 +568,51 @@ func validateOutputs(scope map[string]interface{}, tmpl *wfv1.Template) error {
return nil
}
+// validateBaseImageOutputs detects if the template contains an output from the base image layer,
+// which not all container runtime executors support
+func (ctx *wfValidationCtx) validateBaseImageOutputs(tmpl *wfv1.Template) error {
+ switch ctx.ContainerRuntimeExecutor {
+ case "", common.ContainerRuntimeExecutorDocker:
+ // docker executor supports all modes of artifact outputs
+ case common.ContainerRuntimeExecutorPNS:
+ // pns supports copying from the base image, but only if there is no volume mount underneath it
+ errMsg := "pns executor does not support outputs from base image layer with volume mounts. must use emptyDir"
+ for _, out := range tmpl.Outputs.Artifacts {
+ if common.FindOverlappingVolume(tmpl, out.Path) == nil {
+ // output is in the base image layer. need to verify there are no volume mounts under it
+ if tmpl.Container != nil {
+ for _, volMnt := range tmpl.Container.VolumeMounts {
+ if strings.HasPrefix(volMnt.MountPath, out.Path+"/") {
+ return errors.Errorf(errors.CodeBadRequest, "templates.%s.outputs.artifacts.%s: %s", tmpl.Name, out.Name, errMsg)
+ }
+ }
+
+ }
+ if tmpl.Script != nil {
+ for _, volMnt := range tmpl.Script.VolumeMounts {
+ if strings.HasPrefix(volMnt.MountPath, out.Path+"/") {
+ return errors.Errorf(errors.CodeBadRequest, "templates.%s.outputs.artifacts.%s: %s", tmpl.Name, out.Name, errMsg)
+ }
+ }
+ }
+ }
+ }
+ case common.ContainerRuntimeExecutorK8sAPI, common.ContainerRuntimeExecutorKubelet:
+ // for the kubelet/k8s API executors, fail validation if we detect an artifact being copied from the base image layer
+ errMsg := fmt.Sprintf("%s executor does not support outputs from base image layer.
must use emptyDir", ctx.ContainerRuntimeExecutor) + for _, out := range tmpl.Outputs.Artifacts { + if common.FindOverlappingVolume(tmpl, out.Path) == nil { + return errors.Errorf(errors.CodeBadRequest, "templates.%s.outputs.artifacts.%s: %s", tmpl.Name, out.Name, errMsg) + } + } + for _, out := range tmpl.Outputs.Parameters { + if out.ValueFrom != nil && common.FindOverlappingVolume(tmpl, out.ValueFrom.Path) == nil { + return errors.Errorf(errors.CodeBadRequest, "templates.%s.outputs.parameters.%s: %s", tmpl.Name, out.Name, errMsg) + } + } + } + return nil +} + // validateOutputParameter verifies that only one of valueFrom is defined in an output func validateOutputParameter(paramRef string, param *wfv1.Parameter) error { if param.ValueFrom == nil { diff --git a/workflow/validate/validate_test.go b/workflow/validate/validate_test.go index 886dfd126bbf..e3787c4ac3ba 100644 --- a/workflow/validate/validate_test.go +++ b/workflow/validate/validate_test.go @@ -3,17 +3,19 @@ package validate import ( "testing" - wfv1 "github.com/argoproj/argo/pkg/apis/workflow/v1alpha1" - "github.com/argoproj/argo/test" "github.com/ghodss/yaml" "github.com/stretchr/testify/assert" + + wfv1 "github.com/cyrusbiotechnology/argo/pkg/apis/workflow/v1alpha1" + "github.com/cyrusbiotechnology/argo/test" + "github.com/cyrusbiotechnology/argo/workflow/common" ) // validate is a test helper to accept YAML as a string and return // its validation result. func validate(yamlStr string) error { wf := unmarshalWf(yamlStr) - return ValidateWorkflow(wf) + return ValidateWorkflow(wf, ValidateOpts{}) } func unmarshalWf(yamlStr string) *wfv1.Workflow { @@ -163,6 +165,76 @@ func TestUnresolved(t *testing.T) { } } +var ioArtifactPaths = ` +apiVersion: argoproj.io/v1alpha1 +kind: Workflow +metadata: + generateName: artifact-path-placeholders- +spec: + entrypoint: head-lines + arguments: + parameters: + - name: lines-count + value: 3 + artifacts: + - name: text + raw: + data: | + 1 + 2 + 3 + 4 + 5 + templates: + - name: head-lines + inputs: + parameters: + - name: lines-count + artifacts: + - name: text + path: /inputs/text/data + outputs: + parameters: + - name: actual-lines-count + valueFrom: + path: /outputs/actual-lines-count/data + artifacts: + - name: text + path: /outputs/text/data + container: + image: busybox + command: [sh, -c, 'head -n {{inputs.parameters.lines-count}} <"{{inputs.artifacts.text.path}}" | tee "{{outputs.artifacts.text.path}}" | wc -l > "{{outputs.parameters.actual-lines-count.path}}"'] +` + +func TestResolveIOArtifactPathPlaceholders(t *testing.T) { + err := validate(ioArtifactPaths) + assert.Nil(t, err) +} + +var outputParameterPath = ` +apiVersion: argoproj.io/v1alpha1 +kind: Workflow +metadata: + generateName: get-current-date- +spec: + entrypoint: get-current-date + templates: + - name: get-current-date + outputs: + parameters: + - name: current-date + valueFrom: + path: /tmp/current-date + container: + image: busybox + command: [sh, -c, 'date > {{outputs.parameters.current-date.path}}'] +` + +func TestResolveOutputParameterPathPlaceholder(t *testing.T) { + err := validate(outputParameterPath) + assert.Nil(t, err) +} + var stepOutputReferences = ` apiVersion: argoproj.io/v1alpha1 kind: Workflow @@ -736,7 +808,7 @@ spec: - name: argo-source path: /src git: - repo: https://github.com/argoproj/argo.git + repo: https://github.com/cyrusbiotechnology/argo.git container: image: alpine:latest command: [sh, -c] @@ -749,13 +821,13 @@ spec: func TestVolumeMountArtifactPathCollision(t *testing.T) { // 
ensure we detect and reject path collisions wf := unmarshalWf(volumeMountArtifactPathCollision) - err := ValidateWorkflow(wf) + err := ValidateWorkflow(wf, ValidateOpts{}) if assert.NotNil(t, err) { assert.Contains(t, err.Error(), "already mounted") } // tweak the mount path and validation should now be successful wf.Spec.Templates[0].Container.VolumeMounts[0].MountPath = "/differentpath" - err = ValidateWorkflow(wf) + err = ValidateWorkflow(wf, ValidateOpts{}) assert.Nil(t, err) } @@ -1041,7 +1113,7 @@ func TestPodNameVariable(t *testing.T) { } func TestGlobalParamWithVariable(t *testing.T) { - err := ValidateWorkflow(test.LoadE2EWorkflow("functional/global-outputs-variable.yaml")) + err := ValidateWorkflow(test.LoadE2EWorkflow("functional/global-outputs-variable.yaml"), ValidateOpts{}) assert.Nil(t, err) } @@ -1066,9 +1138,9 @@ spec: // TestSpecArgumentNoValue we allow parameters to have no value at the spec level during linting func TestSpecArgumentNoValue(t *testing.T) { wf := unmarshalWf(specArgumentNoValue) - err := ValidateWorkflow(wf, true) + err := ValidateWorkflow(wf, ValidateOpts{Lint: true}) assert.Nil(t, err) - err = ValidateWorkflow(wf) + err = ValidateWorkflow(wf, ValidateOpts{}) assert.NotNil(t, err) } @@ -1103,7 +1175,7 @@ spec: // TestSpecArgumentSnakeCase we allow parameter and artifact names to be snake case func TestSpecArgumentSnakeCase(t *testing.T) { wf := unmarshalWf(specArgumentSnakeCase) - err := ValidateWorkflow(wf, true) + err := ValidateWorkflow(wf, ValidateOpts{Lint: true}) assert.Nil(t, err) } @@ -1133,12 +1205,154 @@ spec: container: image: alpine:latest command: [echo, "{{inputs.parameters.num}}"] - ` // TestSpecBadSequenceCountAndEnd verifies both count and end cannot be defined func TestSpecBadSequenceCountAndEnd(t *testing.T) { wf := unmarshalWf(specBadSequenceCountAndEnd) - err := ValidateWorkflow(wf, true) + err := ValidateWorkflow(wf, ValidateOpts{Lint: true}) assert.Error(t, err) } + +var customVariableInput = ` +apiVersion: argoproj.io/v1alpha1 +kind: Workflow +metadata: + generateName: hello-world- +spec: + entrypoint: whalesay + templates: + - name: whalesay + container: + image: docker/whalesay:{{user.username}} +` + +// TestCustomTemplatVariable verifies custom template variable +func TestCustomTemplatVariable(t *testing.T) { + wf := unmarshalWf(customVariableInput) + err := ValidateWorkflow(wf, ValidateOpts{Lint: true}) + assert.Equal(t, err, nil) +} + +var baseImageOutputArtifact = ` +apiVersion: argoproj.io/v1alpha1 +kind: Workflow +metadata: + generateName: base-image-out-art- +spec: + entrypoint: base-image-out-art + templates: + - name: base-image-out-art + container: + image: alpine:latest + command: [echo, hello] + outputs: + artifacts: + - name: tmp + path: /tmp +` + +var baseImageOutputParameter = ` +apiVersion: argoproj.io/v1alpha1 +kind: Workflow +metadata: + generateName: base-image-out-art- +spec: + entrypoint: base-image-out-art + templates: + - name: base-image-out-art + container: + image: alpine:latest + command: [echo, hello] + outputs: + parameters: + - name: tmp + valueFrom: + path: /tmp/file +` + +var volumeMountOutputArtifact = ` +apiVersion: argoproj.io/v1alpha1 +kind: Workflow +metadata: + generateName: base-image-out-art- +spec: + entrypoint: base-image-out-art + volumes: + - name: workdir + emptyDir: {} + templates: + - name: base-image-out-art + container: + image: alpine:latest + command: [echo, hello] + volumeMounts: + - name: workdir + mountPath: /mnt/vol + outputs: + artifacts: + - name: workdir + path: /mnt/vol 
+`
+
+var baseImageDirWithEmptyDirOutputArtifact = `
+apiVersion: argoproj.io/v1alpha1
+kind: Workflow
+metadata:
+ generateName: base-image-out-art-
+spec:
+ entrypoint: base-image-out-art
+ volumes:
+ - name: workdir
+ emptyDir: {}
+ templates:
+ - name: base-image-out-art
+ container:
+ image: alpine:latest
+ command: [echo, hello]
+ volumeMounts:
+ - name: workdir
+ mountPath: /mnt/vol
+ outputs:
+ artifacts:
+ - name: workdir
+ path: /mnt
+`
+
+// TestBaseImageOutputVerify verifies that validation fails when the configured container runtime
+// executor does not support output artifacts or parameters sourced from the base image layer
+func TestBaseImageOutputVerify(t *testing.T) {
+ wfBaseOutArt := unmarshalWf(baseImageOutputArtifact)
+ wfBaseOutParam := unmarshalWf(baseImageOutputParameter)
+ wfEmptyDirOutArt := unmarshalWf(volumeMountOutputArtifact)
+ wfBaseWithEmptyDirOutArt := unmarshalWf(baseImageDirWithEmptyDirOutputArtifact)
+ var err error
+
+ for _, executor := range []string{common.ContainerRuntimeExecutorK8sAPI, common.ContainerRuntimeExecutorKubelet, common.ContainerRuntimeExecutorPNS, common.ContainerRuntimeExecutorDocker, ""} {
+ switch executor {
+ case common.ContainerRuntimeExecutorK8sAPI, common.ContainerRuntimeExecutorKubelet:
+ err = ValidateWorkflow(wfBaseOutArt, ValidateOpts{ContainerRuntimeExecutor: executor})
+ assert.Error(t, err)
+ err = ValidateWorkflow(wfBaseOutParam, ValidateOpts{ContainerRuntimeExecutor: executor})
+ assert.Error(t, err)
+ err = ValidateWorkflow(wfBaseWithEmptyDirOutArt, ValidateOpts{ContainerRuntimeExecutor: executor})
+ assert.Error(t, err)
+ case common.ContainerRuntimeExecutorPNS:
+ err = ValidateWorkflow(wfBaseOutArt, ValidateOpts{ContainerRuntimeExecutor: executor})
+ assert.NoError(t, err)
+ err = ValidateWorkflow(wfBaseOutParam, ValidateOpts{ContainerRuntimeExecutor: executor})
+ assert.NoError(t, err)
+ err = ValidateWorkflow(wfBaseWithEmptyDirOutArt, ValidateOpts{ContainerRuntimeExecutor: executor})
+ assert.Error(t, err)
+ case common.ContainerRuntimeExecutorDocker, "":
+ err = ValidateWorkflow(wfBaseOutArt, ValidateOpts{ContainerRuntimeExecutor: executor})
+ assert.NoError(t, err)
+ err = ValidateWorkflow(wfBaseOutParam, ValidateOpts{ContainerRuntimeExecutor: executor})
+ assert.NoError(t, err)
+ err = ValidateWorkflow(wfBaseWithEmptyDirOutArt, ValidateOpts{ContainerRuntimeExecutor: executor})
+ assert.NoError(t, err)
+ }
+ err = ValidateWorkflow(wfEmptyDirOutArt, ValidateOpts{ContainerRuntimeExecutor: executor})
+ assert.NoError(t, err)
+ }
+}
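
For context on how the reworked resource handling above chains together: ExecResource now reports the resource's namespace in addition to its name, and WaitResource and SaveResourceParameters accept that namespace so their kubectl invocations can be scoped with -n. The sketch below is illustrative only; the wrapper function runResourceTemplate, its package, and the way action, manifestPath and isDelete are obtained are assumptions for illustration, not part of this patch.

package resourceexample

import (
	"github.com/cyrusbiotechnology/argo/workflow/executor"
)

// runResourceTemplate is a hypothetical wrapper showing the intended call sequence
// against the updated executor API. `we` is assumed to be an initialized
// *executor.WorkflowExecutor; action/manifestPath/isDelete are placeholders.
func runResourceTemplate(we *executor.WorkflowExecutor, action, manifestPath string, isDelete bool) error {
	// ExecResource now returns (namespace, name, error) rather than (name, error).
	namespace, name, err := we.ExecResource(action, manifestPath, isDelete)
	if err != nil {
		return err
	}
	// WaitResource polls `kubectl get -w` every 5 seconds until the template's
	// success or failure condition is met, adding -n <namespace> when one is known.
	if err := we.WaitResource(namespace, name); err != nil {
		return err
	}
	// SaveResourceParameters resolves jsonpath/jq output parameters, again
	// scoping kubectl to the reported namespace.
	return we.SaveResourceParameters(namespace, name)
}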