chore(docs): document how to set imagePullSecrets in baseJobTemplate #446

Merged Feb 11, 2025 (6 commits)
charts/prefect-worker/README.md (36 additions, 3 deletions)
@@ -241,17 +241,50 @@

```
prefect work-pool get-default-base-job-template --type kubernetes > base-job-template.json
helm install prefect-worker prefect/prefect-worker -f values.yaml --set-file worker.config.baseJobTemplate.configuration=base-job-template.json
```

#### Using the Base Job Template

The worker uses the [base job template](https://docs.prefect.io/v3/deploy/infrastructure-concepts/work-pools#base-job-template)
to create the Kubernetes job that executes your workflow. The base job template configuration can be modified by setting
`worker.config.baseJobTemplate.configuration`. For example, to set image pull secrets for the container that runs your flow,
use the following values:

```yaml
worker:
  config:
    baseJobTemplate:
      configuration: |
        {
          "job_configuration": {
            "job_manifest": {
              "spec": {
                "template": {
                  "spec": {
                    "imagePullSecrets": [
                      {
                        "name": "my-pull-secret"
                      }
                    ]
                  }
                }
              }
            }
          }
        }
```

> **@mitchnielsen** (Contributor Author) commented on Feb 10, 2025:
>
>     $ k explain deployment.spec.template.spec.imagePullSecrets
>     GROUP:      apps
>     KIND:       Deployment
>     VERSION:    v1
>
>     FIELD: imagePullSecrets <[]LocalObjectReference>
>
>     DESCRIPTION:
>         ImagePullSecrets is an optional list of references to secrets in the same
>         namespace to use for pulling any of the images used by this PodSpec. If
>         specified, these secrets will be passed to individual puller implementations
>         for them to use. More info:
>         https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod
>         LocalObjectReference contains enough information to let you locate the
>         referenced object inside the same namespace.
>
>     FIELDS:
>       name  <string>
>         Name of the referent. This field is effectively required, but due to
>         backwards compatibility is allowed to be empty. Instances of this type with
>         an empty value here are almost certainly wrong. More info:
>         https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
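The `my-pull-secret` referenced above must already exist in the namespace where your flow-run jobs are created. As a hypothetical example (the namespace, registry, and credentials below are placeholders), it could be created with:

```
kubectl create secret docker-registry my-pull-secret \
  --namespace my-flow-namespace \
  --docker-server=registry.example.com \
  --docker-username=my-user \
  --docker-password=my-token
```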
You can see the entire base job template in the UI by navigating to `Account settings` > `Work Pools` > your work pool > three-dot menu
in the top right corner > `Edit` > `Base Job Template` section > `Advanced` tab.
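If you prefer the CLI, the current template for a work pool (here assuming a hypothetical pool named `my-work-pool`) can be inspected with:

```
prefect work-pool inspect "my-work-pool"
```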

#### Updating the Base Job Template

If a base job template is set through Helm (via either `worker.config.baseJobTemplate.configuration` or `worker.config.baseJobTemplate.existingConfigMapName`), we'll run an optional `initContainer` that will sync the template configuration to the work pool named in `worker.config.workPool`.

Any time the base job template is updated, the next `initContainer` run will execute `prefect work-pool update <work-pool-name> --base-job-template <template-json>` to sync the template to the API.
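For reference, the same sync can be performed manually - assuming a hypothetical work pool named `my-work-pool` and the template saved locally as `base-job-template.json`:

```
prefect work-pool update "my-work-pool" --base-job-template base-job-template.json
```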

Please note that configuring the template via `baseJobTemplate.existingConfigMapName` requires a manual restart of the `prefect-worker` Deployment to kick off the `initContainer` - alternatively, a tool like [reloader](https://github.com/stakater/Reloader) can restart the associated Deployment automatically. Configuring the template via the `baseJobTemplate.configuration` value, by contrast, automatically rolls the Deployment on any update.
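As a rough sketch of the `existingConfigMapName` route (the ConfigMap name below is a placeholder, and the key the chart expects may differ - check the chart's values documentation before relying on this):

```
kubectl create configmap my-base-job-template \
  --from-file=baseJobTemplate.json=base-job-template.json
kubectl rollout restart deployment/prefect-worker
```

Then set `worker.config.baseJobTemplate.existingConfigMapName` to `my-base-job-template` in your values.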

## Troubleshooting

### Setting `worker.clusterUid`

This chart attempts to generate a unique identifier for the cluster it is installing the worker on to use as metadata for your runs. Since Kubernetes [does not provide a "cluster ID" API](https://github.com/kubernetes/kubernetes/issues/44954), this chart will do so by [reading the `kube-system` namespace and parsing the immutable UID](https://github.com/PrefectHQ/prefect-helm/blob/main/charts/prefect-worker/templates/_helpers.tpl#L94-L105). [This mimics the functionality in the `prefect-kubernetes` library](https://github.com/PrefectHQ/prefect/blob/5f5427c410cd04505d7b2c701e2003f856044178/src/integrations/prefect-kubernetes/prefect_kubernetes/worker.py#L835-L859).
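For reference, the UID the chart looks up can be fetched directly with standard `kubectl` (assuming your credentials permit it):

```
kubectl get namespace kube-system -o jsonpath='{.metadata.uid}'
```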

@@ -264,7 +297,7 @@ This chart does not offer a built-in way to assign these roles, as it does not m

> HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"namespaces \"kube-system\" is forbidden: User \"system:serviceaccount:prefect:prefect-worker\" cannot get resource \"namespaces\" in API group \"\" in the namespace \"kube-system\"","reason":"Forbidden","details":{"name":"kube-system","kind":"namespaces"},"code":403}
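If granting this access is acceptable in your environment, a minimal RBAC sketch might look like the following - the Role and RoleBinding names are placeholders, and the ServiceAccount matches the one in the error above:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kube-system-reader  # placeholder name
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prefect-worker-kube-system-reader  # placeholder name
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: prefect-worker  # from the error message above
    namespace: prefect
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kube-system-reader
```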

In many cases, these role additions may be entirely infeasible due to overall access limitations. As an alternative, this chart offers a hard-coded override via the `worker.clusterUid` value.

Set this value to a user-provided unique ID - this bypasses the `kube-system` namespace lookup and uses your provided value as the cluster ID instead. Be sure to set this value consistently across your Prefect deployments that interact with the same cluster.
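For example (the ID below is a placeholder - any stable, unique string works):

```yaml
worker:
  clusterUid: "my-cluster-id"
```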

charts/prefect-worker/README.md.gotmpl (36 additions, 3 deletions)
@@ -241,17 +241,50 @@

```
prefect work-pool get-default-base-job-template --type kubernetes > base-job-template.json
helm install prefect-worker prefect/prefect-worker -f values.yaml --set-file worker.config.baseJobTemplate.configuration=base-job-template.json
```

#### Using the Base Job Template

The worker uses the [base job template](https://docs.prefect.io/v3/deploy/infrastructure-concepts/work-pools#base-job-template)
to create the Kubernetes job that executes your workflow. The base job template configuration can be modified by setting
`worker.config.baseJobTemplate.configuration`. For example, to set image pull secrets for the container that runs your flow,
use the following values:

```yaml
worker:
  config:
    baseJobTemplate:
      configuration: |
        {
          "job_configuration": {
            "job_manifest": {
              "spec": {
                "template": {
                  "spec": {
                    "imagePullSecrets": [
                      {
                        "name": "my-pull-secret"
                      }
                    ]
                  }
                }
              }
            }
          }
        }
```
You can see the entire base job template in the UI by navigating to `Account settings` > `Work Pools` > your work pool > three-dot menu
in the top right corner > `Edit` > `Base Job Template` section > `Advanced` tab.

#### Updating the Base Job Template

If a base job template is set through Helm (via either `worker.config.baseJobTemplate.configuration` or `worker.config.baseJobTemplate.existingConfigMapName`), we'll run an optional `initContainer` that will sync the template configuration to the work pool named in `worker.config.workPool`.

Any time the base job template is updated, the next `initContainer` run will execute `prefect work-pool update <work-pool-name> --base-job-template <template-json>` to sync the template to the API.

Please note that configuring the template via `baseJobTemplate.existingConfigMapName` requires a manual restart of the `prefect-worker` Deployment to kick off the `initContainer` - alternatively, a tool like [reloader](https://github.com/stakater/Reloader) can restart the associated Deployment automatically. Configuring the template via the `baseJobTemplate.configuration` value, by contrast, automatically rolls the Deployment on any update.

## Troubleshooting

### Setting `worker.clusterUid`

This chart attempts to generate a unique identifier for the cluster it is installing the worker on to use as metadata for your runs. Since Kubernetes [does not provide a "cluster ID" API](https://github.com/kubernetes/kubernetes/issues/44954), this chart will do so by [reading the `kube-system` namespace and parsing the immutable UID](https://github.com/PrefectHQ/prefect-helm/blob/main/charts/prefect-worker/templates/_helpers.tpl#L94-L105). [This mimics the functionality in the `prefect-kubernetes` library](https://github.com/PrefectHQ/prefect/blob/5f5427c410cd04505d7b2c701e2003f856044178/src/integrations/prefect-kubernetes/prefect_kubernetes/worker.py#L835-L859).

@@ -264,7 +297,7 @@ This chart does not offer a built-in way to assign these roles, as it does not m

> HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"namespaces \"kube-system\" is forbidden: User \"system:serviceaccount:prefect:prefect-worker\" cannot get resource \"namespaces\" in API group \"\" in the namespace \"kube-system\"","reason":"Forbidden","details":{"name":"kube-system","kind":"namespaces"},"code":403}

In many cases, these role additions may be entirely infeasible due to overall access limitations. As an alternative, this chart offers a hard-coded override via the `worker.clusterUid` value.

Set this value to a user-provided unique ID - this bypasses the `kube-system` namespace lookup and uses your provided value as the cluster ID instead. Be sure to set this value consistently across your Prefect deployments that interact with the same cluster.
