Use 'tpl' function for owner value. #346
Conversation
Utilized the tpl function to evaluate the owner string as a template inside the Helm template. Signed-off-by: apriebeAVSystem <a.priebe+git@avsystem.com>
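In practice the change is small: the chart passes the configured owner string through Helm's `tpl` function before rendering it, so the value may itself contain template expressions. A minimal sketch of the idea (field path simplified, not necessarily the chart's exact template):

```yaml
# Before: the value is rendered verbatim
owner: {{ .Values.cluster.initdb.owner }}

# After: the value is first evaluated as a template against the chart's context,
# so it may reference other values, e.g. "{{ .Values.global.databaseOwner }}" (hypothetical name)
owner: {{ tpl .Values.cluster.initdb.owner . }}
```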
What's your use case? |
Hi, @itay-grudev I want to assign a value from my root values.yaml into cloudnative-pg. |
Is this property really the only deal breaker for you? |
Yes, it's really important for me to use that tpl function |
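To illustrate the use case being described (a hypothetical wrapper-chart values.yaml, not the actual one): with the `tpl` change, the owner passed to the `cluster` subchart can reference a shared global value and is resolved at install time.

```yaml
# values.yaml of a hypothetical wrapper chart that depends on the "cluster" chart
global:
  tenantName: acme        # shared with subcharts via Helm's global values

cluster:                  # values forwarded to the cloudnative-pg "cluster" dependency
  cluster:
    initdb:
      database: app
      owner: "{{ .Values.global.tenantName }}"   # only works if the chart runs this through tpl
```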
I would like to make one note though: I deploy it with Terraform or ArgoCD. |
You misunderstood. I'm asking if this is the only property preventing you from doing so? |
In the company, we create our own charts to facilitate their use by internal developers (adding custom features dedicated to the company only). At the same time, we want to use "upstream" resources to help develop the main charts. |
Hi, @itay-grudev Your approval would help us continue utilizing this Helm chart effectively within our company. |
@itay-grudev this is standard practice to make a "wrapper chart", in which a company makes a chart like CNPG a dependency; this allows customization for automated pipelines and ArgoCD. The issue is that the chart you depend on needs to be templatable and customizable where it makes sense. Bitnami charts are a GOLD STANDARD of doing this: https://github.com/bitnami/charts (actually some are very horrible while others are amazing...). We hope to contribute much more to this project. We track our own values.yaml or helm overrides in pipelines and our own changelog. We do this for MANY other program charts like elasticsearch, grafana, jaeger, kafka, mongodb, postgresql, redis, prometheus, and more.

If you plan to provide helm charts (which is much appreciated), then allowing some things to change is good. We have found, though, that charts with operators that enforce specs via mutation hooks tend to be very limited charts. You essentially must pass in a full spec override, which is not great. Here is a good example of a highly customizable spec: https://github.com/bitnami/charts/blob/main/bitnami/postgresql-ha/templates/postgresql/statefulset.yaml. CNPG could offer similar spec customization where it aligns with the operator schema.

Generally a company will "wrap" a chart so we can generate our own configmaps or other templates to pass into the system or chart on deployment. We need to internally host our own repository copies for security purposes. We also deliver multiple different environments, including air-gapped envs, where we MUST be able to easily override and modify our configs. We don't always have access to ArgoCD or kustomize or other methods to modify and deploy, as some networks are restricted.

Here is an example wrapper we use for the CNPG cluster at the moment:

```yaml
apiVersion: v2
name: postgresql-cnpg-cluster
# renovate: datasource=helm depName=cluster repository=https://cloudnative-pg.github.io/charts
version: 0.0.9-2
description: Deploys and manages a CloudNativePG cluster and its associated resources.
dependencies:
  - name: cluster
    version: 0.0.9
    repository: https://cloudnative-pg.github.io/charts
sources:
  - https://github.com/cloudnative-pg/
  - https://github.com/cloudnative-pg/charts/tree/main/charts/cluster
  - https://cloudnative-pg.io/
  - https://github.com/cloudnative-pg/pgbouncer-containers/pkgs/container/pgbouncer
  - https://github.com/cloudnative-pg/postgres-containers/pkgs/container/postgresql
```
Here is the values.yaml we currently use:

```yaml
cluster:
  nameOverride: "postgres"

  pooler:
    enabled: true
    instances: 1
    template:
      spec:
        containers:
          - name: pgbouncer
            # https://github.com/cloudnative-pg/pgbouncer-containers/pkgs/container/pgbouncer
            image: "cdn.internalrepo.global.company.com/ext.ghcr.io/cloudnative-pg/pgbouncer:1.23.0"
            resources:
              requests:
                cpu: "0.1"
                memory: 100Mi
              limits:
                memory: 500Mi
        initContainers:
          - name: bootstrap-controller
            resources:
              requests:
                cpu: "0.1"
                memory: 100Mi
              limits:
                memory: 500Mi

  cluster:
    instances: 3
    postgresql:
      max_connections: "250"
      # pgaudit parameters being present automatically enables pgaudit logging in the cluster
      pgaudit.log: "all, -misc"
      pgaudit.log_catalog: "off"
      pgaudit.log_parameter: "on"
      pgaudit.log_relation: "on"
      client_min_messages: "notice"
      log_line_prefix: "< %m %a %u %d %p %c %s %r >"
      log_checkpoints: "on"
      log_duration: "off"
      log_error_verbosity: "default"
      log_hostname: "off"
      log_lock_waits: "on"
      log_statement: "none"
      log_min_messages: "DEBUG1"
      log_min_error_statement: "ERROR"
      log_min_duration_statement: "-1"
      log_timezone: "UTC"
      # this is not usable until the cluster yaml gets updated in the CNPG github to template pg_ident based on values.yaml
      #pg_ident:
      #- 'cert-users /^(.*)\.users\.test\.us\.com$ \1'
      #- 'cert-users postgres.containers.test.us.com postgres'
    # container versions https://github.com/cloudnative-pg/postgres-containers/pkgs/container/postgresql
    imageName: "cdn.internalrepo.global.company.com/ext.ghcr.io/cloudnative-pg/postgresql:14.12"
    resources:
      limits:
        memory: 8Gi
      requests:
        cpu: 2000m
        memory: 8Gi
```

This is then used in an even higher wrapper chart for our overall infrastructure deployment, where we pass in more overrides. Here is an example of one of those overrides:

```yaml
global:
  infrastructureServiceDiscovery:
    postgresql:
      name: "postgres-rw"

postgresql-backup:
  enabled: false

postgresql-ha:
  enabled: false

postgresql-cnpg-cluster:
  enabled: true
  cluster:
    pooler:
      enabled: false
      poolMode: session
      parameters:
        max_client_conn: "1000"
        default_pool_size: "300"
    cluster:
      initdb:
        database: test
        owner: test
        # https://github.com/cloudnative-pg/cloudnative-pg/blob/631bb20c500a64564773db0f98fc66704c6d0f54/docs/src/samples/cluster-example-secret.yaml
        # secret file must be of type "kubernetes.io/basic-auth"
        secret:
          name: infrastructure-postgresql-secrets-auth
```

We also do not expect to ever use the "CNPG kube plugin" as we are required to be mostly hands off in certain environments. |
@itay-grudev any progress on reviewing this PR? I am patiently waiting for some dialogue on the PR to decide whether we at AVSystem should use and contribute to this helm chart or whether we should scrap the idea and develop a similar thing in-house. The lack of any discussion sadly brings us closer to scrapping the idea of collaborating, which IMHO would be disadvantageous for both parties. |
I am still thinking about it, but I am leaning in the direction of adopting it. I am worried that if I go this route we'll have to patch every single option and allow everything to be evaluated at runtime. The other thing I am worried about is that recovery cannot be initiated by the chart if it is a sub-chart. An orchestration tool that sits above Helm can handle this, but not Helm itself. And adopting the database as a subchart will make recovery operations harder, should that ever be needed. |
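For reference, the "patch every single option" concern is often addressed with a generic render helper rather than per-field `tpl` calls; Bitnami's `common.tplvalues.render` is the well-known example of this pattern. A minimal sketch, assuming a hypothetical helper named `cluster.tplvalue`:

```yaml
{{/* templates/_helpers.tpl (sketch) */}}
{{- define "cluster.tplvalue" -}}
  {{- if typeIs "string" .value }}
    {{- tpl .value .context }}
  {{- else }}
    {{- tpl (.value | toYaml) .context }}
  {{- end }}
{{- end -}}

{{/* usage inside a template: render any value, templated or not */}}
owner: {{ include "cluster.tplvalue" (dict "value" .Values.cluster.initdb.owner "context" $) }}
```

This keeps each per-template change down to swapping a direct value reference for an `include`, instead of hand-patching every option with its own `tpl` call.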
I agree, that is a valid concern and a line needs to be drawn somewhere. However, the current solution is quite limiting in terms of using the chart in production and/or GitOps environments, mostly due to how Secrets are handled by the helm chart itself, and IMHO right now is the best time to address this since the chart is still in its infancy. For me, using
I am curious why recovery couldn't be initiated? Looking at https://github.com/cloudnative-pg/charts/blob/main/charts/cluster/examples/recovery-backup.yaml, I don't see why this couldn't be done from a wrapped chart as below. Is there some limitation in a particular template that you can point me to?

```yaml
cnpg-cluster:
  mode: recovery
  recovery:
    method: backup
    backupName: "database-clustermarket-database-daily-backup-1683244800"
  cluster:
    instances: 1
  backups:
    provider: s3
    s3:
      region: "eu-west-1"
      bucket: "db-backups"
      path: "/v1-restore"
      accessKey: "AWS_S3_ACCESS_KEY"
      secretKey: "AWS_S3_SECRET_KEY"
    scheduledBackups:
      - name: daily-backup
        schedule: "0 0 0 * * *" # Daily at midnight
        backupOwnerReference: self
        retentionPolicy: "30d"
```

Or even better, something like:

```yaml
cnpg-cluster:
  mode: recovery
  recovery:
    method: backup
    backupName: "database-clustermarket-database-daily-backup-1683244800"
  cluster:
    instances: 1
  backups:
    provider: s3
    secret:
      name: "secret-provided-externally"
    scheduledBackups:
      - name: daily-backup
        schedule: "0 0 0 * * *" # Daily at midnight
        backupOwnerReference: self
        retentionPolicy: "30d"
```
|
@paulfantom The cluster |
Signed-off-by: apriebeAVSystem <a.priebe+git@avsystem.com>
This is a limitation that stems from this particular helm chart (which most likely trickled down from the controller), and I don't see how this affects subchart usage, especially when the parent chart has full access to the subchart values. 🤔 Regardless, thanks for going forward with this. Now we can proceed on our side :) Also, I hope we'll be able to contribute more in the future ;) |
…cloudnative-pg#346) Utilized the tpl function to evaluate the owner string as a template inside the Helm template. --------- Signed-off-by: apriebeAVSystem <a.priebe+git@avsystem.com> Co-authored-by: Itay Grudev <itay.grudev@essentim.com> Signed-off-by: Zack Stevens <zack.st7@gmail.com>
I know this is closed, but I wanted to share some inspiration from services that offer good operator/helm-chart customization and deployment. https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack |