Should we expect adaptability of KubeApps with respect to dynamic secret management? #10015
Comments
Hi there @ShreyShah977. This question probably belongs on the Kubeapps repo itself at https://github.com/vmware-tanzu/kubeapps/, rather than here (the upstream Bitnami Kubeapps chart), but GitHub doesn't let me transfer an issue across organisations. So I'll try to answer here (or rather, ask questions here :) ) for now.
It's not clear to me whether your Kustomize tooling does the first or the second of the following:

1. it updates the Kubeapps resources to reference a different (newly generated) secret, or
2. it updates the data of the existing secret in place, without changing the resource spec that references it.
If you mean the first, then the only way I could see the behaviour you describe happening is if you have installed Kubeapps with some tooling that maintains/reconciles the Kubeapps deployment (such as Carvel or Flux, or, for that matter, ArgoCD, which you mention). If you've done a plain Helm install of Kubeapps, then there should be no problem if your tooling updates a Kubeapps resource to point at a different secret: like any other resource, k8s would reconcile the change, creating new pods or whatever. You should be able to check whether ArgoCD or any other tooling is reverting that change to the referenced secret on a resource. I'm assuming this is not the case, because you'd be well aware of it.

If you mean the second, then yes, that's the expected behaviour, not of Kubeapps but of any Kubernetes resource. If the spec for the resource hasn't changed (it still refers to the same secret and doesn't know that the secret's data has changed), there is nothing to reconcile. You can force the updated secret to be used by deleting the pods, and the deployment will recreate them mounting the updated secret, which I think is what you're referring to with ArgoCD's sync with prune option?
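For reference, a minimal sketch of what forcing that looks like from the command line; the namespace and the label selector are assumptions and depend on your release name:

```shell
# Assumed namespace and release label; adjust to your install.
# Option A: restart the deployments so new pods mount the updated secret.
kubectl -n kubeapps rollout restart deployment -l app.kubernetes.io/instance=kubeapps

# Option B: delete the pods and let the deployments recreate them.
kubectl -n kubeapps delete pods -l app.kubernetes.io/instance=kubeapps
```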
If you are seeing the pods cycled then yes, like any other k8s pod or resource, the Kubeapps ones should pick up the updated secret... unless there are persistent volumes involved (so the secret data is already stored on the persistent volume, which isn't updated by your sync/prune). Is the issue you're seeing related to the PostgreSQL install? If so, then this is probably the culprit. See the following link for more info: https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues/#persistence-volumes-pvs-retained-from-previous-releases

In your case, you probably don't want to be deleting persistent volumes etc. The easiest solution may be to run the Kubeapps PostgreSQL without persistence (it's only used as a cache of package metadata, so persistence isn't really necessary); there's a values sketch at the end of this comment.

Anyway, if it's none of the above, please let me know which credential is not being updated. Thanks!
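For reference, disabling persistence for the bundled PostgreSQL would look roughly like this in the chart values. This is only a sketch: the exact keys depend on the PostgreSQL subchart version bundled with your chart release, so check its values.yaml.

```yaml
# Hypothetical values fragment: run the bundled PostgreSQL without a persistent volume,
# so cached package metadata (and any stored credentials) don't outlive the pods.
# Key names follow recent Bitnami PostgreSQL subcharts and may differ between versions.
postgresql:
  primary:
    persistence:
      enabled: false
```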
Hi @absoludity, thanks for your response and for clarifying KubeApps' capabilities as well as the general expected behaviour. To provide some context, we also agree that a secret value changed in an external management platform shouldn't be expected to propagate on the fly. We just wanted to confirm this and document the additional step of restarting the application as a whole (via the CI/CD controller) in order to propagate the correct auth when the value is changed. Appreciate your sentiments & cheers!
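For reference, the ArgoCD side of that sync/restart is roughly the following; the application name kubeapps is an assumption, and these commands are a sketch rather than the exact pipeline used here:

```shell
# Assumed ArgoCD application name "kubeapps".
# Sync and prune resources that are no longer in the rendered manifests.
argocd app sync kubeapps --prune

# Force ArgoCD to re-read live cluster state before comparing again.
argocd app get kubeapps --hard-refresh
```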
Name and Version
bitnami/kubeapps:8.0.8
What steps will reproduce the bug?
Hello!
I'd like to start with a bit of context: we currently deploy KubeApps in a GCP cluster, and it is managed by a CI/CD UI tool, ArgoCD. For secret management, we run Vault on a separate cluster and have a script to pull in the required secrets, followed by another tool (Kustomize) to generate secret resources containing those values.

An interesting behaviour we documented is that when we update a secret value within Vault, we expect the relevant pods, deployments, etc. to become out of date. This is normally resolved by running the sync with prune command to purge out-of-date resources and regenerate specific instances.
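For illustration, assuming the secrets are produced with Kustomize's secretGenerator, the definition looks something like the sketch below; the names, keys, and file paths are placeholders rather than the actual configuration. Note that by default secretGenerator appends a content hash to the generated name, which is what normally forces referencing workloads to roll; with a stable name (as a Helm chart's existingSecret reference requires), an in-place data change doesn't alter the pod spec.

```yaml
# Hypothetical kustomization.yaml fragment; names, keys, and files are placeholders.
# Keeping the secret name stable (disableNameSuffixHash) means referencing pods are
# NOT rolled automatically when the secret data changes.
generatorOptions:
  disableNameSuffixHash: true

secretGenerator:
  - name: kubeapps-db-credentials
    type: Opaque
    files:
      - postgres-password=secrets/postgres-password.txt
```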
However, it appears that certain KubeApps components do not recognize the configuration change and therefore still rely on previous secret resources.
Currently our only solution is to regenerate the namespace / restart the app itself in order to synchronize all resources and config settings to their respective states.
From the behaviour described above, I'd like to ask: is this expected behaviour from KubeApps, or should we expect to be able to modify secrets and have the configuration adapt to the change accordingly?
Thanks and appreciate your response.
Are you using any custom parameters or values?
We have existingSecret values pointing at the secret generated by Kustomize.
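As an illustration only, such an override might look like the following; the exact value keys vary between chart and subchart versions and should be checked against the chart's values.yaml:

```yaml
# Hypothetical values fragment pointing the bundled PostgreSQL at a pre-created secret.
# The key names follow the Bitnami PostgreSQL subchart convention and are not confirmed
# against the exact chart version used in this issue.
postgresql:
  auth:
    existingSecret: kubeapps-db-credentials
```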
What is the expected behavior?
I'd like clarification on whether the expected behaviour of KubeApps includes adapting to secret changes within the configuration and propagating the updated values throughout the rest of the app.
What do you see instead?
Upon changing the secret, the rest of the app doesn't adapt to the change and still expects a resource that doesn't exist.
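For reference, a couple of commands that can help confirm which secret the workloads are still referencing; the namespace is an assumption:

```shell
# Assumed namespace "kubeapps"; adjust to your install.
# 1) Which secret names do the deployments reference in their pod templates?
kubectl -n kubeapps get deployments -o yaml | grep -E -A 2 'secretName|secretKeyRef'

# 2) Warning events usually name the exact secret a pod cannot find or mount.
kubectl -n kubeapps get events --field-selector type=Warning --sort-by=.lastTimestamp
```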
Additional information
No response