Output CRDs separate from manifests #791
Comments
I can implement this, but my guess is that it really is not a big feature to implement.
@phillebaba You are probably right. Your proposal to have a separate attribute for the rendered CRDs does make sense; see terraform-provider-helm/helm/data_template.go, lines 541 to 550 (at 82fe05d).
I think we'll need to actually parse the YAML to pick out which ones are CRDs, though. In the meantime I figured out how to work around this with a super gnarly for expression:

```hcl
resource "null_resource" "test" {
  # Iterate over every rendered document and keep only the CRDs.
  for_each = toset(flatten([
    for k, v in data.helm_template.opa_gatekeeper.manifests :
    [
      for doc in [for part in split("---\n", v) : part if part != ""] : doc
      if yamldecode(doc).kind == "CustomResourceDefinition"
    ]
  ]))

  # Just echo each extracted CRD to prove it was picked out.
  provisioner "local-exec" {
    command = "echo \"${each.value}\""
  }
}
```
@jrhouston so I have done some work and I think I can solve two issues in one PR. I stumbled across #782 and realized I could fix that at the same time. I read through most of the Helm code that does the manifest rendering and came to the conclusion that it treats templates and CRDs very differently. My solution is basically to skip having the client include the CRDs and instead do it myself; this is possible because the Helm Chart resource includes the list of CRDs, which can easily be rendered in the same format as a Helm template output. I just need to get some tests done and then I will be able to create the PR.

I agree with you that updating the key would have its benefits, especially because changing the filename but not the content would still result in a state change in Terraform. Doing this with API group and kind, however, can also be tricky, as they may change while the actual content does not. Resources may go from v1beta1 to v1beta2 or move to a different API group without changing the spec, which could force a new resource. We have had some issues with this in the Flux Terraform provider, and I know that the kubectl provider has similar issues because it includes the API version in the key when parsing a multi-doc YAML file. Creating a standard Kubernetes resource identifier for Terraform that does not have these problems would actually be pretty helpful for a lot of Kubernetes-related providers out there.
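To illustrate the keying concern, here is a hedged sketch (not anything from the provider itself; `data.helm_template.example` is a placeholder name): if each rendered document is keyed by kind and name only, an apiVersion bump does not change the `for_each` key, so Terraform plans an in-place update instead of a replacement.

```hcl
locals {
  # Split the multipart YAML into individual documents and key each one by
  # "<kind>/<name>". Leaving apiVersion out of the key means a version bump
  # (e.g. v1beta1 -> v1) updates the existing resource instead of forcing a
  # destroy/create. Cluster-scoped resources like CRDs are assumed here;
  # namespaced resources would also need the namespace in the key.
  docs_by_stable_key = {
    for doc in [
      for part in split("---\n", data.helm_template.example.manifest) :
      part if trimspace(part) != ""
    ] :
    "${yamldecode(doc).kind}/${yamldecode(doc).metadata.name}" => doc
  }
}
```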
@phillebaba By accident I stumbled upon this thread while searching for ways to render a Helm template with the helm provider and apply it some other way. I'm even struggling with applying the whole manifest, and I think applying only the CRDs might have the same issue.
I guess this is fixed by PR #1050, which was released yesterday in v2.9.0. I tested it using the example code below:

```hcl
data "helm_template" "kube_prometheus_stack" {
  name         = "kube-prometheus-stack"
  namespace    = "kube-prometheus-stack"
  repository   = "https://prometheus-community.github.io/helm-charts"
  chart        = "kube-prometheus-stack"
  version      = "45.1.0"
  include_crds = true
}

locals {
  kube_prometheus_stack_crds_name_to_manifest_map = {
    for crd in data.helm_template.kube_prometheus_stack.crds :
    yamldecode(crd).metadata.name => crd
  }
}

resource "kubectl_manifest" "crds" {
  for_each = local.kube_prometheus_stack_crds_name_to_manifest_map

  yaml_body       = each.value
  force_conflicts = true

  # Fix 'metadata.annotations: Too long: must have at most 262144 bytes'
  server_side_apply = true
}
```
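If the chart itself is then installed with `helm_release`, one possible way to keep the ordering right is to skip Helm's own CRD installation and depend on the `kubectl_manifest` resources. This is a sketch only, assuming the resource names from the example above:

```hcl
resource "helm_release" "kube_prometheus_stack" {
  name       = "kube-prometheus-stack"
  namespace  = "kube-prometheus-stack"
  repository = "https://prometheus-community.github.io/helm-charts"
  chart      = "kube-prometheus-stack"
  version    = "45.1.0"

  # The CRDs are applied separately above, so don't let Helm install them as well.
  skip_crds = true

  # Make sure the CRDs exist before the chart's resources are created.
  depends_on = [kubectl_manifest.crds]
}
```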
Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!
Description
Currently the `helm_template` data source passes the rendered manifests through two outputs, `manifest` and `manifests`. The difference is that the first is a multipart YAML document while the second is a map containing each individual file. With the optional variable `include_crds` (false by default), the contents of the chart's CRD directory are added to the outputs as well.

Helm has made an active choice not to manage CRD upgrades when the CRD already exists in the cluster. Instead it is up to the end user to manage this, which is not optimal with large numbers of clusters. This should be possible to automate, preferably with existing, maintained providers.
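For illustration only, a minimal sketch of the two existing outputs (the chart, repository, and names are placeholders):

```hcl
data "helm_template" "example" {
  name         = "example"
  repository   = "https://charts.example.com" # placeholder repository
  chart        = "example-chart"              # placeholder chart
  include_crds = true                         # also render the chart's crds/ directory
}

output "single_multipart_yaml" {
  # One string with all rendered documents separated by '---'
  value = data.helm_template.example.manifest
}

output "rendered_files_by_path" {
  # Map of template file path => rendered content
  value = data.helm_template.example.manifests
}
```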
One option is for the `helm_template` data source to add a new output called `crds`, which is a map containing only the CRDs. This would make it possible to use other providers, such as terraform-provider-kubectl, to apply the CRD manifests before the Helm chart is installed.

Potential Terraform Configuration
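A hedged sketch of how the proposed `crds` output could be consumed (the attribute follows the proposal above; the data source name and the kubectl_manifest usage are placeholders for illustration):

```hcl
data "helm_template" "example" {
  name         = "example"
  chart        = "example-chart" # placeholder chart
  include_crds = true
}

# Apply each rendered CRD individually, before the chart itself is installed.
resource "kubectl_manifest" "crd" {
  for_each  = { for crd in data.helm_template.example.crds : yamldecode(crd).metadata.name => crd }
  yaml_body = each.value
}
```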