
Helm release metadata computation causes helm_release to be updated in-place #1236

Closed
sivanov-nuodb opened this issue Aug 25, 2023 · 10 comments · Fixed by #1246

@sivanov-nuodb

sivanov-nuodb commented Aug 25, 2023

Terraform, Provider, Kubernetes and Helm Versions

Terraform version: 1.5.5
Provider version: v2.11.0
Kubernetes version: v1.24.16-eks-2d98532

Affected Resource(s)

  • helm_release

Terraform Configuration Files

variable "cp_chart_repository" {
  type        = string
  description = "The Helm charts repository"
}

variable "cp_chart_version" {
  type        = string
  description = "The Helm charts version"
  default     = "2.0.0"
}

variable "cp_namespace_name" {
  type        = string
  description = "The name of the namespace where CP will be installed"
  default     = "cp-system"
}

locals {
  crd_name      = "cp-crd"
}

resource "helm_release" "crd" {
  name       = local.crd_name
  namespace  = var.cp_namespace_name
  repository = var.cp_chart_repository
  chart      = local.crd_name
  version    = var.cp_chart_version

  create_namespace = true
  atomic           = true
}

Debug Output

2023-08-25T10:57:54.950+0300 [TRACE] NodeAbstractResouceInstance.writeResourceInstanceState to refreshState for helm_release.crd
2023-08-25T10:57:54.950+0300 [TRACE] NodeAbstractResouceInstance.writeResourceInstanceState: writing state object for helm_release.crd
2023-08-25T10:57:54.950+0300 [TRACE] Re-validating config for "helm_release.crd"
2023-08-25T10:57:54.950+0300 [TRACE] GRPCProvider: ValidateResourceConfig
2023-08-25T10:57:54.951+0300 [TRACE] provider.terraform-provider-helm_v2.11.0_x5: Received request: tf_proto_version=5.3 tf_req_id=5e62752d-e3a3-cda9-f245-af388c24dd5b tf_rpc=ValidateResourceTypeConfig @caller=github.com/hashicorp/terraform-plugin-go@v0.14.3/tfprotov5/tf5server/server.go:679 tf_resource_type=helm_release @module=sdk.proto tf_provider_addr=provider timestamp=2023-08-25T10:57:54.951+0300
2023-08-25T10:57:54.951+0300 [TRACE] provider.terraform-provider-helm_v2.11.0_x5: Sending request downstream: tf_provider_addr=provider tf_req_id=5e62752d-e3a3-cda9-f245-af388c24dd5b tf_rpc=ValidateResourceTypeConfig @caller=github.com/hashicorp/terraform-plugin-go@v0.14.3/tfprotov5/internal/tf5serverlogging/downstream_request.go:17 tf_proto_version=5.3 @module=sdk.proto tf_resource_type=helm_release timestamp=2023-08-25T10:57:54.951+0300
2023-08-25T10:57:54.951+0300 [TRACE] provider.terraform-provider-helm_v2.11.0_x5: Calling downstream: tf_rpc=ValidateResourceTypeConfig @caller=github.com/hashicorp/terraform-plugin-sdk/v2@v2.26.1/helper/schema/grpc_provider.go:245 @module=sdk.helper_schema tf_provider_addr=provider tf_req_id=5e62752d-e3a3-cda9-f245-af388c24dd5b tf_resource_type=helm_release timestamp=2023-08-25T10:57:54.951+0300
2023-08-25T10:57:54.951+0300 [TRACE] provider.terraform-provider-helm_v2.11.0_x5: Called downstream: @caller=github.com/hashicorp/terraform-plugin-sdk/v2@v2.26.1/helper/schema/grpc_provider.go:247 @module=sdk.helper_schema tf_req_id=5e62752d-e3a3-cda9-f245-af388c24dd5b tf_resource_type=helm_release tf_rpc=ValidateResourceTypeConfig tf_provider_addr=provider timestamp=2023-08-25T10:57:54.951+0300
2023-08-25T10:57:54.951+0300 [TRACE] provider.terraform-provider-helm_v2.11.0_x5: Received downstream response: diagnostic_error_count=0 diagnostic_warning_count=0 tf_resource_type=helm_release tf_req_duration_ms=0 tf_req_id=5e62752d-e3a3-cda9-f245-af388c24dd5b tf_rpc=ValidateResourceTypeConfig @caller=github.com/hashicorp/terraform-plugin-go@v0.14.3/tfprotov5/internal/tf5serverlogging/downstream_request.go:37 @module=sdk.proto tf_proto_version=5.3 tf_provider_addr=provider timestamp=2023-08-25T10:57:54.951+0300
2023-08-25T10:57:54.951+0300 [TRACE] provider.terraform-provider-helm_v2.11.0_x5: Served request: tf_resource_type=helm_release tf_rpc=ValidateResourceTypeConfig tf_proto_version=5.3 tf_req_id=5e62752d-e3a3-cda9-f245-af388c24dd5b tf_provider_addr=provider @caller=github.com/hashicorp/terraform-plugin-go@v0.14.3/tfprotov5/tf5server/server.go:699 @module=sdk.proto timestamp=2023-08-25T10:57:54.951+0300
2023-08-25T10:57:54.951+0300 [TRACE] GRPCProvider: PlanResourceChange
2023-08-25T10:57:54.952+0300 [TRACE] provider.terraform-provider-helm_v2.11.0_x5: Received request: @caller=github.com/hashicorp/terraform-plugin-go@v0.14.3/tfprotov5/tf5server/server.go:770 @module=sdk.proto tf_resource_type=helm_release tf_proto_version=5.3 tf_provider_addr=provider tf_req_id=191f8861-a3d9-c3fe-b52f-735886547f50 tf_rpc=PlanResourceChange timestamp=2023-08-25T10:57:54.952+0300
2023-08-25T10:57:54.952+0300 [TRACE] provider.terraform-provider-helm_v2.11.0_x5: Sending request downstream: @module=sdk.proto tf_provider_addr=provider tf_rpc=PlanResourceChange @caller=github.com/hashicorp/terraform-plugin-go@v0.14.3/tfprotov5/internal/tf5serverlogging/downstream_request.go:17 tf_proto_version=5.3 tf_req_id=191f8861-a3d9-c3fe-b52f-735886547f50 tf_resource_type=helm_release timestamp=2023-08-25T10:57:54.952+0300
2023-08-25T10:57:54.953+0300 [INFO]  provider.terraform-provider-helm_v2.11.0_x5: 2023/08/25 10:57:54 [DEBUG] A computed value with the empty string as the new value and a non-empty old value was found. Interpreting the empty string as "unset" to align with legacy behavior.: timestamp=2023-08-25T10:57:54.953+0300
2023-08-25T10:57:54.954+0300 [TRACE] provider.terraform-provider-helm_v2.11.0_x5: Calling downstream: @caller=github.com/hashicorp/terraform-plugin-sdk/v2@v2.26.1/helper/schema/schema.go:698 tf_req_id=191f8861-a3d9-c3fe-b52f-735886547f50 tf_rpc=PlanResourceChange @module=sdk.helper_schema tf_provider_addr=provider tf_resource_type=helm_release timestamp=2023-08-25T10:57:54.954+0300
2023-08-25T10:57:54.954+0300 [INFO]  provider.terraform-provider-helm_v2.11.0_x5: 2023/08/25 10:57:54 [DEBUG] [resourceDiff: cp-crd] Start: timestamp=2023-08-25T10:57:54.954+0300
2023-08-25T10:57:55.645+0300 [INFO]  provider.terraform-provider-helm_v2.11.0_x5: 2023/08/25 10:57:55 [DEBUG] [INFO] GetHelmConfiguration start: timestamp=2023-08-25T10:57:55.645+0300
2023-08-25T10:57:55.645+0300 [INFO]  provider.terraform-provider-helm_v2.11.0_x5: 2023/08/25 10:57:55 [INFO] Successfully initialized kubernetes config: timestamp=2023-08-25T10:57:55.645+0300
2023-08-25T10:57:56.664+0300 [TRACE] provider.terraform-provider-helm_v2.11.0_x5: Called downstream: @module=sdk.helper_schema tf_req_id=395ce83c-f24e-46cb-0b06-b4a5a8c03a2b tf_resource_type=helm_release tf_rpc=ReadResource @caller=github.com/hashicorp/terraform-plugin-sdk/v2@v2.26.1/helper/schema/resource.go:1016 tf_provider_addr=provider timestamp=2023-08-25T10:57:56.664+0300
2023-08-25T10:57:56.664+0300 [TRACE] provider.terraform-provider-helm_v2.11.0_x5: Called downstream: tf_provider_addr=provider @caller=github.com/hashicorp/terraform-plugin-sdk/v2@v2.26.1/helper/schema/schema.go:700 @module=sdk.helper_schema tf_req_id=191f8861-a3d9-c3fe-b52f-735886547f50 tf_resource_type=helm_release tf_rpc=PlanResourceChange timestamp=2023-08-25T10:57:56.663+0300
2023-08-25T10:57:56.664+0300 [TRACE] provider.terraform-provider-helm_v2.11.0_x5: Received downstream response: diagnostic_error_count=0 diagnostic_warning_count=0 tf_proto_version=5.3 @caller=github.com/hashicorp/terraform-plugin-go@v0.14.3/tfprotov5/internal/tf5serverlogging/downstream_request.go:37 @module=sdk.proto tf_provider_addr=provider tf_req_duration_ms=4438 tf_req_id=395ce83c-f24e-46cb-0b06-b4a5a8c03a2b tf_rpc=ReadResource tf_resource_type=helm_release timestamp=2023-08-25T10:57:56.664+0300
2023-08-25T10:57:56.664+0300 [TRACE] provider.terraform-provider-helm_v2.11.0_x5: Received downstream response: diagnostic_error_count=0 tf_provider_addr=provider tf_rpc=PlanResourceChange tf_req_id=191f8861-a3d9-c3fe-b52f-735886547f50 @caller=github.com/hashicorp/terraform-plugin-go@v0.14.3/tfprotov5/internal/tf5serverlogging/downstream_request.go:37 @module=sdk.proto diagnostic_warning_count=0 tf_proto_version=5.3 tf_req_duration_ms=1712 tf_resource_type=helm_release timestamp=2023-08-25T10:57:56.664+0300
2023-08-25T10:57:56.664+0300 [TRACE] provider.terraform-provider-helm_v2.11.0_x5: Served request: tf_proto_version=5.3 tf_req_id=395ce83c-f24e-46cb-0b06-b4a5a8c03a2b tf_resource_type=helm_release tf_rpc=ReadResource @caller=github.com/hashicorp/terraform-plugin-go@v0.14.3/tfprotov5/tf5server/server.go:761 @module=sdk.proto tf_provider_addr=provider timestamp=2023-08-25T10:57:56.664+0300
2023-08-25T10:57:56.664+0300 [TRACE] provider.terraform-provider-helm_v2.11.0_x5: Served request: tf_proto_version=5.3 tf_resource_type=helm_release @caller=github.com/hashicorp/terraform-plugin-go@v0.14.3/tfprotov5/tf5server/server.go:796 @module=sdk.proto tf_provider_addr=provider tf_req_id=191f8861-a3d9-c3fe-b52f-735886547f50 tf_rpc=PlanResourceChange timestamp=2023-08-25T10:57:56.664+0300
2023-08-25T10:57:56.665+0300 [WARN]  Provider "registry.terraform.io/hashicorp/helm" produced an invalid plan for helm_release.crd, but we are tolerating it because it is using the legacy plugin SDK.
    The following problems may be the cause of any confusing errors from downstream operations:
      - .force_update: planned value cty.False for a non-computed attribute
      - .reuse_values: planned value cty.False for a non-computed attribute
      - .dependency_update: planned value cty.False for a non-computed attribute
      - .disable_openapi_validation: planned value cty.False for a non-computed attribute
      - .verify: planned value cty.False for a non-computed attribute
      - .disable_crd_hooks: planned value cty.False for a non-computed attribute
      - .disable_webhooks: planned value cty.False for a non-computed attribute
      - .lint: planned value cty.False for a non-computed attribute
      - .cleanup_on_fail: planned value cty.False for a non-computed attribute
      - .render_subchart_notes: planned value cty.True for a non-computed attribute
      - .replace: planned value cty.False for a non-computed attribute
      - .wait: planned value cty.True for a non-computed attribute
      - .wait_for_jobs: planned value cty.False for a non-computed attribute
      - .max_history: planned value cty.NumberIntVal(0) for a non-computed attribute
      - .skip_crds: planned value cty.False for a non-computed attribute
      - .timeout: planned value cty.NumberIntVal(300) for a non-computed attribute
      - .pass_credentials: planned value cty.False for a non-computed attribute
      - .recreate_pods: planned value cty.False for a non-computed attribute
      - .reset_values: planned value cty.False for a non-computed attribute
2023-08-25T10:57:56.665+0300 [TRACE] writeChange: recorded Update change for helm_release.crd
2023-08-25T10:57:56.665+0300 [TRACE] NodeAbstractResouceInstance.writeResourceInstanceState to workingState for helm_release.crd
2023-08-25T10:57:56.665+0300 [TRACE] NodeAbstractResouceInstance.writeResourceInstanceState: writing state object for helm_release.crd
2023-08-25T10:57:56.665+0300 [TRACE] vertex "helm_release.crd": visit complete

Panic Output

N/A

Steps to Reproduce

The helm_release resource is marked for in-place update even though the input variables don't change.

  1. Deploy the release using terraform apply for the first time
  2. Repeat terraform apply without changing the configuration

Downgrading from 2.11.0 (or 2.10.1) to 2.9.0 causes the issue to go away.

Expected Behavior

Terraform should report no changes in the plan.

Actual Behavior

Instead, the resource is updated in-place due to a metadata change.

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # helm_release.crd will be updated in-place
  ~ resource "helm_release" "crd" {
        id                         = "cp-crd"
      ~ metadata                   = [
          - {
              - app_version = "2.0.0"
              - chart       = "cp-crd"
              - name        = "cp-crd"
              - namespace   = "cp-system"
              - revision    = 5
              - values      = jsonencode({})
              - version     = "2.0.0"
            },
        ] -> (known after apply)
        name                       = "cp-crd"
        # (26 unchanged attributes hidden)
    }

Important Factoids

The Helm chart is fetched from AWS ECR configured as OCI repository.

References

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@sheneska
Contributor

sheneska commented Aug 30, 2023

Hi @sivanov-nuodb, I tried to reproduce this issue but I was not able to. Could you please provide us with the output of helm -n cp-system status cp-crd ?

@sivanov-nuodb
Author

sivanov-nuodb commented Aug 31, 2023

Hi @sivanov-nuodb, I tried to reproduce this issue but I was not able to. Could you please provide us with the output of helm -n cp-system status cp-crd ?

NAME: cp-crd
LAST DEPLOYED: Wed Aug 30 21:41:32 2023
NAMESPACE: cp-system
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **

After some more investigation, it turned out that the problem is that an empty cp_chart_version is supplied to the resource. The cp_chart_version module variable was set incorrectly, which caused the default value (an empty string) to be used (there is a Terragrunt layer on top).

Is it correct for the provider to mark the resource for an in-place update if the latest Helm chart digest/version matches the one installed in the cluster?

@jrhouston
Contributor

Is it correct for the provider to mark the resource for an in-place update if the latest Helm chart digest/version matches the one installed in the cluster?

Yeah, this is not the correct behaviour – I was able to reproduce this just by explicitly setting the version attribute to an empty string. The issue isn't present when we simply omit the version attribute.

We need to add some logic to the custom diff to ignore the case where version = "", here:

if !useChartVersion(d.Get("chart").(string), d.Get("repository").(string)) {
	if d.HasChange("version") {
		// only recompute metadata if the version actually changes
		// chart versioning is not consistent and some will add
		// a `v` prefix to the chart version after installation
		old, new := d.GetChange("version")
		oldVersion := strings.TrimPrefix(old.(string), "v")
		newVersion := strings.TrimPrefix(new.(string), "v")
		if oldVersion != newVersion {
			d.SetNewComputed("metadata")
		}
	}
}
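A minimal, self-contained sketch of how that guard might look. This is not the provider's actual code: shouldRecomputeMetadata is a hypothetical helper, and it assumes an empty new version should be interpreted as "unset" rather than as a change:

```go
package main

import (
	"fmt"
	"strings"
)

// shouldRecomputeMetadata reports whether the metadata attribute should be
// marked as recomputed, given the old and new values of "version". An empty
// new version is treated as "unset", so it never forces a diff on its own,
// and a leading "v" is ignored because chart versioning is inconsistent.
func shouldRecomputeMetadata(oldVersion, newVersion string) bool {
	if newVersion == "" {
		return false // empty string means "unset"; keep existing metadata
	}
	return strings.TrimPrefix(oldVersion, "v") != strings.TrimPrefix(newVersion, "v")
}

func main() {
	fmt.Println(shouldRecomputeMetadata("2.0.0", ""))       // false: empty == unset
	fmt.Println(shouldRecomputeMetadata("v2.0.0", "2.0.0")) // false: "v" prefix ignored
	fmt.Println(shouldRecomputeMetadata("2.0.0", "2.1.0"))  // true: real upgrade
}
```

With that guard in place, re-running terraform apply with version = "" would no longer mark metadata as (known after apply).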

@jgournet

jgournet commented Sep 1, 2023

Thank you @jrhouston! I'm eagerly waiting for such a PR, as there is a "related" bug in the workaround:

Description:
As a workaround for this issue, I implemented:

resource "helm_release" "this" {
  count   = local.count
  name    = var.helm_release_name
  version = var.helm_release_version == "" ? null : var.helm_release_version
  [...]
}

It turns out that when an explicit null is passed, Terraform simply stops upgrading the Helm chart, even when a newer version than the one currently installed is available.

=> Fixing the issue from this ticket will allow me to remove this buggy workaround ;) (but then again, this might be worth a ticket of its own?)

@jgournet

jgournet commented Sep 6, 2023

Thank you @jrhouston for fixing this!
May I ask when this is scheduled to be released?
I'm eagerly waiting for this fix :)

@YawataNoKami

Thanks for the fix, hoping for a release soon.
Can't wait to get this fix :)

@meysam81

Is there any movement on this? There's another similar issue: #1150.
With the version set, helm_release always tries to replace the release, even when there is no newer chart version. Consequently, if maxUnavailable is not set appropriately, this can cause downtime!

@lorenzoiuri

I'm using version v2.12.1 of the provider but I'm still encountering the issue.

@YawataNoKami

I'm using version v2.12.1 of the provider but I'm still encountering the issue.

Yes, I tried that as well, but the behavior remains...

@Noel-Jones

I too came back to this and tried upgrading to 2.12.1. After reading the above, I tried changing my version (for SonarQube) from

version = "10.1.0"

to

version = "10.1.0+628"

This has resolved the drift. I found the full version using helm list and subsequently found the supported versions on Artifact Hub. Sorry if this seems specific to SonarQube, but I imagine it may help others fix the drift.
