Single item in set #211

Closed
jmak123 opened this issue Jun 10, 2020 · 13 comments
Labels
bug Used to mark issues with provider's incorrect behavior

Comments

@jmak123 commented Jun 10, 2020

Not sure if this is more a general Terraform question or specific to this provider because of its implementation.

When a set has only one value and that value comes from interpolation, the compiled state (to be applied) renders the list with a trailing comma after each value in the set. Once deployed, though, the state file records the list without a trailing comma after the last value. So even with no change in config, rerunning apply warns of a forced replacement of the list, because the compiled version always has the trailing comma but the remote state doesn't.

config:

resource "snowflake_warehouse_grant" "fivetran_wh_usage" {
  privilege = "USAGE"
  warehouse_name = "${snowflake_warehouse.fivetran_wh.name}"
  roles = ["${snowflake_role.fivetran_role.name}"]
}

compiled state to be applied:

  # snowflake_warehouse_grant.fivetran_wh_usage must be replaced
-/+ resource "snowflake_warehouse_grant" "fivetran_wh_usage" {
      ~ id             = "FIVETRAN_WH|||USAGE" -> (known after apply)
        privilege      = "USAGE"
      ~ roles          = [ # forces replacement
          + "FIVETRAN_ROLE",
        ]
        warehouse_name = "FIVETRAN_WH"
    }

but the remote state always stays

{
      "mode": "managed",
      "type": "snowflake_warehouse_grant",
      "name": "fivetran_wh_usage",
      "provider": "provider.snowflake",
      "instances": [
        {
          "schema_version": 0,
          "attributes": {
            "id": "FIVETRAN_WH|||USAGE",
            "privilege": "USAGE",
            "roles": [
              "FIVETRAN_ROLE"
            ],
            "warehouse_name": "FIVETRAN_WH"
          },
          "private": "bnVsbA==",
          "dependencies": [
            "snowflake_role.fivetran_role",
            "snowflake_warehouse.fivetran_wh"
          ]
        }
      ]
    },

Is this a bug, and is there a way to force Terraform to be consistent about the trailing comma on the last set value across the local and remote states?

@ryanking (Contributor)

@jmak123 I think this is a general Terraform question. The Terraform configuration language, HCL, allows trailing commas at the end of lists.

The comma you see in the "compiled state to be applied" is a result of Terraform rendering the plan in a form that looks like HCL configuration.
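
For example, these two list values are identical in HCL; the trailing comma is purely cosmetic and never reaches the state file:

roles = ["FIVETRAN_ROLE"]  # no trailing comma
roles = ["FIVETRAN_ROLE",] # trailing comma; parses to the same single-element list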

It seems that what is happening here is that the FIVETRAN_ROLE needs to be added. After the apply, is the plan empty?

@jmak123 (Author) commented Jun 12, 2020

@ryanking After I applied the changes above, I reran plan and got the same changes to be applied again. Namely:

  # snowflake_warehouse_grant.fivetran_wh_usage must be replaced
-/+ resource "snowflake_warehouse_grant" "fivetran_wh_usage" {
      ~ id             = "FIVETRAN_WH|||USAGE" -> (known after apply)
        privilege      = "USAGE"
      ~ roles          = [ # forces replacement
          + "FIVETRAN_ROLE",
        ]
        warehouse_name = "FIVETRAN_WH"
    }

Terraform keeps asking to apply this change no matter how many times I apply it. My suspicion is that there is a trailing comma here but not when it gets written into the state file, hence there's always a difference between the plan output and what's in tfstate.

I wonder if anyone else has experienced this? If it were an HCL issue, I would expect to find more reports of it in the forums.

@nutboltz

I have the same issue! It is only a problem on warehouses that have different kinds of grants on them, though.

  # snowflake_warehouse_grant.load_wh_modify must be replaced
-/+ resource "snowflake_warehouse_grant" "load_wh_modify" {
      ~ id             = "LOAD_WH|||MODIFY" -> (known after apply)
        privilege      = "MODIFY"
      ~ roles          = [ # forces replacement
          + "PC_FIVETRAN_ROLE",
        ]
        warehouse_name = "LOAD_WH"
    }

  # snowflake_warehouse_grant.load_wh_monitor must be replaced
-/+ resource "snowflake_warehouse_grant" "load_wh_monitor" {
      ~ id             = "LOAD_WH|||MONITOR" -> (known after apply)
        privilege      = "MONITOR"
      ~ roles          = [ # forces replacement
          + "PC_FIVETRAN_ROLE",
        ]
        warehouse_name = "LOAD_WH"
    }

  # snowflake_warehouse_grant.load_wh_operate must be replaced
-/+ resource "snowflake_warehouse_grant" "load_wh_operate" {
      ~ id             = "LOAD_WH|||OPERATE" -> (known after apply)
        privilege      = "OPERATE"
      ~ roles          = [ # forces replacement
          + "PC_FIVETRAN_ROLE",
        ]
        warehouse_name = "LOAD_WH"
    }

  # snowflake_warehouse_grant.load_wh_usage must be replaced
-/+ resource "snowflake_warehouse_grant" "load_wh_usage" {
      ~ id             = "LOAD_WH|||USAGE" -> (known after apply)
        privilege      = "USAGE"
      ~ roles          = [ # forces replacement
          + "PC_FIVETRAN_ROLE",
            "SFDC_ROLE",
        ]
        warehouse_name = "LOAD_WH"
    }

I have applied the plan successfully, but the next time I run terraform plan the change still shows up.

@ryanking added the bug label and removed the feature-request and needs-triage labels Jul 29, 2020
@igungor (Contributor) commented Aug 7, 2020

I'm experiencing the same problem with Terraform 0.13.0 and Snowflake provider 0.13.2. For the sake of clarity, I used the -target flag to demonstrate the problem; the problem is still there if that flag is not used.

$ tf plan -target='module.warehouse_bi.snowflake_warehouse_grant.warehouse_grant_monitor[0]' -out=tfplan

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

module.role_data_admin.snowflake_role.role: Refreshing state... [id=DATA_ADMIN]
module.warehouse_bi.snowflake_warehouse.warehouse: Refreshing state... [id=BI]
module.warehouse_bi.snowflake_warehouse_grant.warehouse_grant_monitor[0]: Refreshing state... [id=BI|||MONITOR]

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # module.warehouse_bi.snowflake_warehouse_grant.warehouse_grant_monitor[0] will be updated in-place
  ~ resource "snowflake_warehouse_grant" "warehouse_grant_monitor" {
        id             = "BI|||MONITOR"
        privilege      = "MONITOR"
      ~ roles          = [
          + "DATA_ADMIN",
        ]
        warehouse_name = "BI"
    }

Plan: 0 to add, 1 to change, 0 to destroy.
...

$ tf apply -target='module.warehouse_bi.snowflake_warehouse_grant.warehouse_grant_monitor[0]' tfplan

module.warehouse_bi.snowflake_warehouse_grant.warehouse_grant_monitor[0]: Modifying... [id=BI|||MONITOR]
module.warehouse_bi.snowflake_warehouse_grant.warehouse_grant_monitor[0]: Modifications complete after 2s [id=BI|||MONITOR]

...

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

$ tf plan -target='module.warehouse_bi.snowflake_warehouse_grant.warehouse_grant_monitor[0]' -out=tfplan

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

module.role_data_admin.snowflake_role.role: Refreshing state... [id=DATA_ADMIN]
module.warehouse_bi.snowflake_warehouse.warehouse: Refreshing state... [id=BI]
module.warehouse_bi.snowflake_warehouse_grant.warehouse_grant_monitor[0]: Refreshing state... [id=BI|||MONITOR]

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # module.warehouse_bi.snowflake_warehouse_grant.warehouse_grant_monitor[0] will be updated in-place
  ~ resource "snowflake_warehouse_grant" "warehouse_grant_monitor" {
        id             = "BI|||MONITOR"
        privilege      = "MONITOR"
      ~ roles          = [
          + "DATA_ADMIN",
        ]
        warehouse_name = "BI"
    }

Plan: 0 to add, 1 to change, 0 to destroy.

...

@ryanking self-assigned this Aug 7, 2020
@igungor (Contributor) commented Aug 26, 2020

Found the problem.

If a role is granted all the privileges for a given resource (except ALL and OWNERSHIP), the provider interprets the role as if it had the "ALL" Snowflake grant, which is actually a feature. The problem is that the provider then "forgets" the actual privilege, making Terraform "think" the privilege still needs to be granted.

For example, if a role has all the grants available on a warehouse, its USAGE grant will be dropped.

At least that is the problem in my case. The easiest way to reproduce it is probably with warehouse grants, as in the OP's example: give a role the USAGE, OPERATE, MONITOR, and MODIFY grants on a warehouse, apply the changes, then run terraform plan.
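
A minimal repro sketch, assuming a warehouse snowflake_warehouse.wh and a role snowflake_role.reporter already exist (these names are illustrative only):

resource "snowflake_warehouse_grant" "wh_usage" {
  privilege      = "USAGE"
  warehouse_name = snowflake_warehouse.wh.name
  roles          = [snowflake_role.reporter.name]
}

resource "snowflake_warehouse_grant" "wh_operate" {
  privilege      = "OPERATE"
  warehouse_name = snowflake_warehouse.wh.name
  roles          = [snowflake_role.reporter.name]
}

resource "snowflake_warehouse_grant" "wh_monitor" {
  privilege      = "MONITOR"
  warehouse_name = snowflake_warehouse.wh.name
  roles          = [snowflake_role.reporter.name]
}

resource "snowflake_warehouse_grant" "wh_modify" {
  privilege      = "MODIFY"
  warehouse_name = snowflake_warehouse.wh.name
  roles          = [snowflake_role.reporter.name]
}

After terraform apply succeeds, a second terraform plan should keep showing the roles diff described above, because the full privilege set is read back as ALL and the individual grant goes missing from state.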

@slocke716

Any movement on this? We are having substantial problems with it.

@robbruce (Contributor) commented Sep 7, 2020

We face a side issue in the way the grant resources have been set up: you cannot declare the same privilege on the same resource in two different locations. The most common use case is snowflake_role_grants and setting up a role hierarchy across multiple Terraform workspaces (or Terragrunt).

Initial creation is just fine, as it simply adds the role, but a re-apply removes the other role.

For example

base/main.tf

resource "snowflake_role" "this" {
  name = "BASE_ROLE"
}

subrole1/main.tf

resource "snowflake_role" "this" {
  name = "SUB_ROLE_1"
}

resource "snowflake_role_grants" "this" {
  # or some other means of reading base
  role_name = data.terraform_remote_state.base.outputs.snowflake_role_name
  roles = toset([snowflake_role.this.name])
}

subrole2/main.tf

resource "snowflake_role" "this" {
  name = "SUB_ROLE_2"
}

resource "snowflake_role_grants" "this" {
  # or some other means of reading base
  role_name = data.terraform_remote_state.base.outputs.snowflake_role_name
  roles = toset([snowflake_role.this.name])
}

Calling terraform apply on the subrole1 or subrole2 directories multiple times results in the opposing role being removed.

@ryanking - any objections if we were to open a pull request that accepts a single role (or user, for snowflake_role_grants) on all of the grant resources? I completely get the logic that if a list is provided then the resource manages all grants to the set provided; alternatively, new resources could allow managing grants individually (much the same as aws_network_acl_rule).

@louis-vines

Any thoughts on this @robbruce? I for one would really like the interface to behave the way you are describing here!

@ajwootto

I'm more of a fan of the idea of new resources that specifically take a single role etc.

The idea of changing behaviour based on whether the input variable is a string vs. a list seems like it would be quite prone to accidents.

Having singular-named resources that accept a string seems a bit clearer, so you would have both snowflake_role_grants and snowflake_role_grant.

I also like the way the AWS provider handles this kind of relationship, as in the above-mentioned aws_network_acl_rule.
It lets you choose between specifying each rule as a separate resource, or defining all the rules in one place inline on the parent resource (in this case, the aws_network_acl resource). I can see this working well for this case too, with a "grants" field on the parent resources.

For the roles case, this would look like:

resource "snowflake_role" "this" {
  name = "SOME_ROLE"
  role_grants = ["SOME_ROLE2"]
  user_grants = ["SOME_USER2"]
}

Or you have individually defined grant resources, one for each user and role being granted; a hypothetical sketch follows below.
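
A hypothetical shape for such a singular resource (it did not exist in the provider at the time; the name and arguments here are illustrative only):

resource "snowflake_role_grant" "some_role_to_role2" {
  role_name = "SOME_ROLE"  # the role being granted
  role      = "SOME_ROLE2" # a single grantee role
}

resource "snowflake_role_grant" "some_role_to_user2" {
  role_name = "SOME_ROLE"
  user      = "SOME_USER2" # or a single grantee user
}

Each grant would then have its own state entry, so two workspaces could grant the same role to different grantees without clobbering each other.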

I think the AWS-style approach is most in line with what I've seen in other TF providers, but it's obviously much more backwards-incompatible than just creating a new set of singular resources.

@raeray commented Apr 6, 2022

Is there a workaround for this issue?

@kaufmannie

Struggling with the same issue on our side. I believe this linked issue is the same problem as well. As other users mentioned, it appears that a trailing comma is generated during the plan but is not present in the state file, so Terraform keeps asking to apply the same change even though it has already been applied. Curious if there is any workaround.
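
One stopgap that may help until the provider is fixed (it is not a real fix: Terraform stops reconciling the attribute entirely) is to ignore the perpetually diffing attribute. A sketch using the OP's resource:

resource "snowflake_warehouse_grant" "fivetran_wh_usage" {
  privilege      = "USAGE"
  warehouse_name = snowflake_warehouse.fivetran_wh.name
  roles          = [snowflake_role.fivetran_role.name]

  lifecycle {
    # Suppress the perpetual diff on roles; note that real drift on
    # this attribute will no longer be detected either.
    ignore_changes = [roles]
  }
}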

@KPGR commented Mar 14, 2023

Having the same issue. Any updates on this?

@sfc-gh-asawicki (Collaborator)

We are closing this issue as part of the cleanup described in the announcement. If you believe the issue is still valid in v0.89.0, please open a new ticket.

@sfc-gh-asawicki closed this as not planned Apr 30, 2024