
chore(deps): update terraform cloudposse/s3-bucket/aws to v4 #40

Merged
merged 2 commits into main from renovate/cloudposse-s3-bucket-aws-4.x on Mar 3, 2024

Conversation

renovate[bot] (Contributor) commented Sep 22, 2023

Mend Renovate

This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| cloudposse/s3-bucket/aws (source) | module | major | 3.1.3 -> 4.0.1 |

Release Notes

cloudposse/terraform-aws-s3-bucket (cloudposse/s3-bucket/aws)

v4.0.1

Compare Source

🐛 Bug Fixes
Fix bug in setting dynamic `encryption_configuration` value @LawrenceWarren (#206)
what
  • When trying to create an S3 bucket, the following error is encountered:
Error: Invalid dynamic for_each value

  on .terraform/main.tf line 225, in resource "aws_s3_bucket_replication_configuration" "default":
 225:           for_each = try(compact(concat(
 226:             [try(rule.value.destination.encryption_configuration.replica_kms_key_id, "")],
 227:             [try(rule.value.destination.replica_kms_key_id, "")]
 228:           ))[0], [])
    ├────────────────
    │ rule.value.destination.encryption_configuration is null
    │ rule.value.destination.replica_kms_key_id is "arn:aws:kms:my-region:my-account-id:my-key-alias"

Cannot use a string value in for_each. An iterable collection is required.
  • This is caused in my case by having s3_replication_rules.destination.encryption_configuration.replica_kms_key_id set.
why
  • There is a bug when trying to create an S3 bucket, which causes an error that stops the bucket from being created

    • Basically, there are two attributes that do the same thing (for backwards compatibility)
      • s3_replication_rules.destination.encryption_configuration.replica_kms_key_id (newer)
      • s3_replication_rules.destination.replica_kms_key_id (older)
    • There is logic to:
      • A) use the newer of these two attributes
      • B) fall back to the older of the attributes if it is set and the newer is not
      • C) fall back to an empty array if nothing is set
    • There is a bug in steps A/B: by selecting one or the other, we end up with a bare string value rather than an iterable
    • The simplest solution, which I have tested successfully on existing buckets, is to wrap the output of that logic in a list (see the sketch after the table below)
  • This error is easily replicable by trying compact(concat([try("string", "")], [try("string", "")]))[0] in the Terraform console, which is a simplified version of the existing logic used above

  • The table below demonstrates the possible values of the existing code - you can see the outputs for value 2, value 3, and value 4 are not lists:

| Key | Value 1 | Value 2 | Value 3 | Value 4 |
|---|---|---|---|---|
| newer | null | "string1" | null | "string1" |
| older | null | null | "string2" | "string2" |
| output | [] | "string1" | "string2" | "string1" |
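
For illustration, a minimal sketch of the fix described above, assuming the dynamic encryption_configuration block shown in the error output (this may not be the verbatim patch from #206). Wrapping the selected string in a list literal makes the for_each value iterable again, while try() still falls back to an empty list when neither attribute is set:

# Sketch only: wrap the selected KMS key ARN in a single-element list
for_each = try([compact(concat(
  [try(rule.value.destination.encryption_configuration.replica_kms_key_id, "")],
  [try(rule.value.destination.replica_kms_key_id, "")]
))[0]], [])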

v4.0.0

Compare Source

Bug fixes and enhancements combined into a single breaking release @aknysh (#202)

Breaking Changes

Terraform version 1.3.0 or later is now required.

policy input removed

The deprecated policy input has been removed. Use source_policy_documents instead.

Convert from

policy = data.aws_iam_policy_document.log_delivery.json

to

source_policy_documents = [data.aws_iam_policy_document.log_delivery.json]

Do not use list modifiers like sort, compact, or distinct on the list, or it will trigger an Error: Invalid count argument. The length of the list must be known at plan time.
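
To make the hazard concrete, a hedged sketch reusing the data source from the example above:

# Avoid: compact() must inspect each element, so the resulting length is
# not known at plan time if the document body is generated during apply
# source_policy_documents = compact([data.aws_iam_policy_document.log_delivery.json])

# Safe: a literal one-element list has a plan-time-known length
source_policy_documents = [data.aws_iam_policy_document.log_delivery.json]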

Logging configuration converted to list

To fix #182, the logging input has been converted to a list. If you have a logging configuration, simply surround it with brackets.
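
For example, a sketch of the conversion (the attribute names inside the object are assumed, not taken from the module's documentation):

# v3: a single object
logging = {
  bucket_name = "example-log-bucket"
  prefix      = "logs/"
}

# v4: the same configuration, surrounded by brackets
logging = [{
  bucket_name = "example-log-bucket"
  prefix      = "logs/"
}]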

Replication rules brought into alignment with Terraform resource

Previously, the s3_replication_rules input had some deviations from the aws_s3_bucket_replication_configuration Terraform resource. Via the use of optional attributes, the input now closely matches the resource while providing backward compatibility, with a few exceptions.

  • Replication source_selection_criteria.sse_kms_encrypted_objects was documented as an object with one member, enabled, of type bool. However, it only worked when set to the string "Enabled". It has been replaced with the resource's status attribute of type String.
  • Previously, Replication Time Control could not be set directly. It was implicitly enabled by enabling Replication Metrics. We preserve that behavior even though we now add a configuration block for replication_time. To enable Metrics without Replication Time Control, you must set replication_time.status = "Disabled".

These are not changes, just continued deviations from the resources:

  • existing_object_replication cannot be set.
  • token to allow replication to be enabled on an Object Lock-enabled bucket cannot be set.
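
To make the new shape concrete, here is a hypothetical rule under the v4 input. The nesting and attribute names are assumed from the aws_s3_bucket_replication_configuration resource that the input now mirrors, and the ARNs are placeholders:

s3_replication_rules = [{
  status = "Enabled"

  destination = {
    bucket = "arn:aws:s3:::example-replica-bucket" # placeholder
    encryption_configuration = {
      replica_kms_key_id = "arn:aws:kms:us-east-1:111111111111:key/example" # placeholder
    }
    # Enabling metrics implicitly enables Replication Time Control;
    # opt out explicitly when only metrics are wanted
    metrics          = { status = "Enabled" }
    replication_time = { status = "Disabled" }
  }

  source_selection_criteria = {
    # v4 uses the resource's String status, replacing { enabled = bool }
    sse_kms_encrypted_objects = { status = "Enabled" }
  }
}]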

what

  • Remove the local local.source_policy_documents and the deprecated variable policy (and, because of that, bump the module to a major version)
  • Convert lifecycle_configuration_rules and s3_replication_rules from loosely typed objects to fully typed objects with optional attributes.
  • Use local bucket_id variable
  • Remove comments suppressing Bridgecrew rules
  • Update tests to Golang 1.20

why

  • The number of policy documents needs to be known at plan time. The default value of policy was empty, meaning it had to be removed from the list based on its content, which would not be known at plan time if the policy input was being generated.
  • Closes #167, supersedes and closes #163, and generally makes these inputs easier to deal with, since they now have type checking and partial defaults, meaning the inputs can be much smaller.
  • Incorporates and closes #197. Thank you @nikpivkin
  • Suppressing Bridgecrew rules Cloud Posse does not like should be done via external configuration, so that users of this module retain the option of having those rules enforced.
  • Security and bug fixes

explanation

List manipulation functions should not be used in count, since they can lead to this error:

│ Error: Invalid count argument

│   on ./modules/s3_bucket/main.tf line 462, in resource "aws_s3_bucket_policy" "default":
│  462:   count      = local.enabled && (var.allow_ssl_requests_only || var.allow_encrypted_uploads_only || length(var.s3_replication_source_roles) > 0 || length(var.privileged_principal_arns) > 0 || length(local.source_policy_documents) > 0) ? 1 : 0

│ The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to
│ first apply only the resources that the count depends on.

Using the local like this

source_policy_documents = var.policy != "" && var.policy != null ? concat([var.policy], var.source_policy_documents) : var.source_policy_documents

would not work either if var.policy depends on apply-time resources from other TF modules.
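
Hence the input was removed outright, and callers now pass documents directly. A minimal usage sketch (version pinned only for illustration):

module "s3_bucket" {
  source  = "cloudposse/s3-bucket/aws"
  version = "4.0.1"

  # A literal list keeps the policy count resolvable at plan time, even
  # when the document body itself is only known during apply
  source_policy_documents = [data.aws_iam_policy_document.log_delivery.json]
}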

General rules:

  • When using for_each, the map keys have to be known at plan time (the map values are not required to be known at plan time)

  • When using count, the length of the list must be known at plan time, though the items inside it need not be. That does not mean the list must be static with its length known in advance: it can be dynamic and come from remote state or data sources, which Terraform evaluates first during plan; it just can't come from other resources (which are only known after apply)

  • When using count, no list-manipulating functions can be used in the count expression - in some cases this will lead to the The "count" value depends on resource attributes that cannot be determined until apply error
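
A short sketch of these three rules, with hypothetical resource and variable names:

# Rule 1: for_each keys must be known at plan time; the values may come
# from resources that are only known after apply
resource "aws_ssm_parameter" "kms_key_arn" {
  for_each = toset(var.parameter_names) # plan-time-known keys
  name     = each.value
  type     = "String"
  value    = aws_kms_key.example.arn # apply-time value is fine
}

# Rule 2: count needs a plan-time-known length; variables and data sources
# are evaluated during plan, so this is safe
resource "null_resource" "per_name" {
  count = length(var.parameter_names)
}

# Rule 3: list-manipulating functions in count can defer the length to
# apply time; avoid, for example:
#   count = length(compact(values(aws_ssm_parameter.kms_key_arn)[*].value))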


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Mend Renovate. View repository job log here.

@renovate renovate bot requested review from a team as code owners September 22, 2023 20:40
@renovate renovate bot requested review from jamengual and florian0410 September 22, 2023 20:40
@renovate renovate bot added the auto-update This PR was automatically generated label Sep 22, 2023
@renovate renovate bot force-pushed the renovate/cloudposse-s3-bucket-aws-4.x branch 3 times, most recently from e029a9f to 37fdd2f Compare March 3, 2024 13:14
hans-d commented Mar 3, 2024

/terratest

@hans-d hans-d added wip Work in Progress: Not ready for final review or merge and removed wip Work in Progress: Not ready for final review or merge labels Mar 3, 2024
@renovate renovate bot force-pushed the renovate/cloudposse-s3-bucket-aws-4.x branch from f724008 to 9d4183a Compare March 3, 2024 13:44
@renovate renovate bot force-pushed the renovate/cloudposse-s3-bucket-aws-4.x branch from 4a8013f to 7325d3e Compare March 3, 2024 14:13
hans-d commented Mar 3, 2024

/terratest

@hans-d hans-d added wip Work in Progress: Not ready for final review or merge and removed wip Work in Progress: Not ready for final review or merge labels Mar 3, 2024
renovate bot (Contributor, Author) commented Mar 3, 2024

Edited/Blocked Notification

Renovate will not automatically rebase this PR, because it does not recognize the last commit author and assumes somebody else may have edited the PR.

You can manually request rebase by checking the rebase/retry box above.

Warning: custom changes will be lost.

@hans-d hans-d merged commit 2575641 into main Mar 3, 2024
18 checks passed
@hans-d hans-d deleted the renovate/cloudposse-s3-bucket-aws-4.x branch March 3, 2024 16:47