
Upgrade to azurerm 3.0.0+, major improvements, security fixes, Terraform logic and spelling error fixes #164

Closed
wants to merge 10 commits

Conversation

rdvansloten

Added

  • node_taints to default node pool

Changed

  • Upgraded to Terraform 1.1.9 from 0.13.0
  • Upgraded to azurerm provider 3.x from 2.x
  • Split prefix and dns_prefix, as they might not be the same (see the sketch after this list)
  • Separated Log Analytics Workspace and Log Analytics Solutions, for users who do not want both resources
  • Renamed most variables to be more consistent with resources
  • Raised the azurerm provider requirement to >= 3.0.0 from ~> 2.46
  • Split tests into Kubenet and Azure CNI AKS clusters
  • Reduced test outputs to bare necessities
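
A rough sketch of the prefix/dns_prefix split described above (the variable names come from this PR; the types, descriptions, and defaults here are illustrative assumptions):

variable "prefix" {
  type        = string
  description = "Prefix used for naming Azure resources." # illustrative description
  default     = null
}

variable "dns_prefix" {
  type        = string
  description = "DNS prefix for the AKS cluster; may differ from the resource prefix." # illustrative
  default     = null
}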

Fixed

  • Fixed typos and mistakes in variable descriptions
  • role_based_access_control has been deprecated in favor of azure_active_directory_role_based_access_control
  • addon_profile block has been deprecated in azurerm 3.0
  • Duplicate default_node_pool was removed and replaced with an in-block ternary
  • Renamed CHANGLOG.md to CHANGELOG.md

Security

  • In-repo ssh module: it generated a local file on the agent containing a private key (highly unsafe!) and was unused. Replaced with a tls_private_key resource that writes no file output; a sketch follows below
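
A minimal sketch of the replacement (the resource name and key parameters are illustrative assumptions, not necessarily what this PR uses):

resource "tls_private_key" "ssh" {
  algorithm = "RSA"
  rsa_bits  = 4096 # key size is an assumption for illustration
}

# The private key now lives only in Terraform state; no local_file resource
# writes it to disk on the agent.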

Removed

  • The assumption that max_node, min_node and node_count are mutually exclusive; they are not
  • The agent_ prefix in variables.tf, since AKS uses node pools and nodes in its terminology

@ghost

ghost commented May 9, 2022

CLA assistant check
All CLA requirements met.

main.tf Outdated
- count = var.enable_log_analytics_workspace ? 1 : 0
- name  = var.cluster_log_analytics_workspace_name == null ? "${var.prefix}-workspace" : var.cluster_log_analytics_workspace_name
+ count = var.log_analytics_workspace_enabled ? 1 : 0
+ name  = var.log_analytics_workspace_name == null ? "${try(var.prefix, var.cluster_name)}-workspace" : var.log_analytics_workspace_name


When I was testing this, I ran into an issue here:

│ Error: Invalid template interpolation value
│
│   on .terraform/modules/aks/main.tf line 108, in resource "azurerm_log_analytics_workspace" "main":
│  108:   name                = var.log_analytics_workspace_name == null ? "${try(var.prefix, var.cluster_name)}-workspace" : var.log_analytics_workspace_name
│     ├────────────────
│     │ var.cluster_name is "test-aks"
│     │ var.prefix is null
│
│ The expression result is null. Cannot include a null value in a string template.

What fixed it for me, and what I'd recommend, is using coalesce(var.cluster_name, var.prefix) instead of try(): try() only falls back when an expression raises an error, and a null var.prefix evaluates without error, so try() returns the null.
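
A sketched terraform console session with the values from the error output above shows the difference:

> try(var.prefix, var.cluster_name)
null        # var.prefix evaluates without error, so try() returns its null value
> coalesce(var.cluster_name, var.prefix)
"test-aks"  # coalesce() skips nulls and returns the first non-null argument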

@caruccio

Hey @rdvansloten. Would you be willing to add changes from caruccio@25a99a4?
Feel free to copy+paste as you wish.

@llabake

llabake commented May 11, 2022

@rdvansloten, are you able to add this change: Attaching a container registry to a Kubernetes cluster?

@rdvansloten
Author

@caruccio Added Key Vault and maintenance support.

key_vault_secrets_provider_enabled = false

allowed_maintenance_windows = [
  {
    day   = "Saturday"
    hours = [1]
  },
  {
    day   = "Sunday"
    hours = [1]
  }
]

@henworth Added your coalesce() fix.

var.log_analytics_workspace_name == null ? "${coalesce(var.cluster_name, var.prefix)}-workspace" : var.log_analytics_workspace_name

@llabake Container registry can be added:

azure_container_registry_id      = azurerm_container_registry.main.id
azure_container_registry_enabled = false

@rdvansloten
Author

@lonegunmanb @yupwei68

@lonegunmanb
Member

lonegunmanb commented May 16, 2022

Hi all, @yupwei68 is no longer maintaining this repo, and I'll take over. We're working on a new Azure module CI pipeline so we can merge pull requests at a higher velocity while staying safe from common issues and breaking changes. The first version with this new CI pipeline won't touch the module's code, and all remaining pull requests will be tested by this pipeline.

The feature branch with the pipeline is here, and a demo pull request is here; anyone who's interested in how this pipeline works is welcome to give it a review.

Our plan is to release a new major version first, with the new pipeline and NO changes to the module's code; that will make things easier. After that, we'll merge the pull requests that have no breaking changes and no surprise changes (meaning no change to the tfplan by default). Then another new major version will be released containing all the PRs with breaking changes.

As for this pull request, I think it's too large to review. The smaller and more cohesive a PR is, the easier it is to review and merge.

I'm testing some small PRs with our new pipeline now. Since this PR introduces breaking changes, I'll review it after I've merged all the non-breaking pull requests. Thank you all for your contributions to this module; please give us some time.

@rdvansloten
Author

Hi @lonegunmanb, thank you for the feedback. Yeah, it's a big overhaul, but a lot of the features required to use this module in an enterprise setting were missing or not yet added.

@lonegunmanb
Member

Hi @rdvansloten, we'd like to upgrade this module to v5.0.0 first. In v5 we won't merge any breaking changes into the module; we'll only introduce the CI pipeline and acceptance tests, and maybe some non-breaking changes too. After that, we'll upgrade to v6, which is where I think we'll merge your PR. Would you please rebase onto v5 and update your PR when we start the v6 upgrade work? Thanks.

output "system_assigned_identity" {
value = azurerm_kubernetes_cluster.main.identity
output "identity" {
value = {


IMHO, returning attributes one-by-one is prone to breaking changes from the resource provider.
Maybe just returning value = azurerm_kubernetes_cluster.main.identity[0] is a better approach.
WYT?
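
Spelled out, the suggestion would be a single output block (a sketch, assuming identity[0] is always present once the cluster has a managed identity):

output "identity" {
  # Expose the whole identity object instead of individual attributes,
  # so provider-side attribute renames don't break the module's outputs.
  value = azurerm_kubernetes_cluster.main.identity[0]
}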

enable_node_public_ip = var.enable_node_public_ip
zones = var.zones
node_labels = var.node_labels
node_taints = var.node_taints
Member


According to terraform-provider-azurerm PR #8982, since v2.35.0 the node_taints argument can no longer be configured on the default node pool.
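
For context, taints can still be applied through an additional node pool; a minimal sketch (pool name, VM size, and taint value are illustrative assumptions):

resource "azurerm_kubernetes_cluster_node_pool" "tainted" {
  name                  = "tainted"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.main.id
  vm_size               = "Standard_DS2_v2" # illustrative size
  node_count            = 1
  node_taints           = ["dedicated=example:NoSchedule"] # illustrative taint
}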

Author


I will change this field.

@github-actions
Contributor

MAIN BRANCH PUSH DETECTED DUE TO #241, THIS PR WILL BE UPDATED.

@github-actions
Contributor

MAIN BRANCH PUSH DETECTED DUE TO #, THIS PR WILL BE UPDATED.


@github-actions
Contributor

MAIN BRANCH PUSH DETECTED DUE TO #, THIS PR NEED TO BE UPDATED TO TRIGGER CI.

@lonegunmanb
Member

I'm closing this PR since we've upgraded azurerm to 3.0.0+. Thanks for opening this PR, @rdvansloten.
