Upgrade to azurerm 3.0.0+, major improvements, security fixes, Terraform logic and spelling error fixes #164
Conversation
main.tf (Outdated)
count = var.enable_log_analytics_workspace ? 1 : 0
name  = var.cluster_log_analytics_workspace_name == null ? "${var.prefix}-workspace" : var.cluster_log_analytics_workspace_name
count = var.log_analytics_workspace_enabled ? 1 : 0
name  = var.log_analytics_workspace_name == null ? "${try(var.prefix, var.cluster_name)}-workspace" : var.log_analytics_workspace_name
When I was testing this, I ran into an issue here:
│ Error: Invalid template interpolation value
│
│ on .terraform/modules/aks/main.tf line 108, in resource "azurerm_log_analytics_workspace" "main":
│ 108: name = var.log_analytics_workspace_name == null ? "${try(var.prefix, var.cluster_name)}-workspace" : var.log_analytics_workspace_name
│ ├────────────────
│ │ var.cluster_name is "test-aks"
│ │ var.prefix is null
│
│ The expression result is null. Cannot include a null value in a string template.
What fixed it for me--and what I'd recommend--is to use `coalesce(var.cluster_name, var.prefix)` instead of `try()`. `try()` only falls back when an expression raises an error, and a null `var.prefix` is a valid value rather than an error, so the template still receives null; `coalesce()` returns its first non-null argument instead.
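Applied to the resource in the diff above, the fix would look roughly like this (a sketch, not the module's exact code; location, resource_group_name, and sku are assumed arguments):

```hcl
resource "azurerm_log_analytics_workspace" "main" {
  count = var.log_analytics_workspace_enabled ? 1 : 0

  # coalesce() skips null arguments, so a null var.prefix falls through to
  # var.cluster_name instead of injecting null into the string template.
  name                = var.log_analytics_workspace_name == null ? "${coalesce(var.cluster_name, var.prefix)}-workspace" : var.log_analytics_workspace_name
  location            = var.location            # assumed variable
  resource_group_name = var.resource_group_name # assumed variable
  sku                 = "PerGB2018"
}
```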
Hey @rdvansloten. Would you be willing to add changes from caruccio@25a99a4?
@rdvansloten, are you able to add this change: Attaching a container registry to a Kubernetes cluster?
@caruccio Added Key Vault and maintenance support.
@henworth Added your
@llabake Container registry can be added:
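The original snippet didn't survive this page's formatting; the usual pattern is a role assignment that grants the cluster's kubelet identity pull access on the registry. A hypothetical sketch, assuming an existing azurerm_container_registry.main resource:

```hcl
# Hypothetical sketch: grant the AKS kubelet identity pull access to an
# existing container registry (azurerm_container_registry.main is assumed).
resource "azurerm_role_assignment" "acr_pull" {
  scope                            = azurerm_container_registry.main.id
  role_definition_name             = "AcrPull"
  principal_id                     = azurerm_kubernetes_cluster.main.kubelet_identity[0].object_id
  skip_service_principal_aad_check = true
}
```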
Hi all, @yupwei68 is no longer maintaining this repo, and I'll take over. We're working on a new Azure module CI pipeline so we can merge pull requests at a higher velocity and stay safe from common issues and breaking changes. The first version with this new CI pipeline won't touch the module's code, and all remaining pull requests will be tested by this pipeline. The feature branch with the pipeline is here, and a demo pull request is here; anyone who's interested in how this pipeline works is welcome to give it a review.

Our plan is to release a new major version first, with the new pipeline and NO changes to the module's code. It'll make things easier. After that, we'll merge some pull requests that have no breaking changes and no surprise changes (meaning no change in tfplan by default). Then another new major version will be released with all the PRs that have breaking changes.

As for this pull request, I think it's very large to review. The smaller and more cohesive a PR is, the easier it can be reviewed and merged. I'm testing some small PRs with our new pipeline now; since this PR introduces breaking changes, I'll review it after I've merged all non-breaking pull requests. Thank you all for your contribution to this module, please give us some time.
Hi @lonegunmanb, thank you for the feedback. Yeah, it's a big overhaul, but a lot of features required to use this module in an enterprise setting were more or less missing.
Hi @rdvansloten, we'd like to upgrade this module to v5.0.0 first. In v5 we won't merge any breaking changes; we'll only introduce the CI pipeline and acceptance tests, and maybe some non-breaking changes too. After that, we'll upgrade to v6, into which I think we'll merge your PR. Would you please rebase onto the v5 version and update your PR when we start the upgrade work for v6? Thanks.
output "system_assigned_identity" { | ||
value = azurerm_kubernetes_cluster.main.identity | ||
output "identity" { | ||
value = { |
IMHO, returning attributes one-by-one is prone to breaking changes from the resource provider. Maybe just returning `value = azurerm_kubernetes_cluster.main.identity[0]` is a better approach. WYT?
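Restating the suggestion as code, the output would pass the whole object through:

```hcl
output "identity" {
  # Expose the full identity object so new provider attributes surface
  # automatically instead of being mapped field by field.
  value = azurerm_kubernetes_cluster.main.identity[0]
}
```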
enable_node_public_ip = var.enable_node_public_ip
zones                 = var.zones
node_labels           = var.node_labels
node_taints           = var.node_taints
According to terraform-provider-azurerm PR #8982, since v2.35.0 the node_taints argument can no longer be configured on the default node pool.
I will change this field.
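For reference, taints can still be set on additional node pools via the dedicated resource; a minimal sketch, assuming the cluster resource is named azurerm_kubernetes_cluster.main and with the pool name and VM size as placeholder values:

```hcl
# Sketch: taints move off the default node pool onto an extra pool.
resource "azurerm_kubernetes_cluster_node_pool" "tainted" {
  name                  = "tainted"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.main.id
  vm_size               = "Standard_D2s_v3" # assumed size
  node_count            = 1
  node_taints           = ["dedicated=example:NoSchedule"]
}
```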
MAIN BRANCH PUSH DETECTED DUE TO #241, THIS PR WILL BE UPDATED.
MAIN BRANCH PUSH DETECTED DUE TO #, THIS PR WILL BE UPDATED.
1 similar comment
MAIN BRANCH PUSH DETECTED DUE TO #, THIS PR WILL BE UPDATED.
MAIN BRANCH PUSH DETECTED DUE TO #, THIS PR NEED TO BE UPDATED TO TRIGGER CI.
I'm closing this pr since we've upgraded azurerm to 3.0.0+. Thanks for opening this pr, @rdvansloten.
Added
- `node_taints` to default node pool

Changed
- `prefix` and `dns_prefix`, as they might not be the same

Fixed
- `role_based_access_control` has been deprecated in favor of `azure_active_directory_role_based_access_control` (see the sketch after this list)
- `addon_profile` block has been deprecated in azurerm 3.0
- `default_node_pool` was removed and replaced with in-block ternary

Security
- `tls_private_key` block without a file output

Removed
- `max_node`, `min_node` and `node_count` are not mutually exclusive
- `agent_` particle in variables.tf; AKS uses node pools and nodes in their terminology
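For the RBAC deprecation above, the azurerm 3.x shape looks roughly like this; a sketch, with variable names and node pool values as assumptions rather than the module's actual inputs:

```hcl
resource "azurerm_kubernetes_cluster" "main" {
  name                = var.cluster_name
  location            = var.location            # assumed variable
  resource_group_name = var.resource_group_name # assumed variable
  dns_prefix          = var.dns_prefix          # assumed variable

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2s_v3" # assumed size
  }

  identity {
    type = "SystemAssigned"
  }

  # azurerm 3.x: this block replaces the deprecated role_based_access_control block.
  role_based_access_control_enabled = true

  azure_active_directory_role_based_access_control {
    managed                = true
    admin_group_object_ids = var.rbac_aad_admin_group_object_ids # assumed variable
    azure_rbac_enabled     = true
  }
}
```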