
Validating webhook should take defaults into account #9132

Closed
cwrau opened this issue Aug 7, 2023 · 20 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. triage/accepted Indicates an issue or PR is ready to be actively worked on.

Comments

@cwrau
Contributor

cwrau commented Aug 7, 2023

What steps did you take and what happened?

When you create a KubeadmControlPlaneTemplate without specifying fields that are set by defaults, like spec.template.spec.kubeadmConfigSpec.clusterConfiguration.dns (see example), and then try to re-apply it, the validation webhook throws an error because the specs are different.

Example:

---
# Source: t8s-cluster/templates/management-cluster/clusterClass/kubeadmnControlPlaneTemplate/kubeadmControlPlaneTemplate.yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlaneTemplate
metadata:
  name: test-8a985a14
  namespace: test
spec:
  template:
    spec:
      kubeadmConfigSpec:
        clusterConfiguration:
          apiServer:
            extraArgs:
              cloud-provider: external
        initConfiguration:
          nodeRegistration:
            kubeletExtraArgs:
              cloud-provider: external
            name: '{{ local_hostname }}'
        joinConfiguration:
          nodeRegistration:
            kubeletExtraArgs:
              cloud-provider: external
            name: '{{ local_hostname }}'

See https://github.com/teutonet/teutonet-helm-charts/tree/main/charts/t8s-cluster for the full example.

We even generate hashes for these resources, since they can't be updated, but of course we don't include the defaulted fields, so the hash stays the same (we even suspected a hash collision before realizing it's about defaults 😅). Flux then sends the resource to the apiserver, which results in the KubeadmControlPlaneTemplate spec.template.spec field is immutable. Please create new resource instead. error.
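
For illustration, a minimal sketch (in Go, not the chart's actual implementation; specHash is a hypothetical helper) of how such a content-hash name suffix can be derived from the rendered spec; since the defaulted fields never appear in the rendered manifest, the hash stays the same across renders:

// Hypothetical sketch of deriving a name suffix from the rendered spec.
package main

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"
)

// specHash returns a short, stable hex digest of the rendered spec.
// json.Marshal sorts map keys, so equal content always hashes the same.
func specHash(spec map[string]any) string {
	b, _ := json.Marshal(spec)
	sum := sha256.Sum256(b)
	return fmt.Sprintf("%x", sum[:4]) // 8 hex chars, in the style of the -8a985a14 suffix above
}

func main() {
	rendered := map[string]any{
		"kubeadmConfigSpec": map[string]any{
			"clusterConfiguration": map[string]any{
				"apiServer": map[string]any{"extraArgs": map[string]any{"cloud-provider": "external"}},
			},
		},
	}
	// Server-side defaults (dns: {}, format: cloud-config, ...) are never part
	// of this input, so re-rendering the chart yields the identical suffix.
	fmt.Println("test-" + specHash(rendered))
}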

What did you expect to happen?

That the validating webhook doesn't throw an "immutable" error when applying what is, from the user's perspective, the exact same resource.

Cluster API version

1.4.3

Kubernetes version

Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.4", GitCommit:"fa3d7990104d7c1f16943a67f11b154b71f6a132", GitTreeState:"archive", BuildDate:"2023-07-20T07:37:53Z", GoVersion:"go1.20.6", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v5.0.1
Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.8", GitCommit:"0ce7342c984110dfc93657d64df5dc3b2c0d1fe9", GitTreeState:"clean", BuildDate:"2023-03-15T13:33:02Z", GoVersion:"go1.19.7", Compiler:"gc", Platform:"linux/amd64"}

Anything else you would like to add?

The validating webhook should merge the new resource with the defaults so the "real" new resource is compared instead of the one without the defaults; alternatively, these defaults should be dropped (not included in the k8s resource) or made required instead.
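
For illustration, a minimal sketch (in Go, using the cluster-api v1beta1 types; applyDefaults is a hypothetical helper, and this is not CAPI's actual webhook code) of the behaviour proposed here, i.e. defaulting the incoming template before the immutability comparison:

package webhooks

import (
	"fmt"
	"reflect"

	controlplanev1 "sigs.k8s.io/cluster-api/controlplane/kubeadm/api/v1beta1"
)

// applyDefaults is a hypothetical stand-in for the same defaulting the
// mutating webhook (or other components) would apply to the object.
func applyDefaults(tpl *controlplanev1.KubeadmControlPlaneTemplate) { /* ... */ }

// validateImmutableSpec compares old vs. new only after defaulting the new
// object, so an omitted-but-defaulted field (e.g. clusterConfiguration.dns)
// no longer makes the specs look different.
func validateImmutableSpec(oldTpl, newTpl *controlplanev1.KubeadmControlPlaneTemplate) error {
	defaulted := newTpl.DeepCopy()
	applyDefaults(defaulted)

	if !reflect.DeepEqual(oldTpl.Spec.Template.Spec, defaulted.Spec.Template.Spec) {
		return fmt.Errorf("KubeadmControlPlaneTemplate spec.template.spec field is immutable. Please create new resource instead")
	}
	return nil
}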

Label(s) to be applied

/kind bug
One or more /area label. See https://github.com/kubernetes-sigs/cluster-api/labels?q=area for the list of labels.

@k8s-ci-robot k8s-ci-robot added kind/bug Categorizes issue or PR as related to a bug. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Aug 7, 2023
@killianmuldoon
Contributor

/triage accepted

The webhooks in CAPI should ordinarily take defaulting into account as webhook defaulting should be run on objects before they are validated. That said - I don't think the field spec.template.spec.kubeadmConfigSpec.clusterConfiguration.dns is defaulted by Cluster API - is it being defaulted by some other component?
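
For illustration, a minimal sketch (assuming controller-runtime v0.15+ CustomDefaulter/CustomValidator interfaces; not CAPI's actual webhook code) of why validation normally sees the defaulted object: the mutating (defaulting) webhook is registered alongside the validating one, and the API server runs mutating admission first.

package webhooks

import (
	"context"

	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/webhook/admission"
	controlplanev1 "sigs.k8s.io/cluster-api/controlplane/kubeadm/api/v1beta1"
)

type templateWebhook struct{}

// Default is the mutating hook; the API server calls it before validation.
func (w *templateWebhook) Default(ctx context.Context, obj runtime.Object) error { return nil }

// The validating hooks receive the object after mutating admission,
// i.e. with defaults already applied.
func (w *templateWebhook) ValidateCreate(ctx context.Context, obj runtime.Object) (admission.Warnings, error) {
	return nil, nil
}
func (w *templateWebhook) ValidateUpdate(ctx context.Context, oldObj, newObj runtime.Object) (admission.Warnings, error) {
	return nil, nil
}
func (w *templateWebhook) ValidateDelete(ctx context.Context, obj runtime.Object) (admission.Warnings, error) {
	return nil, nil
}

func (w *templateWebhook) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewWebhookManagedBy(mgr).
		For(&controlplanev1.KubeadmControlPlaneTemplate{}).
		WithDefaulter(w). // mutating: fills defaults, runs before validation
		WithValidator(w). // validating: sees the defaulted object
		Complete()
}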

@killianmuldoon
Contributor

killianmuldoon commented Aug 7, 2023

As an aside, this looks like it's generally related to the area of how to do gitops correctly in CAPI. IMO - as noted in issue #8479 - we first need an understanding of where the overall problems with using CAPI + gitops lie today. We don't have any tests or documentation for how to do this in the core repo, but it is something that is requested quite a bit from the community.

Depending on further detail, this issue may also be relevant: #8479

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Aug 7, 2023
@cwrau
Contributor Author

cwrau commented Aug 7, 2023

/triage accepted

The webhooks in CAPI should ordinarily take defaulting into account as webhook defaulting should be run on objects before they are validated. That said - I don't think the field spec.template.spec.kubeadmConfigSpec.clusterConfiguration.dns is defaulted by Cluster API - is it being defaulted by some other component?

We don't do any defaulting on our side, as you can see in my example or the referenced helm chart. Also, all of our kubeadmControlPlaneTemplates have this field set to an empty dict {}, so it's not an outlier.

As an aside, this looks like it's generally related to the area of how to do gitops correctly in CAPI. IMO - as noted in issue #8479 - we first need an understanding of where the overall problems with using CAPI + gitops lie today. We don't have any tests or documentation for how to do this in the core repo, but it is something that is requested quite a bit from the community.

Depending on further detail, this issue may also be relevant: #8479

This is not specific to gitops; that's just how it surfaced for us.
If we do a manual kubectl apply, the error still happens.

@killianmuldoon
Contributor

On my CAPI cluster I can apply the below yaml multiple times without the error. Could you supply both the yaml you're applying and the way the object looks on the API server? Is there any additional detail in the error message from the webhook about which immutable fields cannot be altered?

---
# Source: t8s-cluster/templates/management-cluster/clusterClass/kubeadmnControlPlaneTemplate/kubeadmControlPlaneTemplate.yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlaneTemplate
metadata:
  name: test-8a985a14
  namespace: test
spec:
  template:
    spec:
      kubeadmConfigSpec:
        clusterConfiguration:
          apiServer:
            extraArgs:
              cloud-provider: external
        initConfiguration:
          nodeRegistration:
            kubeletExtraArgs:
              cloud-provider: external
            name: '{{ local_hostname }}'
        joinConfiguration:
          nodeRegistration:
            kubeletExtraArgs:
              cloud-provider: external
            name: '{{ local_hostname }}'

@sbueringer
Member

Yeah defaulting is always run before validation. That is just the way Kubernetes works.

@cwrau
Contributor Author

cwrau commented Aug 7, 2023

The exact yaml I'm applying (hr.yaml):

# Source: t8s-cluster/templates/management-cluster/clusterClass/kubeadmnControlPlaneTemplate/kubeadmControlPlaneTemplate.yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlaneTemplate
metadata:
  name: 1111-greensta-prod-8a985a14
  namespace: 1111-greensta-prod
  labels:
    app.kubernetes.io/name: t8s-cluster
    helm.sh/chart: t8s-cluster-1.3.3
    app.kubernetes.io/instance: 1111-greensta-prod
    app.kubernetes.io/managed-by: Helm
spec:
  template:
    spec:
      # the full context is needed for .Files.Get
      kubeadmConfigSpec:
        clusterConfiguration:
          apiServer:
            extraArgs:
              admission-control-config-file: &admissionControlConfigFilePath /etc/kubernetes/admission-control-config.yaml
              cloud-provider: external
              enable-admission-plugins: AlwaysPullImages,EventRateLimit,NodeRestriction
              profiling: "false"
              tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
            extraVolumes:
              - hostPath: *admissionControlConfigFilePath
                mountPath: *admissionControlConfigFilePath
                name: admission-control-config
                readOnly: true
              - hostPath: &eventRateLimitConfigFilePath /etc/kubernetes/event-rate-limit-config.yaml
                mountPath: *eventRateLimitConfigFilePath
                name: event-rate-limit-config
                readOnly: true
          controllerManager:
            extraArgs:
              authorization-always-allow-paths: /healthz,/readyz,/livez,/metrics
              bind-address: 0.0.0.0
              cloud-provider: external
              profiling: "false"
              terminated-pod-gc-threshold: "100"
          etcd:
            local:
              extraArgs:
                listen-metrics-urls: http://0.0.0.0:2381
          scheduler:
            extraArgs:
              authorization-always-allow-paths: /healthz,/readyz,/livez,/metrics
              bind-address: 0.0.0.0
              profiling: "false"
        files:
          - content: |-
              apiVersion: apiserver.config.k8s.io/v1
              kind: AdmissionConfiguration
              plugins:
                - name: EventRateLimit
                  path: event-rate-limit-config.yaml
            path: *admissionControlConfigFilePath
          - content: |-
              apiVersion: eventratelimit.admission.k8s.io/v1alpha1
              kind: Configuration
              limits:
                - type: Namespace
                  qps: 50
                  burst: 100
                - type: SourceAndObject
                  qps: 10
                  burst: 50
            path: *eventRateLimitConfigFilePath
          - content: |-
              #!/usr/bin/env bash

              #
              # (PK) I couldn't find a better/simpler way to conifgure it. See:
              # https://github.com/kubernetes-sigs/cluster-api/issues/4512
              #

              set -o errexit
              set -o nounset
              set -o pipefail

              dir=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
              readonly dir

              # Exit fast if already appended.
              if [[ ! -f ${dir}/kube-proxy-config.yaml ]]; then
                exit 0
              fi

              # kubeadm config is in different directory in Flatcar (/etc) and Ubuntu (/run/kubeadm).
              kubeadm_file="/etc/kubeadm.yml"
              if [[ ! -f ${kubeadm_file} ]]; then
                kubeadm_file="/run/kubeadm/kubeadm.yaml"
              fi

              # Run this script only if this is the init node.
              if [[ ! -f ${kubeadm_file} ]]; then
                exit 0
              fi

              echo success > /tmp/kube-proxy-patch

              cat "${dir}/kube-proxy-config.yaml" >> "${kubeadm_file}"
              rm "${dir}/kube-proxy-config.yaml"

            path: /etc/kube-proxy-patch.sh
            permissions: "0700"
          - content: |-
              ---
              apiVersion: kubeproxy.config.k8s.io/v1alpha1
              kind: KubeProxyConfiguration
              metricsBindAddress: "0.0.0.0"

            path: /etc/kube-proxy-config.yaml
          - content: |-
              [plugins]
                [plugins."io.containerd.grpc.v1.cri".registry]
                  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
                      endpoint = ["https://harbor.teuto.net/v2/docker.io"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
                      endpoint = ["https://harbor.teuto.net/v2/gcr.io"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."ghcr.io"]
                      endpoint = ["https://harbor.teuto.net/v2/ghcr.io"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."hub.docker.com"]
                      endpoint = ["https://harbor.teuto.net/v2/hub.docker.com"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."index.docker.io"]
                      endpoint = ["https://harbor.teuto.net/v2/index.docker.io"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
                      endpoint = ["https://harbor.teuto.net/v2/k8s.gcr.io"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]
                      endpoint = ["https://harbor.teuto.net/v2/quay.io"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.gitlab.com"]
                      endpoint = ["https://harbor.teuto.net/v2/registry.gitlab.com"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.k8s.io"]
                      endpoint = ["https://harbor.teuto.net/v2/registry.k8s.io"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.opensource.zalan.do"]
                      endpoint = ["https://harbor.teuto.net/v2/registry.opensource.zalan.do"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.teuto.io"]
                      endpoint = ["https://harbor.teuto.net/v2/registry.teuto.io"]
            path: /etc/containerd/conf.d/teuto-mirror.toml
        initConfiguration:
          nodeRegistration:
            kubeletExtraArgs:
              cloud-provider: external
              event-qps: "0"
              feature-gates: SeccompDefault=true
              protect-kernel-defaults: "true"
              seccomp-default: "true"
              tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
            name: "{{ local_hostname }}"
        joinConfiguration:
          nodeRegistration:
            kubeletExtraArgs:
              cloud-provider: external
              event-qps: "0"
              feature-gates: SeccompDefault=true
              protect-kernel-defaults: "true"
              seccomp-default: "true"
              tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
            name: "{{ local_hostname }}"
        preKubeadmCommands:
          - bash /etc/kube-proxy-patch.sh

The yaml in the cluster:

apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlaneTemplate
metadata:
  annotations:
    meta.helm.sh/release-name: 1111-greensta-prod
    meta.helm.sh/release-namespace: 1111-greensta-prod
  creationTimestamp: "2023-05-08T12:01:05Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: 1111-greensta-prod
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: t8s-cluster
    helm.sh/chart: t8s-cluster-1.3.3
    helm.toolkit.fluxcd.io/name: 1111-greensta-prod
    helm.toolkit.fluxcd.io/namespace: 1111-greensta-prod
  name: 1111-greensta-prod-8a985a14
  namespace: 1111-greensta-prod
  ownerReferences:
  - apiVersion: cluster.x-k8s.io/v1beta1
    kind: ClusterClass
    name: 1111-greensta-prod
    uid: fd265dfa-88fe-48de-a0eb-131a50b1dbfc
  resourceVersion: "412166616"
  uid: a75075d8-f783-4671-8c4e-c7c334d168ca
spec:
  template:
    spec:
      kubeadmConfigSpec:
        clusterConfiguration:
          apiServer:
            extraArgs:
              admission-control-config-file: /etc/kubernetes/admission-control-config.yaml
              cloud-provider: external
              enable-admission-plugins: AlwaysPullImages,EventRateLimit,NodeRestriction
              profiling: "false"
              tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
            extraVolumes:
            - hostPath: /etc/kubernetes/admission-control-config.yaml
              mountPath: /etc/kubernetes/admission-control-config.yaml
              name: admission-control-config
              readOnly: true
            - hostPath: /etc/kubernetes/event-rate-limit-config.yaml
              mountPath: /etc/kubernetes/event-rate-limit-config.yaml
              name: event-rate-limit-config
              readOnly: true
          controllerManager:
            extraArgs:
              authorization-always-allow-paths: /healthz,/readyz,/livez,/metrics
              bind-address: 0.0.0.0
              cloud-provider: external
              profiling: "false"
              terminated-pod-gc-threshold: "100"
          dns: {}
          etcd:
            local:
              extraArgs:
                listen-metrics-urls: http://0.0.0.0:2381
          networking: {}
          scheduler:
            extraArgs:
              authorization-always-allow-paths: /healthz,/readyz,/livez,/metrics
              bind-address: 0.0.0.0
              profiling: "false"
        files:
        - content: |-
            apiVersion: apiserver.config.k8s.io/v1
            kind: AdmissionConfiguration
            plugins:
              - name: EventRateLimit
                path: event-rate-limit-config.yaml
          path: /etc/kubernetes/admission-control-config.yaml
        - content: |-
            apiVersion: eventratelimit.admission.k8s.io/v1alpha1
            kind: Configuration
            limits:
              - type: Namespace
                qps: 50
                burst: 100
              - type: SourceAndObject
                qps: 10
                burst: 50
          path: /etc/kubernetes/event-rate-limit-config.yaml
        - content: |-
            #!/usr/bin/env bash

            #
            # (PK) I couldn't find a better/simpler way to conifgure it. See:
            # https://github.com/kubernetes-sigs/cluster-api/issues/4512
            #

            set -o errexit
            set -o nounset
            set -o pipefail

            dir=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
            readonly dir

            # Exit fast if already appended.
            if [[ ! -f ${dir}/kube-proxy-config.yaml ]]; then
              exit 0
            fi

            # kubeadm config is in different directory in Flatcar (/etc) and Ubuntu (/run/kubeadm).
            kubeadm_file="/etc/kubeadm.yml"
            if [[ ! -f ${kubeadm_file} ]]; then
              kubeadm_file="/run/kubeadm/kubeadm.yaml"
            fi

            # Run this script only if this is the init node.
            if [[ ! -f ${kubeadm_file} ]]; then
              exit 0
            fi

            echo success > /tmp/kube-proxy-patch

            cat "${dir}/kube-proxy-config.yaml" >> "${kubeadm_file}"
            rm "${dir}/kube-proxy-config.yaml"
          path: /etc/kube-proxy-patch.sh
          permissions: "0700"
        - content: |-
            ---
            apiVersion: kubeproxy.config.k8s.io/v1alpha1
            kind: KubeProxyConfiguration
            metricsBindAddress: "0.0.0.0"
          path: /etc/kube-proxy-config.yaml
        - content: |-
            [plugins]
              [plugins."io.containerd.grpc.v1.cri".registry]
                [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
                  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
                    endpoint = ["https://harbor.teuto.net/v2/docker.io"]
                  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
                    endpoint = ["https://harbor.teuto.net/v2/gcr.io"]
                  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."ghcr.io"]
                    endpoint = ["https://harbor.teuto.net/v2/ghcr.io"]
                  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."hub.docker.com"]
                    endpoint = ["https://harbor.teuto.net/v2/hub.docker.com"]
                  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."index.docker.io"]
                    endpoint = ["https://harbor.teuto.net/v2/index.docker.io"]
                  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
                    endpoint = ["https://harbor.teuto.net/v2/k8s.gcr.io"]
                  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]
                    endpoint = ["https://harbor.teuto.net/v2/quay.io"]
                  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.gitlab.com"]
                    endpoint = ["https://harbor.teuto.net/v2/registry.gitlab.com"]
                  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.k8s.io"]
                    endpoint = ["https://harbor.teuto.net/v2/registry.k8s.io"]
                  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.opensource.zalan.do"]
                    endpoint = ["https://harbor.teuto.net/v2/registry.opensource.zalan.do"]
                  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.teuto.io"]
                    endpoint = ["https://harbor.teuto.net/v2/registry.teuto.io"]
          path: /etc/containerd/conf.d/teuto-mirror.toml
        format: cloud-config
        initConfiguration:
          localAPIEndpoint: {}
          nodeRegistration:
            kubeletExtraArgs:
              cloud-provider: external
              event-qps: "0"
              feature-gates: SeccompDefault=true
              protect-kernel-defaults: "true"
              seccomp-default: "true"
              tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
            name: '{{ local_hostname }}'
        joinConfiguration:
          discovery: {}
          nodeRegistration:
            kubeletExtraArgs:
              cloud-provider: external
              event-qps: "0"
              feature-gates: SeccompDefault=true
              protect-kernel-defaults: "true"
              seccomp-default: "true"
              tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
            name: '{{ local_hostname }}'
        preKubeadmCommands:
        - bash /etc/kube-proxy-patch.sh
      rolloutStrategy:
        rollingUpdate:
          maxSurge: 1
        type: RollingUpdate

$ k -n 1111-greensta-prod apply -f /tmp/hr.yaml --dry-run=server

dry run; Warning: resource kubeadmcontrolplanetemplates/1111-greensta-prod-8a985a14 is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically. The KubeadmControlPlaneTemplate "1111-greensta-prod-8a985a14" is invalid: spec.template.spec: Invalid value: v1beta1.KubeadmControlPlaneTemplate{TypeMeta:v1.TypeMeta{Kind:"KubeadmControlPlaneTemplate", APIVersion:"controlplane.cluster.x-k8s.io/v1beta1"}, ObjectMeta:v1.ObjectMeta{Name:"1111-greensta-prod-8a985a14", GenerateName:"", Namespace:"1111-greensta-prod", SelfLink:"", UID:"a75075d8-f783-4671-8c4e-c7c334d168ca", ResourceVersion:"412166616", Generation:2, CreationTimestamp:time.Date(2023, time.May, 8, 12, 1, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/instance":"1111-greensta-prod", "app.kubernetes.io/managed-by":"Helm", "app.kubernetes.io/name":"t8s-cluster", "helm.sh/chart":"t8s-cluster-1.3.3", "helm.toolkit.fluxcd.io/name":"1111-greensta-prod", "helm.toolkit.fluxcd.io/namespace":"1111-greensta-prod"}, Annotations:map[string]string{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"controlplane.cluster.x-k8s.io/v1beta1\",\"kind\":\"KubeadmControlPlaneTemplate\",\"metadata\":{\"annotations\":{},\"labels\":{\"app.kubernetes.io/instance\":\"1111-greensta-prod\",\"app.kubernetes.io/managed-by\":\"Helm\",\"app.kubernetes.io/name\":\"t8s-cluster\",\"helm.sh/chart\":\"t8s-cluster-1.3.3\"},\"name\":\"1111-greensta-prod-8a985a14\",\"namespace\":\"1111-greensta-prod\"},\"spec\":{\"template\":{\"spec\":{\"kubeadmConfigSpec\":{\"clusterConfiguration\":{\"apiServer\":{\"extraArgs\":{\"admission-control-config-file\":\"/etc/kubernetes/admission-control-config.yaml\",\"cloud-provider\":\"external\",\"enable-admission-plugins\":\"AlwaysPullImages,EventRateLimit,NodeRestriction\",\"profiling\":\"false\",\"tls-cipher-suites\":\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256\"},\"extraVolumes\":[{\"hostPath\":\"/etc/kubernetes/admission-control-config.yaml\",\"mountPath\":\"/etc/kubernetes/admission-control-config.yaml\",\"name\":\"admission-control-config\",\"readOnly\":true},{\"hostPath\":\"/etc/kubernetes/event-rate-limit-config.yaml\",\"mountPath\":\"/etc/kubernetes/event-rate-limit-config.yaml\",\"name\":\"event-rate-limit-config\",\"readOnly\":true}]},\"controllerManager\":{\"extraArgs\":{\"authorization-always-allow-paths\":\"/healthz,/readyz,/livez,/metrics\",\"bind-address\":\"0.0.0.0\",\"cloud-provider\":\"external\",\"profiling\":\"false\",\"terminated-pod-gc-threshold\":\"100\"}},\"etcd\":{\"local\":{\"extraArgs\":{\"listen-metrics-urls\":\"http://0.0.0.0:2381\"}}},\"scheduler\":{\"extraArgs\":{\"authorization-always-allow-paths\":\"/healthz,/readyz,/livez,/metrics\",\"bind-address\":\"0.0.0.0\",\"profiling\":\"false\"}}},\"files\":[{\"content\":\"apiVersion: apiserver.config.k8s.io/v1\\nkind: AdmissionConfiguration\\nplugins:\\n - name: EventRateLimit\\n path: event-rate-limit-config.yaml\",\"path\":\"/etc/kubernetes/admission-control-config.yaml\"},{\"content\":\"apiVersion: 
eventratelimit.admission.k8s.io/v1alpha1\\nkind: Configuration\\nlimits:\\n - type: Namespace\\n qps: 50\\n burst: 100\\n - type: SourceAndObject\\n qps: 10\\n burst: 50\",\"path\":\"/etc/kubernetes/event-rate-limit-config.yaml\"},{\"content\":\"#!/usr/bin/env bash\\n\\n#\\n# (PK) I couldn't find a better/simpler way to conifgure it. See:\\n# https://github.com//issues/4512\\n#\\n\\nset -o errexit\\nset -o nounset\\nset -o pipefail\\n\\ndir=$( cd \\\"$( dirname \\\"${BASH_SOURCE[0]}\\\" )\\\" \\u0026\\u0026 pwd )\\nreadonly dir\\n\\n# Exit fast if already appended.\\nif [[ ! -f ${dir}/kube-proxy-config.yaml ]]; then\\n exit 0\\nfi\\n\\n# kubeadm config is in different directory in Flatcar (/etc) and Ubuntu (/run/kubeadm).\\nkubeadm_file=\\\"/etc/kubeadm.yml\\\"\\nif [[ ! -f ${kubeadm_file} ]]; then\\n kubeadm_file=\\\"/run/kubeadm/kubeadm.yaml\\\"\\nfi\\n\\n# Run this script only if this is the init node.\\nif [[ ! -f ${kubeadm_file} ]]; then\\n exit 0\\nfi\\n\\necho success \\u003e /tmp/kube-proxy-patch\\n\\ncat \\\"${dir}/kube-proxy-config.yaml\\\" \\u003e\\u003e \\\"${kubeadm_file}\\\"\\nrm \\\"${dir}/kube-proxy-config.yaml\\\"\",\"path\":\"/etc/kube-proxy-patch.sh\",\"permissions\":\"0700\"},{\"content\":\"---\\napiVersion: kubeproxy.config.k8s.io/v1alpha1\\nkind: KubeProxyConfiguration\\nmetricsBindAddress: \\\"0.0.0.0\\\"\",\"path\":\"/etc/kube-proxy-config.yaml\"},{\"content\":\"[plugins]\\n [plugins.\\\"io.containerd.grpc.v1.cri\\\".registry]\\n [plugins.\\\"io.containerd.grpc.v1.cri\\\".registry.mirrors]\\n [plugins.\\\"io.containerd.grpc.v1.cri\\\".registry.mirrors.\\\"docker.io\\\"]\\n endpoint = [\\\"https://harbor.teuto.net/v2/docker.io\\\"]\\n [plugins.\\\"io.containerd.grpc.v1.cri\\\".registry.mirrors.\\\"gcr.io\\\"]\\n endpoint = [\\\"https://harbor.teuto.net/v2/gcr.io\\\"]\\n [plugins.\\\"io.containerd.grpc.v1.cri\\\".registry.mirrors.\\\"ghcr.io\\\"]\\n endpoint = [\\\"https://harbor.teuto.net/v2/ghcr.io\\\"]\\n [plugins.\\\"io.containerd.grpc.v1.cri\\\".registry.mirrors.\\\"hub.docker.com\\\"]\\n endpoint = [\\\"https://harbor.teuto.net/v2/hub.docker.com\\\"]\\n [plugins.\\\"io.containerd.grpc.v1.cri\\\".registry.mirrors.\\\"index.docker.io\\\"]\\n endpoint = [\\\"https://harbor.teuto.net/v2/index.docker.io\\\"]\\n [plugins.\\\"io.containerd.grpc.v1.cri\\\".registry.mirrors.\\\"k8s.gcr.io\\\"]\\n endpoint = [\\\"https://harbor.teuto.net/v2/k8s.gcr.io\\\"]\\n [plugins.\\\"io.containerd.grpc.v1.cri\\\".registry.mirrors.\\\"quay.io\\\"]\\n endpoint = [\\\"https://harbor.teuto.net/v2/quay.io\\\"]\\n [plugins.\\\"io.containerd.grpc.v1.cri\\\".registry.mirrors.\\\"registry.gitlab.com\\\"]\\n endpoint = [\\\"https://harbor.teuto.net/v2/registry.gitlab.com\\\"]\\n [plugins.\\\"io.containerd.grpc.v1.cri\\\".registry.mirrors.\\\"registry.k8s.io\\\"]\\n endpoint = [\\\"https://harbor.teuto.net/v2/registry.k8s.io\\\"]\\n [plugins.\\\"io.containerd.grpc.v1.cri\\\".registry.mirrors.\\\"registry.opensource.zalan.do\\\"]\\n endpoint = [\\\"https://harbor.teuto.net/v2/registry.opensource.zalan.do\\\"]\\n [plugins.\\\"io.containerd.grpc.v1.cri\\\".registry.mirrors.\\\"registry.teuto.io\\\"]\\n endpoint = 
[\\\"https://harbor.teuto.net/v2/registry.teuto.io\\\"]\",\"path\":\"/etc/containerd/conf.d/teuto-mirror.toml\"}],\"initConfiguration\":{\"nodeRegistration\":{\"kubeletExtraArgs\":{\"cloud-provider\":\"external\",\"event-qps\":\"0\",\"feature-gates\":\"SeccompDefault=true\",\"protect-kernel-defaults\":\"true\",\"seccomp-default\":\"true\",\"tls-cipher-suites\":\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256\"},\"name\":\"{{ local_hostname }}\"}},\"joinConfiguration\":{\"nodeRegistration\":{\"kubeletExtraArgs\":{\"cloud-provider\":\"external\",\"event-qps\":\"0\",\"feature-gates\":\"SeccompDefault=true\",\"protect-kernel-defaults\":\"true\",\"seccomp-default\":\"true\",\"tls-cipher-suites\":\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256\"},\"name\":\"{{ local_hostname }}\"}},\"preKubeadmCommands\":[\"bash /etc/kube-proxy-patch.sh\"]}}}}}\n", "meta.helm.sh/release-name":"1111-greensta-prod", "meta.helm.sh/release-namespace":"1111-greensta-prod"}, OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"cluster.x-k8s.io/v1beta1", Kind:"ClusterClass", Name:"1111-greensta-prod", UID:"fd265dfa-88fe-48de-a0eb-131a50b1dbfc", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"manager", Operation:"Update", APIVersion:"controlplane.cluster.x-k8s.io/v1beta1", Time:time.Date(2023, time.May, 8, 12, 1, 5, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0018bee28), Subresource:""}, v1.ManagedFieldsEntry{Manager:"helm-controller", Operation:"Update", APIVersion:"controlplane.cluster.x-k8s.io/v1beta1", Time:time.Date(2023, time.May, 17, 13, 49, 59, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0018bee58), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"controlplane.cluster.x-k8s.io/v1beta1", Time:time.Date(2023, time.August, 7, 15, 10, 6, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0018bee88), Subresource:""}}}, Spec:v1beta1.KubeadmControlPlaneTemplateSpec{Template:v1beta1.KubeadmControlPlaneTemplateResource{ObjectMeta:v1beta1.ObjectMeta{Labels:map[string]string(nil), Annotations:map[string]string(nil)}, Spec:v1beta1.KubeadmControlPlaneTemplateResourceSpec{MachineTemplate:(*v1beta1.KubeadmControlPlaneTemplateMachineTemplate)(nil), KubeadmConfigSpec:v1beta1.KubeadmConfigSpec{ClusterConfiguration:(*v1beta1.ClusterConfiguration)(0xc0032b1e40), InitConfiguration:(*v1beta1.InitConfiguration)(0xc0046eefc0), JoinConfiguration:(*v1beta1.JoinConfiguration)(0xc00338ca50), Files:[]v1beta1.File{v1beta1.File{Path:"/etc/kubernetes/admission-control-config.yaml", Owner:"", Permissions:"", Encoding:"", Append:false, Content:"apiVersion: apiserver.config.k8s.io/v1\nkind: AdmissionConfiguration\nplugins:\n - name: EventRateLimit\n path: event-rate-limit-config.yaml", ContentFrom:(*v1beta1.FileSource)(nil)}, v1beta1.File{Path:"/etc/kubernetes/event-rate-limit-config.yaml", Owner:"", Permissions:"", 
Encoding:"", Append:false, Content:"apiVersion: eventratelimit.admission.k8s.io/v1alpha1\nkind: Configuration\nlimits:\n - type: Namespace\n qps: 50\n burst: 100\n - type: SourceAndObject\n qps: 10\n burst: 50", ContentFrom:(*v1beta1.FileSource)(nil)}, v1beta1.File{Path:"/etc/kube-proxy-patch.sh", Owner:"", Permissions:"0700", Encoding:"", Append:false, Content:"#!/usr/bin/env bash\n\n#\n# (PK) I couldn't find a better/simpler way to conifgure it. See:\n# https://github.com//issues/4512\n#\n\nset -o errexit\nset -o nounset\nset -o pipefail\n\ndir=$( cd \"$( dirname \"${BASH_SOURCE[0]}\" )\" && pwd )\nreadonly dir\n\n# Exit fast if already appended.\nif [[ ! -f ${dir}/kube-proxy-config.yaml ]]; then\n exit 0\nfi\n\n# kubeadm config is in different directory in Flatcar (/etc) and Ubuntu (/run/kubeadm).\nkubeadm_file=\"/etc/kubeadm.yml\"\nif [[ ! -f ${kubeadm_file} ]]; then\n kubeadm_file=\"/run/kubeadm/kubeadm.yaml\"\nfi\n\n# Run this script only if this is the init node.\nif [[ ! -f ${kubeadm_file} ]]; then\n exit 0\nfi\n\necho success > /tmp/kube-proxy-patch\n\ncat \"${dir}/kube-proxy-config.yaml\" >> \"${kubeadm_file}\"\nrm \"${dir}/kube-proxy-config.yaml\"", ContentFrom:(*v1beta1.FileSource)(nil)}, v1beta1.File{Path:"/etc/kube-proxy-config.yaml", Owner:"", Permissions:"", Encoding:"", Append:false, Content:"---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmetricsBindAddress: \"0.0.0.0\"", ContentFrom:(*v1beta1.FileSource)(nil)}, v1beta1.File{Path:"/etc/containerd/conf.d/teuto-mirror.toml", Owner:"", Permissions:"", Encoding:"", Append:false, Content:"[plugins]\n [plugins.\"io.containerd.grpc.v1.cri\".registry]\n [plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors]\n [plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors.\"docker.io\"]\n endpoint = [\"https://harbor.teuto.net/v2/docker.io\"]\n [plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors.\"gcr.io\"]\n endpoint = [\"https://harbor.teuto.net/v2/gcr.io\"]\n [plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors.\"ghcr.io\"]\n endpoint = [\"https://harbor.teuto.net/v2/ghcr.io\"]\n [plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors.\"hub.docker.com\"]\n endpoint = [\"https://harbor.teuto.net/v2/hub.docker.com\"]\n [plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors.\"index.docker.io\"]\n endpoint = [\"https://harbor.teuto.net/v2/index.docker.io\"]\n [plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors.\"k8s.gcr.io\"]\n endpoint = [\"https://harbor.teuto.net/v2/k8s.gcr.io\"]\n [plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors.\"quay.io\"]\n endpoint = [\"https://harbor.teuto.net/v2/quay.io\"]\n [plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors.\"registry.gitlab.com\"]\n endpoint = [\"https://harbor.teuto.net/v2/registry.gitlab.com\"]\n [plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors.\"registry.k8s.io\"]\n endpoint = [\"https://harbor.teuto.net/v2/registry.k8s.io\"]\n [plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors.\"registry.opensource.zalan.do\"]\n endpoint = [\"https://harbor.teuto.net/v2/registry.opensource.zalan.do\"]\n [plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors.\"registry.teuto.io\"]\n endpoint = [\"https://harbor.teuto.net/v2/registry.teuto.io\"]", ContentFrom:(*v1beta1.FileSource)(nil)}}, DiskSetup:(*v1beta1.DiskSetup)(nil), Mounts:[]v1beta1.MountPoints(nil), PreKubeadmCommands:[]string{"bash /etc/kube-proxy-patch.sh"}, PostKubeadmCommands:[]string(nil), Users:[]v1beta1.User(nil), NTP:(*v1beta1.NTP)(nil), Format:"cloud-config", 
Verbosity:(*int32)(nil), UseExperimentalRetryJoin:false, Ignition:(*v1beta1.IgnitionSpec)(nil)}, RolloutBefore:(*v1beta1.RolloutBefore)(nil), RolloutAfter:, RolloutStrategy:(*v1beta1.RolloutStrategy)(0xc0018bf200), RemediationStrategy:(*v1beta1.RemediationStrategy)(nil)}}}}: KubeadmControlPlaneTemplate spec.template.spec field is immutable. Please create new resource instead.

But even with the simple test-8a985a14 example, which you've tried, the following is what ends up in the API after the apply for me:

apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlaneTemplate
metadata:
  generation: 1
  name: test-8a985a14
  namespace: 1111-test-cwr-ffm3-2207
  resourceVersion: "507450757"
  uid: d6bb3201-8c18-4eb7-94ad-e72b4364080d
spec:
  template:
    metadata: {}
    spec:
      kubeadmConfigSpec:
        clusterConfiguration:
          apiServer:
            extraArgs:
              cloud-provider: external
          controllerManager: {}
          dns: {}
          etcd: {}
          networking: {}
          scheduler: {}
        format: cloud-config
        initConfiguration:
          localAPIEndpoint: {}
          nodeRegistration:
            imagePullPolicy: IfNotPresent
            kubeletExtraArgs:
              cloud-provider: external
            name: '{{ local_hostname }}'
        joinConfiguration:
          discovery: {}
          nodeRegistration:
            imagePullPolicy: IfNotPresent
            kubeletExtraArgs:
              cloud-provider: external
            name: '{{ local_hostname }}'
      rolloutStrategy:
        rollingUpdate:
          maxSurge: 1
        type: RollingUpdate

Although with the manually created resources the kubectl apply --dry-run=server works.

@killianmuldoon
Contributor

killianmuldoon commented Aug 7, 2023

Are you able to replicate the validation issue with the simple test-8a985a14 example? I'm still trying to reproduce but don't seem to be able to with that example or with the fuller 1111-greensta-prod-8a985a14 example.

@cwrau
Contributor Author

cwrau commented Aug 8, 2023

Are you able to replicate the validation issue with the simple test-8a985a14 example? I'm still trying to reproduce but don't seem to be able to with that example or with the fuller 1111-greensta-prod-8a985a14 example.

Not yet, really interesting problem 🤔

I haven't managed to reproduce it with just the KubeadmControlPlaneTemplate, only in conjunction with all the other resources; maybe it has something to do with ClusterClasses? That's also a difference between the working and non-working KubeadmControlPlaneTemplates.

I have the greensta example deployed via the helm chart:

---
# Source: t8s-cluster/templates/management-cluster/clusterClass/kubeadmnControlPlaneTemplate/kubeadmControlPlaneTemplate.yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlaneTemplate
metadata:
  name: 1111-greensta-prod-8a985a14
  namespace: 1111-greensta-prod
  labels:
    app.kubernetes.io/name: t8s-cluster
    helm.sh/chart: t8s-cluster-1.3.3
    app.kubernetes.io/instance: 1111-greensta-prod
    app.kubernetes.io/managed-by: Helm
spec:
  template:
    spec:
      # the full context is needed for .Files.Get
      kubeadmConfigSpec:
        clusterConfiguration:
          apiServer:
            extraArgs:
              admission-control-config-file: &admissionControlConfigFilePath /etc/kubernetes/admission-control-config.yaml
              cloud-provider: external
              enable-admission-plugins: AlwaysPullImages,EventRateLimit,NodeRestriction
              profiling: "false"
              tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
            extraVolumes:
              - hostPath: *admissionControlConfigFilePath
                mountPath: *admissionControlConfigFilePath
                name: admission-control-config
                readOnly: true
              - hostPath: &eventRateLimitConfigFilePath /etc/kubernetes/event-rate-limit-config.yaml
                mountPath: *eventRateLimitConfigFilePath
                name: event-rate-limit-config
                readOnly: true
          controllerManager:
            extraArgs:
              authorization-always-allow-paths: /healthz,/readyz,/livez,/metrics
              bind-address: 0.0.0.0
              cloud-provider: external
              profiling: "false"
              terminated-pod-gc-threshold: "100"
          etcd:
            local:
              extraArgs:
                listen-metrics-urls: http://0.0.0.0:2381
          scheduler:
            extraArgs:
              authorization-always-allow-paths: /healthz,/readyz,/livez,/metrics
              bind-address: 0.0.0.0
              profiling: "false"
        files:
          - content: |-
              apiVersion: apiserver.config.k8s.io/v1
              kind: AdmissionConfiguration
              plugins:
                - name: EventRateLimit
                  path: event-rate-limit-config.yaml
            path: *admissionControlConfigFilePath
          - content: |-
              apiVersion: eventratelimit.admission.k8s.io/v1alpha1
              kind: Configuration
              limits:
                - type: Namespace
                  qps: 50
                  burst: 100
                - type: SourceAndObject
                  qps: 10
                  burst: 50
            path: *eventRateLimitConfigFilePath
          - content: |-
              #!/usr/bin/env bash

              #
              # (PK) I couldn't find a better/simpler way to conifgure it. See:
              # https://github.com/kubernetes-sigs/cluster-api/issues/4512
              #

              set -o errexit
              set -o nounset
              set -o pipefail

              dir=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
              readonly dir

              # Exit fast if already appended.
              if [[ ! -f ${dir}/kube-proxy-config.yaml ]]; then
                exit 0
              fi

              # kubeadm config is in different directory in Flatcar (/etc) and Ubuntu (/run/kubeadm).
              kubeadm_file="/etc/kubeadm.yml"
              if [[ ! -f ${kubeadm_file} ]]; then
                kubeadm_file="/run/kubeadm/kubeadm.yaml"
              fi

              # Run this script only if this is the init node.
              if [[ ! -f ${kubeadm_file} ]]; then
                exit 0
              fi

              echo success > /tmp/kube-proxy-patch

              cat "${dir}/kube-proxy-config.yaml" >> "${kubeadm_file}"
              rm "${dir}/kube-proxy-config.yaml"

            path: /etc/kube-proxy-patch.sh
            permissions: "0700"
          - content: |-
              ---
              apiVersion: kubeproxy.config.k8s.io/v1alpha1
              kind: KubeProxyConfiguration
              metricsBindAddress: "0.0.0.0"

            path: /etc/kube-proxy-config.yaml
          - content: |-
              [plugins]
                [plugins."io.containerd.grpc.v1.cri".registry]
                  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
                      endpoint = ["https://harbor.teuto.net/v2/docker.io"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
                      endpoint = ["https://harbor.teuto.net/v2/gcr.io"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."ghcr.io"]
                      endpoint = ["https://harbor.teuto.net/v2/ghcr.io"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."hub.docker.com"]
                      endpoint = ["https://harbor.teuto.net/v2/hub.docker.com"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."index.docker.io"]
                      endpoint = ["https://harbor.teuto.net/v2/index.docker.io"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
                      endpoint = ["https://harbor.teuto.net/v2/k8s.gcr.io"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]
                      endpoint = ["https://harbor.teuto.net/v2/quay.io"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.gitlab.com"]
                      endpoint = ["https://harbor.teuto.net/v2/registry.gitlab.com"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.k8s.io"]
                      endpoint = ["https://harbor.teuto.net/v2/registry.k8s.io"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.opensource.zalan.do"]
                      endpoint = ["https://harbor.teuto.net/v2/registry.opensource.zalan.do"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.teuto.io"]
                      endpoint = ["https://harbor.teuto.net/v2/registry.teuto.io"]
            path: /etc/containerd/conf.d/teuto-mirror.toml
        initConfiguration:
          nodeRegistration:
            kubeletExtraArgs:
              cloud-provider: external
              event-qps: "0"
              feature-gates: SeccompDefault=true
              protect-kernel-defaults: "true"
              seccomp-default: "true"
              tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
            name: "{{ local_hostname }}"
        joinConfiguration:
          nodeRegistration:
            kubeletExtraArgs:
              cloud-provider: external
              event-qps: "0"
              feature-gates: SeccompDefault=true
              protect-kernel-defaults: "true"
              seccomp-default: "true"
              tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
            name: "{{ local_hostname }}"
        preKubeadmCommands:
          - bash /etc/kube-proxy-patch.sh

The same resource in the API:

apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlaneTemplate
metadata:
  annotations:
    meta.helm.sh/release-name: 1111-greensta-prod
    meta.helm.sh/release-namespace: 1111-greensta-prod
  creationTimestamp: "2023-05-08T12:01:05Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: 1111-greensta-prod
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: t8s-cluster
    helm.sh/chart: t8s-cluster-1.3.3
    helm.toolkit.fluxcd.io/name: 1111-greensta-prod
    helm.toolkit.fluxcd.io/namespace: 1111-greensta-prod
  name: 1111-greensta-prod-8a985a14
  namespace: 1111-greensta-prod
  ownerReferences:
    - apiVersion: cluster.x-k8s.io/v1beta1
      kind: ClusterClass
      name: 1111-greensta-prod
      uid: fd265dfa-88fe-48de-a0eb-131a50b1dbfc
  resourceVersion: "412166616"
  uid: a75075d8-f783-4671-8c4e-c7c334d168ca
spec:
  template:
    spec:
      kubeadmConfigSpec:
        clusterConfiguration:
          apiServer:
            extraArgs:
              admission-control-config-file: /etc/kubernetes/admission-control-config.yaml
              cloud-provider: external
              enable-admission-plugins: AlwaysPullImages,EventRateLimit,NodeRestriction
              profiling: "false"
              tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
            extraVolumes:
              - hostPath: /etc/kubernetes/admission-control-config.yaml
                mountPath: /etc/kubernetes/admission-control-config.yaml
                name: admission-control-config
                readOnly: true
              - hostPath: /etc/kubernetes/event-rate-limit-config.yaml
                mountPath: /etc/kubernetes/event-rate-limit-config.yaml
                name: event-rate-limit-config
                readOnly: true
          controllerManager:
            extraArgs:
              authorization-always-allow-paths: /healthz,/readyz,/livez,/metrics
              bind-address: 0.0.0.0
              cloud-provider: external
              profiling: "false"
              terminated-pod-gc-threshold: "100"
          dns: {}
          etcd:
            local:
              extraArgs:
                listen-metrics-urls: http://0.0.0.0:2381
          networking: {}
          scheduler:
            extraArgs:
              authorization-always-allow-paths: /healthz,/readyz,/livez,/metrics
              bind-address: 0.0.0.0
              profiling: "false"
        files:
          - content: |-
              apiVersion: apiserver.config.k8s.io/v1
              kind: AdmissionConfiguration
              plugins:
                - name: EventRateLimit
                  path: event-rate-limit-config.yaml
            path: /etc/kubernetes/admission-control-config.yaml
          - content: |-
              apiVersion: eventratelimit.admission.k8s.io/v1alpha1
              kind: Configuration
              limits:
                - type: Namespace
                  qps: 50
                  burst: 100
                - type: SourceAndObject
                  qps: 10
                  burst: 50
            path: /etc/kubernetes/event-rate-limit-config.yaml
          - content: |-
              #!/usr/bin/env bash

              #
              # (PK) I couldn't find a better/simpler way to conifgure it. See:
              # https://github.com/kubernetes-sigs/cluster-api/issues/4512
              #

              set -o errexit
              set -o nounset
              set -o pipefail

              dir=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
              readonly dir

              # Exit fast if already appended.
              if [[ ! -f ${dir}/kube-proxy-config.yaml ]]; then
                exit 0
              fi

              # kubeadm config is in different directory in Flatcar (/etc) and Ubuntu (/run/kubeadm).
              kubeadm_file="/etc/kubeadm.yml"
              if [[ ! -f ${kubeadm_file} ]]; then
                kubeadm_file="/run/kubeadm/kubeadm.yaml"
              fi

              # Run this script only if this is the init node.
              if [[ ! -f ${kubeadm_file} ]]; then
                exit 0
              fi

              echo success > /tmp/kube-proxy-patch

              cat "${dir}/kube-proxy-config.yaml" >> "${kubeadm_file}"
              rm "${dir}/kube-proxy-config.yaml"
            path: /etc/kube-proxy-patch.sh
            permissions: "0700"
          - content: |-
              ---
              apiVersion: kubeproxy.config.k8s.io/v1alpha1
              kind: KubeProxyConfiguration
              metricsBindAddress: "0.0.0.0"
            path: /etc/kube-proxy-config.yaml
          - content: |-
              [plugins]
                [plugins."io.containerd.grpc.v1.cri".registry]
                  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
                      endpoint = ["https://harbor.teuto.net/v2/docker.io"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
                      endpoint = ["https://harbor.teuto.net/v2/gcr.io"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."ghcr.io"]
                      endpoint = ["https://harbor.teuto.net/v2/ghcr.io"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."hub.docker.com"]
                      endpoint = ["https://harbor.teuto.net/v2/hub.docker.com"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."index.docker.io"]
                      endpoint = ["https://harbor.teuto.net/v2/index.docker.io"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
                      endpoint = ["https://harbor.teuto.net/v2/k8s.gcr.io"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]
                      endpoint = ["https://harbor.teuto.net/v2/quay.io"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.gitlab.com"]
                      endpoint = ["https://harbor.teuto.net/v2/registry.gitlab.com"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.k8s.io"]
                      endpoint = ["https://harbor.teuto.net/v2/registry.k8s.io"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.opensource.zalan.do"]
                      endpoint = ["https://harbor.teuto.net/v2/registry.opensource.zalan.do"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.teuto.io"]
                      endpoint = ["https://harbor.teuto.net/v2/registry.teuto.io"]
            path: /etc/containerd/conf.d/teuto-mirror.toml
        format: cloud-config
        initConfiguration:
          localAPIEndpoint: {}
          nodeRegistration:
            kubeletExtraArgs:
              cloud-provider: external
              event-qps: "0"
              feature-gates: SeccompDefault=true
              protect-kernel-defaults: "true"
              seccomp-default: "true"
              tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
            name: "{{ local_hostname }}"
        joinConfiguration:
          discovery: {}
          nodeRegistration:
            kubeletExtraArgs:
              cloud-provider: external
              event-qps: "0"
              feature-gates: SeccompDefault=true
              protect-kernel-defaults: "true"
              seccomp-default: "true"
              tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
            name: "{{ local_hostname }}"
        preKubeadmCommands:
          - bash /etc/kube-proxy-patch.sh
      rolloutStrategy:
        rollingUpdate:
          maxSurge: 1
        type: RollingUpdate

If I try to apply this, it fails.


I then renamed the local file with a -2 suffix and applied it, which resulted in this resource in the API:

apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlaneTemplate
metadata:
  creationTimestamp: "2023-08-08T08:56:17Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: 1111-greensta-prod
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: t8s-cluster
    helm.sh/chart: t8s-cluster-1.3.3
  name: 1111-greensta-prod-8a985a14
  namespace: 1111-greensta-prod
  resourceVersion: "471461869"
  uid: 2b38fc3f-128b-4981-84f3-50f82734fab6
spec:
  template:
    metadata: {}
    spec:
      kubeadmConfigSpec:
        clusterConfiguration:
          apiServer:
            extraArgs:
              admission-control-config-file: /etc/kubernetes/admission-control-config.yaml
              cloud-provider: external
              enable-admission-plugins: AlwaysPullImages,EventRateLimit,NodeRestriction
              profiling: "false"
              tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
            extraVolumes:
              - hostPath: /etc/kubernetes/admission-control-config.yaml
                mountPath: /etc/kubernetes/admission-control-config.yaml
                name: admission-control-config
                readOnly: true
              - hostPath: /etc/kubernetes/event-rate-limit-config.yaml
                mountPath: /etc/kubernetes/event-rate-limit-config.yaml
                name: event-rate-limit-config
                readOnly: true
          controllerManager:
            extraArgs:
              authorization-always-allow-paths: /healthz,/readyz,/livez,/metrics
              bind-address: 0.0.0.0
              cloud-provider: external
              profiling: "false"
              terminated-pod-gc-threshold: "100"
          dns: {}
          etcd:
            local:
              extraArgs:
                listen-metrics-urls: http://0.0.0.0:2381
          networking: {}
          scheduler:
            extraArgs:
              authorization-always-allow-paths: /healthz,/readyz,/livez,/metrics
              bind-address: 0.0.0.0
              profiling: "false"
        files:
          - content: |-
              apiVersion: apiserver.config.k8s.io/v1
              kind: AdmissionConfiguration
              plugins:
                - name: EventRateLimit
                  path: event-rate-limit-config.yaml
            path: /etc/kubernetes/admission-control-config.yaml
          - content: |-
              apiVersion: eventratelimit.admission.k8s.io/v1alpha1
              kind: Configuration
              limits:
                - type: Namespace
                  qps: 50
                  burst: 100
                - type: SourceAndObject
                  qps: 10
                  burst: 50
            path: /etc/kubernetes/event-rate-limit-config.yaml
          - content: |-
              #!/usr/bin/env bash

              #
              # (PK) I couldn't find a better/simpler way to conifgure it. See:
              # https://github.com/kubernetes-sigs/cluster-api/issues/4512
              #

              set -o errexit
              set -o nounset
              set -o pipefail

              dir=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
              readonly dir

              # Exit fast if already appended.
              if [[ ! -f ${dir}/kube-proxy-config.yaml ]]; then
                exit 0
              fi

              # kubeadm config is in different directory in Flatcar (/etc) and Ubuntu (/run/kubeadm).
              kubeadm_file="/etc/kubeadm.yml"
              if [[ ! -f ${kubeadm_file} ]]; then
                kubeadm_file="/run/kubeadm/kubeadm.yaml"
              fi

              # Run this script only if this is the init node.
              if [[ ! -f ${kubeadm_file} ]]; then
                exit 0
              fi

              echo success > /tmp/kube-proxy-patch

              cat "${dir}/kube-proxy-config.yaml" >> "${kubeadm_file}"
              rm "${dir}/kube-proxy-config.yaml"
            path: /etc/kube-proxy-patch.sh
            permissions: "0700"
          - content: |-
              ---
              apiVersion: kubeproxy.config.k8s.io/v1alpha1
              kind: KubeProxyConfiguration
              metricsBindAddress: "0.0.0.0"
            path: /etc/kube-proxy-config.yaml
          - content: |-
              [plugins]
                [plugins."io.containerd.grpc.v1.cri".registry]
                  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
                      endpoint = ["https://harbor.teuto.net/v2/docker.io"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
                      endpoint = ["https://harbor.teuto.net/v2/gcr.io"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."ghcr.io"]
                      endpoint = ["https://harbor.teuto.net/v2/ghcr.io"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."hub.docker.com"]
                      endpoint = ["https://harbor.teuto.net/v2/hub.docker.com"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."index.docker.io"]
                      endpoint = ["https://harbor.teuto.net/v2/index.docker.io"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
                      endpoint = ["https://harbor.teuto.net/v2/k8s.gcr.io"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]
                      endpoint = ["https://harbor.teuto.net/v2/quay.io"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.gitlab.com"]
                      endpoint = ["https://harbor.teuto.net/v2/registry.gitlab.com"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.k8s.io"]
                      endpoint = ["https://harbor.teuto.net/v2/registry.k8s.io"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.opensource.zalan.do"]
                      endpoint = ["https://harbor.teuto.net/v2/registry.opensource.zalan.do"]
                    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.teuto.io"]
                      endpoint = ["https://harbor.teuto.net/v2/registry.teuto.io"]
            path: /etc/containerd/conf.d/teuto-mirror.toml
        format: cloud-config
        initConfiguration:
          localAPIEndpoint: {}
          nodeRegistration:
            imagePullPolicy: IfNotPresent
            kubeletExtraArgs:
              cloud-provider: external
              event-qps: "0"
              feature-gates: SeccompDefault=true
              protect-kernel-defaults: "true"
              seccomp-default: "true"
              tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
            name: "{{ local_hostname }}"
        joinConfiguration:
          discovery: {}
          nodeRegistration:
            imagePullPolicy: IfNotPresent
            kubeletExtraArgs:
              cloud-provider: external
              event-qps: "0"
              feature-gates: SeccompDefault=true
              protect-kernel-defaults: "true"
              seccomp-default: "true"
              tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
            name: "{{ local_hostname }}"
        preKubeadmCommands:
          - bash /etc/kube-proxy-patch.sh
      rolloutStrategy:
        rollingUpdate:
          maxSurge: 1
        type: RollingUpdate

This one applies successfully.

Diff between the one that fails to apply and the one that works:

--- /tmp/gr-real.yaml	2023-08-08 11:01:23.541800765 +0200
+++ /tmp/gr-real-2.yaml	2023-08-08 11:01:22.795786027 +0200
@@ -1,29 +1,20 @@
 apiVersion: controlplane.cluster.x-k8s.io/v1beta1
 kind: KubeadmControlPlaneTemplate
 metadata:
-  annotations:
-    meta.helm.sh/release-name: 1111-greensta-prod
-    meta.helm.sh/release-namespace: 1111-greensta-prod
-  creationTimestamp: "2023-05-08T12:01:05Z"
+  creationTimestamp: "2023-08-08T08:56:17Z"
   generation: 1
   labels:
     app.kubernetes.io/instance: 1111-greensta-prod
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/name: t8s-cluster
     helm.sh/chart: t8s-cluster-1.3.3
-    helm.toolkit.fluxcd.io/name: 1111-greensta-prod
-    helm.toolkit.fluxcd.io/namespace: 1111-greensta-prod
   name: 1111-greensta-prod-8a985a14
   namespace: 1111-greensta-prod
-  ownerReferences:
-    - apiVersion: cluster.x-k8s.io/v1beta1
-      kind: ClusterClass
-      name: 1111-greensta-prod
-      uid: fd265dfa-88fe-48de-a0eb-131a50b1dbfc
-  resourceVersion: "412166616"
-  uid: a75075d8-f783-4671-8c4e-c7c334d168ca
+  resourceVersion: "471461869"
+  uid: 2b38fc3f-128b-4981-84f3-50f82734fab6
 spec:
   template:
+    metadata: {}
     spec:
       kubeadmConfigSpec:
         clusterConfiguration:
@@ -154,6 +145,7 @@
         initConfiguration:
           localAPIEndpoint: {}
           nodeRegistration:
+            imagePullPolicy: IfNotPresent
             kubeletExtraArgs:
               cloud-provider: external
               event-qps: "0"
@@ -165,6 +157,7 @@
         joinConfiguration:
           discovery: {}
           nodeRegistration:
+            imagePullPolicy: IfNotPresent
             kubeletExtraArgs:
               cloud-provider: external
               event-qps: "0"

So there are some differences between that manually created resource and the one applied via the chart 🤔

@mnaser

mnaser commented Aug 9, 2023

I was working with @okozachenko1203 on this, and I'm pretty sure this is the commit that broke it:

f7fe7da

Looks like we have defaulting here that happens after the fact.

@killianmuldoon
Contributor

I was working with @okozachenko1203 on this, and I'm pretty sure this is the commit that broke it:

Can you share a reproducible example and the full error output you get? I'm still not able to reproduce this. What makes you think that commit is responsible?

@sbueringer
Member

Hm, this commit just moved the defaulting from one place to another. DefaultKubeadmConfigSpec is called in the defaulting (aka mutating) webhook.

This could explain a change in behavior if you are not using our webhooks, though.
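For context, this is roughly how a defaulting (mutating) webhook is wired up with controller-runtime. This is only a sketch to illustrate the mechanism — the defaulter type and the defaulting logic shown are illustrative, not the actual cluster-api implementation; only the imagePullPolicy default discussed in this issue is sketched:

package webhooks

import (
	"context"
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/webhook/admission"

	controlplanev1 "sigs.k8s.io/cluster-api/controlplane/kubeadm/api/v1beta1"
)

// templateDefaulter mutates the object before it is persisted, so defaults
// such as imagePullPolicy already end up in the stored resource.
type templateDefaulter struct{}

var _ admission.CustomDefaulter = &templateDefaulter{}

func (d *templateDefaulter) Default(_ context.Context, obj runtime.Object) error {
	tpl, ok := obj.(*controlplanev1.KubeadmControlPlaneTemplate)
	if !ok {
		return fmt.Errorf("expected a KubeadmControlPlaneTemplate, got %T", obj)
	}
	// Illustrative defaulting only: if this webhook is skipped (e.g. the
	// webhooks are not installed), the stored object lacks the default and a
	// later re-apply looks like a spec change to the validating webhook.
	cfg := &tpl.Spec.Template.Spec.KubeadmConfigSpec
	if cfg.InitConfiguration != nil && cfg.InitConfiguration.NodeRegistration.ImagePullPolicy == "" {
		cfg.InitConfiguration.NodeRegistration.ImagePullPolicy = "IfNotPresent"
	}
	if cfg.JoinConfiguration != nil && cfg.JoinConfiguration.NodeRegistration.ImagePullPolicy == "" {
		cfg.JoinConfiguration.NodeRegistration.ImagePullPolicy = "IfNotPresent"
	}
	return nil
}

// SetupWebhookWithManager registers the defaulter for the template type.
func SetupWebhookWithManager(mgr ctrl.Manager) error {
	return ctrl.NewWebhookManagedBy(mgr).
		For(&controlplanev1.KubeadmControlPlaneTemplate{}).
		WithDefaulter(&templateDefaulter{}).
		Complete()
}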

@okozachenko1203

I was working with @okozachenko1203 on this, and I'm pretty sure this is the commit that broke it:

Can you share a reproducible example and the full error output you get? I'm still not able to reproduce this. What makes you think that commit is responsible?

We create a kcptemplate CR with server-side apply. This CR doesn't include ImagePullPolicy in its spec at the first creation. Then, on the next apply attempt, the default values get applied, which means a spec change and leads to the failure.

The workaround is to add default: IfNotPresent to the CRDs again.
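For reference, at the API-type level that workaround corresponds to an OpenAPI default generated from a kubebuilder marker. A sketch along these lines (field and type names follow the upstream bootstrap types, but treat it as an illustration, not a patch) — the marker is what produces default: IfNotPresent in the generated CRD schema, so the kube-apiserver fills in the value even when the mutating webhook is not involved:

// NodeRegistrationOptions holds fields related to registering a node with
// kubeadm (excerpt, illustrative).
type NodeRegistrationOptions struct {
	// ImagePullPolicy specifies the policy for pulling images during kubeadm
	// "init" and "join" operations.
	// +kubebuilder:validation:Enum=Always;IfNotPresent;Never
	// +kubebuilder:default=IfNotPresent
	// +optional
	ImagePullPolicy string `json:"imagePullPolicy,omitempty"`

	// ... other fields omitted ...
}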

@okozachenko1203

I think this condition is wrong, so DefaultKubeadmConfigSpec is not applied on the first apply:

f7fe7da#diff-df618523392e7dbd47e9a38969ab637849d31825becc8a6688429df3ffadeaddR61-R66

@sbueringer
Member

sbueringer commented Aug 9, 2023

Sorry, I don't have an overview of the entire issue right now. The OpenAPI defaulting is only applied when the parent element (nodeRegistration) exists, while in the webhook it is now already enough if init/joinConfiguration exists.

Does this explain what you're seeing?
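To illustrate the difference described here (assumed semantics, not the actual apiserver or webhook code): structural OpenAPI defaulting does not create missing parent objects, so a manifest that omits nodeRegistration never receives the nested default, whereas the webhook fills it in as soon as initConfiguration is present. A toy sketch on plain maps:

package main

import "fmt"

// defaultLikeOpenAPI mimics CRD/OpenAPI structural defaulting: the nested
// default is only applied if the parent object (nodeRegistration) was
// present in the applied manifest.
func defaultLikeOpenAPI(initConfiguration map[string]any) {
	nodeReg, ok := initConfiguration["nodeRegistration"].(map[string]any)
	if !ok {
		return // parent missing -> nested default is not applied
	}
	if _, ok := nodeReg["imagePullPolicy"]; !ok {
		nodeReg["imagePullPolicy"] = "IfNotPresent"
	}
}

// defaultLikeWebhook mimics the webhook behaviour described above: having
// initConfiguration at all is enough, and the parent is created if needed.
func defaultLikeWebhook(initConfiguration map[string]any) {
	nodeReg, ok := initConfiguration["nodeRegistration"].(map[string]any)
	if !ok {
		nodeReg = map[string]any{}
		initConfiguration["nodeRegistration"] = nodeReg
	}
	if _, ok := nodeReg["imagePullPolicy"]; !ok {
		nodeReg["imagePullPolicy"] = "IfNotPresent"
	}
}

func main() {
	a := map[string]any{} // initConfiguration present but nodeRegistration omitted
	b := map[string]any{}
	defaultLikeOpenAPI(a)
	defaultLikeWebhook(b)
	fmt.Println("OpenAPI-style:", a) // no default applied
	fmt.Println("Webhook-style:", b) // nodeRegistration created with imagePullPolicy
}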

@okozachenko1203

Sorry, I don't have an overview of the entire issue right now. The OpenAPI defaulting is only applied when the parent element (nodeRegistration) exists, while in the webhook it is now already enough if init/joinConfiguration exists.

Does this explain what you're seeing?

Surely it is included in the manifest. Here is how we generate the manifest: https://github.com/vexxhost/magnum-cluster-api/blob/550721dbdb0a5251931f9b9071962b822a3ccbac/magnum_cluster_api/resources.py#L533-L595

@fabriziopandini
Member

/priority awaiting-more-evidence

@fabriziopandini fabriziopandini added the priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. label Apr 11, 2024
@sbueringer
Member

sbueringer commented Apr 12, 2024

Looks like they implemented something like this for CRD OpenAPI validation (if I got it right):

KEP 4008: CRD Validation Ratcheting
This KEP proposes to allow custom resources with failing validations to pass if a patch does not alter any of the invalid fields. Currently, validation of unchanged fields stands as a barrier for both CRD authors and Kubernetes developers. This KEP proposes the CRDValidationRatcheting feature flag, which when enabled allows updates to custom resources that fail validation to succeed, if the validation errors are on unchanged keypaths. This makes it easier to change CRD validations without breaking existing workflows.

@fabriziopandini
Member

triage-party:

/close
We are still not at the point where we can reproduce this, and the issue has not received updates since Aug 23.

@k8s-ci-robot
Contributor

@fabriziopandini: Closing this issue.

In response to this:

triage-party:

/close
We are still not at the point where we can reproduce this, and the issue has not received updates since Aug 23.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@sbueringer
Member

sbueringer commented Apr 16, 2024

Additional note: I think the key is to figure out how it is possible to create a KubeadmControlPlaneTemplate without the imagePullPolicy fields being set. This should be impossible with our defaulting logic.

(If you can give us a reproducible example of how to deploy a KubeadmControlPlaneTemplate without these fields set, we can investigate.)
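One way to gather that evidence is to server-side apply the local manifest with a dry run and inspect what the apiserver — including any mutating webhooks — would actually persist. A rough sketch with controller-runtime; the manifest file name and field owner below are made up:

package main

import (
	"context"
	"fmt"
	"os"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/config"
	"sigs.k8s.io/yaml"
)

func main() {
	ctx := context.Background()

	// Hypothetical local manifest of the KubeadmControlPlaneTemplate.
	raw, err := os.ReadFile("kubeadmcontrolplanetemplate.yaml")
	if err != nil {
		panic(err)
	}
	obj := &unstructured.Unstructured{}
	if err := yaml.Unmarshal(raw, &obj.Object); err != nil {
		panic(err)
	}

	c, err := client.New(config.GetConfigOrDie(), client.Options{})
	if err != nil {
		panic(err)
	}

	// Server-side apply, dry run only: the returned object shows the
	// defaulted state without persisting anything.
	if err := c.Patch(ctx, obj, client.Apply, client.DryRunAll, client.FieldOwner("repro-check")); err != nil {
		panic(err)
	}

	policy, found, _ := unstructured.NestedString(obj.Object,
		"spec", "template", "spec", "kubeadmConfigSpec",
		"initConfiguration", "nodeRegistration", "imagePullPolicy")
	fmt.Printf("imagePullPolicy defaulted: %v (value: %q)\n", found, policy)
}

Note that this only exercises the mutating webhooks on dry-run requests if they are registered with sideEffects: None (or NoneOnDryRun).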
