[BUG] when applying clusters of the same name but for different addons, KubeBlocks crashes #8912

Closed · shanshanying opened this issue Feb 13, 2025 · 0 comments · Fixed by #8913
Labels: kind/bug
shanshanying commented Feb 13, 2025

Versions

Kubernetes: v1.29.2
KubeBlocks: 0.9.3-beta.16
kbcli: 0.9.1

To Reproduce
Steps to reproduce the behavior:

  1. Create an apecloud-mysql cluster:
cat <<EOF | kubectl apply -f -
apiVersion: apps.kubeblocks.io/v1alpha1
kind: Cluster
metadata:
  name: mycluster
  namespace: demo
spec:
  terminationPolicy: Delete
  componentSpecs:
  - name: mysql
    componentDef: apecloud-mysql
    affinity:
      podAntiAffinity: Preferred
      topologyKeys:
      - kubernetes.io/hostname
      tenancy: SharedNode
    tolerations:
    - key: kb-data
      operator: Equal
      value: 'true'
      effect: NoSchedule
    enabledLogs:
    - error
    - general
    - slow
    disableExporter: true
    replicas: 2
    resources:
      limits:
        cpu: '0.5'
        memory: 0.5Gi
      requests:
        cpu: '0.5'
        memory: 0.5Gi
    volumeClaimTemplates:
    - name: data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 20Gi
EOF
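
Before the next step, you can wait for the cluster to report Running. This status check is a hypothetical addition (not part of the original report) and assumes the clusters.apps.kubeblocks.io resource name:

kubectl get clusters.apps.kubeblocks.io mycluster -n demo -w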
  2. When it is running, apply another cluster OF THE SAME NAME, but for a different addon (starrocks-ce):
cat <<EOF | kubectl apply -f -
apiVersion: apps.kubeblocks.io/v1alpha1
kind: Cluster
metadata:
  name: mycluster
  namespace: demo
spec:
  clusterDefinitionRef: starrocks-ce
  terminationPolicy: Delete
  topology: shared-nothing
  tolerations:
    - key: kb-data
      operator: Equal
      value: 'true'
      effect: NoSchedule
  componentSpecs:
    - name: fe
      componentDef: starrocks-ce-fe
      serviceVersion: 3.3.0
      replicas: 1
      resources:
        limits:
          cpu: "1"
          memory: "1Gi"
        requests:
          cpu: "1"
          memory: "1Gi"
      volumeClaimTemplates:
        - name: data # ref clusterDefinition components.containers.volumeMounts.name
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
    - name: be
      componentDef: starrocks-ce-be
      serviceVersion: 3.3.0
      replicas: 1
      resources:
        limits:
          cpu: "1"
          memory: "1Gi"
        requests:
          cpu: "1"
          memory: "1Gi"
      volumeClaimTemplates:
      - name: data
        spec:
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: "20Gi"
EOF
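
One way to watch the controller fail is to tail its logs (a hypothetical check, not from the original report, assuming the default kb-system install with a deployment named kubeblocks):

kubectl logs -n kb-system deploy/kubeblocks -f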

Then KubeBlocks crashes with the following logs:

2025-02-13T02:12:20.671Z	INFO	Observed a panic in reconciler: runtime error: cannot find order for components mysql	{"controller": "cluster", "controllerGroup": "apps.kubeblocks.io", "controllerKind": "Cluster", "Cluster": {"name":"mycluster","namespace":"demo"}, "namespace": "demo", "name": "mycluster", "reconcileID": "1bd53192-c396-4fb5-99b3-b25493eaea6d"}
panic: runtime error: cannot find order for components mysql [recovered]
	panic: runtime error: cannot find order for components mysql

goroutine 932 [running]:
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile.func1()
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.2/pkg/internal/controller/controller.go:116 +0x1a4
panic({0x2115500?, 0x4002fce800?})
	/usr/local/go/src/runtime/panic.go:914 +0x218
github.com/apecloud/kubeblocks/controllers/apps.(*compOrderedOrder).ordered(0x40035ab760, {0x4002fce7d0?, 0x1, 0x1})
	/src/controllers/apps/transformer_cluster_component.go:402 +0x160
github.com/apecloud/kubeblocks/controllers/apps.handleCompsInOrder(0x4002d1db90?, 0x4002d1db90?, 0x0?, {0x2d54cd8, 0x40035ab760})
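
What appears to happen: the second apply replaces the apecloud-mysql spec with a starrocks-ce spec under the same cluster name, so the transformer computes the component order from the starrocks-ce topology (fe, be) while the existing mysql component still has to be reconciled, and (*compOrderedOrder).ordered panics on the missing entry instead of returning an error. The sketch below is hypothetical, not the actual transformer_cluster_component.go code; orderedIndex is an invented helper that only illustrates returning an error instead of panicking.

package main

import "fmt"

// orderedIndex is a hypothetical stand-in for the ordering lookup: it returns
// the position of comp in the topology order, or an error when the component
// is unknown, instead of panicking as the controller did.
func orderedIndex(order []string, comp string) (int, error) {
	for i, name := range order {
		if name == comp {
			return i, nil
		}
	}
	return -1, fmt.Errorf("cannot find order for components %s", comp)
}

func main() {
	// Order derived from the starrocks-ce "shared-nothing" topology of the
	// newly applied spec.
	order := []string{"fe", "be"}

	// "mysql" still exists from the previously applied apecloud-mysql spec.
	if _, err := orderedIndex(order, "mysql"); err != nil {
		// Surfacing the error lets the reconciler requeue or set a condition
		// rather than crashing the whole controller process.
		fmt.Println("reconcile error:", err)
	}
}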