
Add StatefulSet override #143

Closed
ChunyiLyu opened this issue May 27, 2020 · 2 comments · Fixed by #175
@ChunyiLyu
Contributor

Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

Describe the solution you'd like
A clear and concise description of what you want to happen.

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

Additional context
Add any other context or screenshots about the feature request here.

@ChunyiLyu ChunyiLyu added the WIP This issue hasn't been discussed or scheduled. Do not work on it. label May 27, 2020
@mkuratczyk
Collaborator

Results of my testing.

  1. I wanted to use hostNetwork. That was possible to achieve, but it required some YAML I did not expect: there are no changes to selector and serviceName, yet they still need to appear in the StatefulSet spec. Definition used:
```yaml
apiVersion: rabbitmq.pivotal.io/v1beta1
kind: RabbitmqCluster
metadata:
  name: host-network
spec:
  replicas: 3
  rabbitmqClusterOverwriteSpec:
    statefulSetOverwrite:
      spec:
        selector:
          matchLabels:
            app.kubernetes.io/name: host-network
        serviceName: host-network-rabbitmq-headless
        template:
          spec:
            hostNetwork: true
            containers:
            - name: rabbitmq
              ports:
              - containerPort: 4369
                hostPort: 4369
                name: epmd
                protocol: TCP
              - containerPort: 5672
                hostPort: 5672
                name: amqp
                protocol: TCP
              - containerPort: 15672
                hostPort: 15672
                name: http
                protocol: TCP
              - containerPort: 15692
                hostPort: 15692
                name: prometheus
                protocol: TCP
```
  2. I wanted to mount additional volumes in the StatefulSet. That was not possible.
    a. I expected an item in volumeClaimTemplates with a different name to be appended to the default persistence disks, but the changes in the extra-persistence spec were actually applied to the default persistence volume and no additional volume was created.
    b. Changes to the metadata are ignored (e.g. the new labels, foo: bar in the example below).
    c. containers: [] is required. I understand why, but that's something we may want to change.
    Definition used:
```yaml
apiVersion: rabbitmq.pivotal.io/v1beta1
kind: RabbitmqCluster
metadata:
  name: disks
spec:
  replicas: 1
  rabbitmqClusterOverwriteSpec:
    statefulSetOverwrite:
      spec:
        selector:
          matchLabels:
            app.kubernetes.io/name: disks
        serviceName: disks-rabbitmq-headless
        template:
          spec:
            containers: []
        volumeClaimTemplates:
        - apiVersion: v1
          kind: PersistentVolumeClaim
          metadata:
            labels:
              app.kubernetes.io/component: extra-rabbitmq
              app.kubernetes.io/name: disks
              app.kubernetes.io/part-of: pivotal-rabbitmq
              foo: bar
            name: extra-persistence
            namespace: default
          spec:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 5Gi
            volumeMode: Filesystem
```
  3. As a workaround for the volumeClaimTemplates spec being applied to the default PVC rather than creating a new one, I tried adding a second template. That fails with an invalid PVC name - probably because the metadata is ignored as before, but this time there is no default prefix for the PVC. The effect is a StatefulSet that never becomes ready and reports the following event when described:
```
Warning  FailedCreate  13m (x103 over 23h)  statefulset-controller  create Claim -disks-rabbitmq-server-0 for Pod disks-rabbitmq-server-0 in StatefulSet disks-rabbitmq-server failed error: PersistentVolumeClaim "-disks-rabbitmq-server-0" is invalid: metadata.name: Invalid value: "-disks-rabbitmq-server-0": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
```

Definition used:

```yaml
apiVersion: rabbitmq.pivotal.io/v1beta1
kind: RabbitmqCluster
metadata:
  name: disks
spec:
  replicas: 1
  rabbitmqClusterOverwriteSpec:
    statefulSetOverwrite:
      spec:
        selector:
          matchLabels:
            app.kubernetes.io/name: disks
        serviceName: disks-rabbitmq-headless
        template:
          spec:
            containers: []
        volumeClaimTemplates:
        - apiVersion: v1
          kind: PersistentVolumeClaim
          metadata:
            labels:
              app.kubernetes.io/component: extra-rabbitmq
              app.kubernetes.io/name: disks
              app.kubernetes.io/part-of: pivotal-rabbitmq
              foo: bar
            name: extra-persistence
            namespace: default
          spec:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 5Gi
            volumeMode: Filesystem
        - apiVersion: v1
          kind: PersistentVolumeClaim
          metadata:
            labels:
              app.kubernetes.io/component: foo-rabbitmq
              app.kubernetes.io/name: disks
              app.kubernetes.io/part-of: pivotal-rabbitmq
              foo: bar
            name: foo-persistence
            namespace: default
          spec:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
            volumeMode: Filesystem
```
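The naming failure above can be reproduced outside the cluster with the DNS-1123 subdomain regex quoted verbatim in the FailedCreate event. A minimal sketch (`is_valid_pvc_name` is a hypothetical helper for illustration, not operator code):

```python
import re

# DNS-1123 subdomain pattern quoted in the FailedCreate event above.
DNS1123_SUBDOMAIN = r"[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*"

def is_valid_pvc_name(name: str) -> bool:
    # Kubernetes also caps subdomain names at 253 characters.
    return len(name) <= 253 and re.fullmatch(DNS1123_SUBDOMAIN, name) is not None

# The ignored metadata leaves an empty name prefix, so the generated claim
# name starts with '-', which the regex rejects:
print(is_valid_pvc_name("-disks-rabbitmq-server-0"))             # False
print(is_valid_pvc_name("persistence-disks-rabbitmq-server-0"))  # True
```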

@mkuratczyk
Copy link
Collaborator

One more comment - the crd-refactor-quick branch doesn't even install on kind 1.18:

```
kubectl apply -f config/crd/bases
The CustomResourceDefinition "rabbitmqclusters.rabbitmq.pivotal.io" is invalid:
* spec.validation.openAPIV3Schema.properties[spec].properties[rabbitmqClusterOverwriteSpec].properties[ingressServiceOverwrite].properties[spec].properties[ports].items.properties[protocol].default: Required value: this property is in x-kubernetes-list-map-keys, so it must have a default or be a required property
* spec.validation.openAPIV3Schema.properties[spec].properties[rabbitmqClusterOverwriteSpec].properties[statefulSetOverwrite].properties[spec].properties[template].properties[spec].properties[initContainers].items.properties[ports].items.properties[protocol].default: Required value: this property is in x-kubernetes-list-map-keys, so it must have a default or be a required property
* spec.validation.openAPIV3Schema.properties[spec].properties[rabbitmqClusterOverwriteSpec].properties[statefulSetOverwrite].properties[spec].properties[template].properties[spec].properties[containers].items.properties[ports].items.properties[protocol].default: Required value: this property is in x-kubernetes-list-map-keys, so it must have a default or be a required property
```
make: *** [install] Error 1
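For context, the API server rejects these schemas because every key listed in `x-kubernetes-list-map-keys` must either be a required property or carry a default. A hedged sketch of the shape the validator accepts for a `ports` list (paths abbreviated; the exact fix in the operator's generated CRD may differ):

```yaml
# Abbreviated CRD schema fragment: `protocol` is a list-map key, so it
# needs a default for server-side apply to merge items keyed on
# (containerPort, protocol).
ports:
  type: array
  items:
    type: object
    properties:
      containerPort:
        type: integer
      protocol:
        type: string
        default: TCP   # satisfies the "default or required" rule above
    required:
    - containerPort
  x-kubernetes-list-map-keys:
  - containerPort
  - protocol
  x-kubernetes-list-type: map
```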

@ChunyiLyu ChunyiLyu removed the WIP This issue hasn't been discussed or scheduled. Do not work on it. label Jun 25, 2020
@ChunyiLyu ChunyiLyu self-assigned this Jun 25, 2020
ChunyiLyu added a commit that referenced this issue Jul 10, 2020
ChunyiLyu added a commit that referenced this issue Jul 13, 2020
- related to issue #143
- add reasoning about why the CRD won't install in a 1.18 k8s cluster
- move Override KEP into proposals/implemented