What happened:
Using the annotation `reloader.kudo.dev/auto: "true"`, the pod restarts even when the ConfigMap isn't updated.

What you expected to happen:
Only rolling-restart the StatefulSets when a ConfigMap is updated.

How to reproduce it (as minimally and precisely as possible):
- Use the `master` branch of operators and KUDO version `v0.8.0-pre.2`, and install Kafka.
- Watch the pods: after the deployment is finished, `kafka-kafka-0` restarts.
- Looking at the reloader logs, it detected a ConfigMap change:

```
> kubectl logs kudo-controller-manager-0 -n kudo-system reloader
time="2019-11-04T16:16:28Z" level=info msg="Changes detected in 'kafka-metrics-config' of type 'CONFIGMAP' in namespace 'default'"
time="2019-11-04T16:16:28Z" level=info msg="Updated 'kafka-kafka' of type 'StatefulSet' in namespace 'default'"
time="2019-11-04T16:16:28Z" level=info msg="Changes detected in 'kafka-serverproperties' of type 'CONFIGMAP' in namespace 'default'"
time="2019-11-04T16:16:28Z" level=info msg="Updated 'kafka-kafka' of type 'StatefulSet' in namespace 'default'"
```

Anything else we need to know?:
We have multiple controllerrevisions:

```
> kubectl get controllerrevisions.apps
NAME                      CONTROLLER                      REVISION   AGE
kafka-kafka-8c8ff58b8     statefulset.apps/kafka-kafka    2          6m52s
kafka-kafka-cbc597876     statefulset.apps/kafka-kafka    1          6m52s
zk-zookeeper-86667d6569   statefulset.apps/zk-zookeeper   1          9m41s
```

The difference between the controller-revisions:
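One way to capture that difference, assuming the revision names listed above (adjust them to your cluster), is to dump both ControllerRevisions and diff them:

```shell
# Dump both revisions of the kafka-kafka StatefulSet and compare them.
# Revision names are the ones from `kubectl get controllerrevisions.apps`.
kubectl get controllerrevision kafka-kafka-cbc597876 -o yaml > rev1.yaml
kubectl get controllerrevision kafka-kafka-8c8ff58b8 -o yaml > rev2.yaml
diff rev1.yaml rev2.yaml
```

If reloader itself triggered the rollout, the diff would typically show only the annotation it injects into the pod template.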
Environment:
- Kubernetes version (`kubectl version`): v1.15.5
- KUDO version (`kubectl kudo version`): v0.8.0-pre.2
- OS (`uname -a`):
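For reference, a minimal sketch of where the annotation in question sits on the StatefulSet (field values here are illustrative, not copied from the operator):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka-kafka
  annotations:
    # Tells reloader to watch all ConfigMaps/Secrets referenced by this workload
    # and trigger a rolling restart when one of them changes.
    reloader.kudo.dev/auto: "true"
spec:
  serviceName: kafka-svc   # illustrative
  replicas: 3              # illustrative
```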