This repository has been archived by the owner on Apr 8, 2022. It is now read-only.
I'm seeing i/o errors from the Kubewatch pod when it tries to talk to the master node. This is a Kubernetes 1.11.5 installation running locally on CentOS 7. I have tried installing Kubewatch with Helm as well as with kubectl. The pod is running, but it shows the same i/o timeout log messages the previous user reported, and no messages are being sent to Slack.
[root@kube-acitest-3 ~]# kubectl get serviceaccount kubewatch -n monitoring
NAME SECRETS AGE
kubewatch 1 5h
[root@kube-acitest-3 ~]# kubectl get clusterrole kubewatch -n monitoring
NAME AGE
kubewatch 47m
[root@kube-acitest-3 ~]# kubectl get clusterrolebinding kubewatch -n monitoring
NAME AGE
kubewatch 47m
[root@kube-acitest-3 ~]# kubectl logs -f kubewatch kubewatch -n monitoring | more
==> Writing config file...
time="2018-12-16T19:49:44Z" level=info msg="Starting kubewatch controller" pkg=kubewatch-pod
time="2018-12-16T19:49:44Z" level=info msg="Starting kubewatch controller" pkg=kubewatch-service
time="2018-12-16T19:49:44Z" level=info msg="Starting kubewatch controller" pkg=kubewatch-deployment
time="2018-12-16T19:49:44Z" level=info msg="Starting kubewatch controller" pkg=kubewatch-namespace
ERROR: logging before flag.Parse: E1216 19:50:14.042285 1 reflector.go:205] github.com/bitnami-labs/kubewatch/pkg/controller/controller.go:377: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E1216 19:50:14.042418 1 reflector.go:205] github.com/bitnami-labs/kubewatch/pkg/controller/controller.go:377: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E1216 19:50:14.042516 1 reflector.go:205] github.com/bitnami-labs/kubewatch/pkg/controller/controller.go:377: Failed to list *v1beta1.Deployment: Get https://10.96.0.1:443/apis/apps/v1beta1/deployments?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E1216 19:50:14.042591 1 reflector.go:205] github.com/bitnami-labs/kubewatch/pkg/controller/controller.go:377: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E1216 19:50:45.042997 1 reflector.go:205] github.com/bitnami-labs/kubewatch/pkg/controller/controller.go:377: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E1216 19:50:45.046896 1 reflector.go:205] github.com/bitnami-labs/kubewatch/pkg/controller/controller.go:377: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E1216 19:50:45.048370 1 reflector.go:205] github.com/bitnami-labs/kubewatch/pkg/controller/controller.go:377: Failed to list *v1beta1.Deployment: Get https://10.96.0.1:443/apis/apps/v1beta1/deployments?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E1216 19:50:45.050598 1 reflector.go:205] github.com/bitnami-labs/kubewatch/pkg/controller/controller.go:377: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
So there needs to be some retry mechanism in kubewatch to reconnect on timeout. However, there is still a chance of losing some events between reconnections, if that matters.
Hello, any news? I have the same error, for example:
ERROR: logging before flag.Parse: E0109 14:30:07.080518 1 reflector.go:205] github.com/bitnami-labs/kubewatch/pkg/controller/controller.go:377: Failed to list *v1.Pod: Get https://172.20.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 172.20.0.1:443: i/o timeout
[root@kube-acitest-3 ~]# kubectl get pod kubewatch -n monitoring -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    opflex.cisco.com/computed-endpoint-group: '{"policy-space":"kubeacitest","name":"kubernetes|kube-default"}'
    opflex.cisco.com/computed-security-group: '[]'
  creationTimestamp: 2018-12-16T19:49:41Z
  name: kubewatch
  namespace: monitoring
  resourceVersion: "929213"
  selfLink: /api/v1/namespaces/monitoring/pods/kubewatch
  uid: ba92e4d3-016b-11e9-a743-005056863a6e
spec:
  containers:
  - imagePullPolicy: Always
    name: kubewatch
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - name: config-volume
    - name: kubewatch-token-wgwv4
      readOnly: true
  - image: bitnami/kubectl:latest
    imagePullPolicy: Always
    name: proxy
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - name: kubewatch-token-wgwv4
      readOnly: true
  dnsPolicy: ClusterFirst
  nodeName: kube-acitest-4
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: kubewatch
  serviceAccountName: kubewatch
  terminationGracePeriodSeconds: 30
  tolerations:
  - key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - configMap:
      defaultMode: 420
      name: kubewatch
    name: config-volume
  - name: kubewatch-token-wgwv4
    secret:
      defaultMode: 420
      secretName: kubewatch-token-wgwv4
status:
  conditions:
  - lastTransitionTime: 2018-12-16T19:49:41Z
    status: "True"
    type: Initialized
  - lastTransitionTime: 2018-12-16T19:49:46Z
    status: "True"
    type: Ready
  - lastTransitionTime: null
    status: "True"
    type: ContainersReady
  - lastTransitionTime: 2018-12-16T19:49:41Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - image: bitnami/kubewatch:0.0.4
    imageID: docker-pullable://bitnami/kubewatch@sha256:11b7ae4e0a4ac88aaf95411d9778295ba863cf86773c606c0cacfc853960ea7b
    lastState: {}
    name: kubewatch
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2018-12-16T19:49:43Z
  - image: bitnami/kubectl:latest
    imageID: docker-pullable://bitnami/kubectl@sha256:a54bee5a861442e591e08a8a37b28b0f152955785c07ce4e400cb57795ffa30f
    lastState: {}
    name: proxy
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2018-12-16T19:49:45Z
  hostIP: 10.10.51.216
  phase: Running
  podIP: 172.20.0.97
  qosClass: BestEffort
  startTime: 2018-12-16T19:49:41Z