Keep the watch action working all the time #124
Comments
Do you get any error? Did you try to set the timeout_seconds parameter?
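For what it's worth, a minimal sketch of passing timeout_seconds through the watch (the keyword is forwarded to the underlying list call; the 60-second value here is only an illustration):

```python
from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()

w = watch.Watch()
# timeout_seconds is forwarded to the list call, so the API server ends
# the watch after roughly 60s and the for-loop finishes without an error.
for event in w.stream(v1.list_namespace, timeout_seconds=60):
    print(event['type'], event['object'].metadata.name)
```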
I see this happening with one particular cluster, but it works fine with another, so this might be a timeout on something in between, like a load balancer. But it would be cool if the client could reconnect automatically.
+1. Is there a way to store a watch "version number" every time I receive an event that can be used whenever I resume the stream, so that I only get events subsequent to that point? I can tell the stream was closed by looking at the list response's "closed" attribute.
Look at the resource_version parameter.
Thanks for the answer.
The only resourceVersion I see here is the one concerning the specific k8s object. Should I set that value as the resource_version argument?
Every time you get an event, store event['object'].metadata.resource_version into a variable (let's say last_seen_version), and pass it back as the resource_version argument when you restart the stream so you only receive events after that point.
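A minimal sketch of that pattern, assuming the snake_case resource_version keyword of the list functions (last_seen_version is just an illustrative variable name):

```python
from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()

last_seen_version = None
w = watch.Watch()
while True:
    kwargs = {}
    if last_seen_version is not None:
        # Resume from the last observed version so events already seen
        # before the reconnect are not replayed.
        kwargs['resource_version'] = last_seen_version
    for event in w.stream(v1.list_pod_for_all_namespaces, **kwargs):
        last_seen_version = event['object'].metadata.resource_version
        print(event['type'], event['object'].metadata.name)
```

Note that a stored resourceVersion can expire on the server; if the API returns 410 Gone you have to re-list and start again from a fresh version.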
I believe the root cause of this issue is the default request timeout in kube-apiserver: long-running watch requests are closed by the server once they exceed its configured timeout (the --min-request-timeout flag, 1800 seconds by default, sets the lower bound of the randomized timeout).
@lichen2013 nice catch, but regardless of this I think we should get your reconnect PR in. @caesarxuchao, you looked at the timeout issue before; does this mean the API server times out watch calls and our shared informer reconnects? cc @roycaihw
Any update on this? I have this code to watch events:

```python
import json
import os

from kubernetes import client, config, watch

if 'KUBERNETES_PORT' in os.environ:
    config.load_incluster_config()
else:
    config.load_kube_config()

v1 = client.CoreV1Api()
w = watch.Watch()
for event in w.stream(v1.list_event_for_all_namespaces, _request_timeout=60):
    print(json.dumps(event['raw_object']))
```

It runs, but if there are no events for an extended amount of time it then dies with this:

Am I missing something?
Ah, I think maybe my problem is the _request_timeout=60 argument.
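That would fit: as far as I understand, _request_timeout is a client-side urllib3 read timeout, so a stream that sees no data for 60 seconds raises a read timeout even though the watch is healthy, while timeout_seconds is sent to the API server and just ends the watch cleanly. A rough sketch of the server-side variant (the 300-second value is only an illustration):

```python
import json

from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()
w = watch.Watch()

# timeout_seconds is a server-side limit: the API server ends the watch
# after ~300s and the generator just finishes, so it is easy to restart.
# _request_timeout=60, by contrast, is a client-side socket read timeout
# and raises when no bytes arrive for 60s, which an idle cluster will hit.
for event in w.stream(v1.list_event_for_all_namespaces, timeout_seconds=300):
    print(json.dumps(event['raw_object']))
```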
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I got the same issue. Is it fixed, or is there any workaround?
Edited by @mbohlool:
This question led to an action item to add a retry mechanism to the watch class. It should be controlled by a flag and will result in keeping the watch open all the time.
Original post:
Below is how I use client-python in list.py:

```python
from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()
w = watch.Watch()
for event in w.stream(v1.list_persistent_volume_claim_for_all_namespaces):
    print("Event: %s %s" % (event['type'], event['object'].metadata.name))
```
When I run the script with the command "python list.py", it shows the events normally; however, it exits automatically after several minutes.
Does anybody know how I could keep this watch action working all the time?
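For anyone hitting this today, below is a rough keep-alive sketch for the snippet above. It is not the built-in retry mechanism mentioned in the edit at the top, just an illustrative outer loop; the 300-second server timeout, the 1-second pause, and the 410 handling are all assumptions:

```python
import time

from kubernetes import client, config, watch
from kubernetes.client.rest import ApiException

config.load_kube_config()
v1 = client.CoreV1Api()
w = watch.Watch()

resource_version = None
while True:
    kwargs = {'timeout_seconds': 300}
    if resource_version is not None:
        kwargs['resource_version'] = resource_version
    try:
        for event in w.stream(
                v1.list_persistent_volume_claim_for_all_namespaces, **kwargs):
            resource_version = event['object'].metadata.resource_version
            print("Event: %s %s" % (event['type'], event['object'].metadata.name))
        # Reaching here means the server closed the watch after its timeout;
        # the while-loop reconnects using the last seen resourceVersion.
    except ApiException as e:
        if e.status == 410:
            # The stored resourceVersion expired; re-list from scratch.
            # (Depending on the client version this condition may instead
            # arrive as an ERROR event rather than an exception.)
            resource_version = None
        else:
            raise
    time.sleep(1)
```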