Add tracing in etcd server #11166
Comments
I think @YoyinZyc has already started working on this feature.
Could you provide more details about the request you made? Basically, tracing is enabled for range, put and compact requests. Btw, it works only when you enable …
@YoyinZyc Got it, thanks a lot. :) In my case, etcd is used as the Kubernetes backend storage, so range and put operations are the bigger concern.
Today, the etcd server emits a generic warning when a request takes too long to be applied. If a range request takes over 100ms to finish, the server generates a log message like this:
W | etcdserver: read-only range request "key:\"foo\" " with result "range_response_count:0 size:5" took too long (1.310910984s) to execute
When this happens, it is usually not easy to track down the exact cause. Multiple OSS issues have been opened about this, in both the etcd and Kubernetes repos. I think the most interesting request type is range, because: A) it is usually served frequently, and B) a client can ask for a lot of keys in a single request, which often takes a very long time to finish.
When a range request takes too long, it would be good to know the time spent on each step of the request lifecycle. Example steps could be:
At the same time, we need to make sure:
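Below is a minimal, self-contained Go sketch of the per-step tracing idea described above: each stage of a request records a timestamp, and the breakdown is logged only when the total time crosses a threshold, mirroring the existing "took too long" warning. The names (Trace, Step, LogIfLong) and the example steps are illustrative assumptions, not etcd's actual implementation.

```go
package main

import (
	"fmt"
	"log"
	"time"
)

// step records one stage of the request lifecycle and when it finished.
type step struct {
	msg  string
	time time.Time
}

// Trace accumulates timestamps for the steps of a single request.
type Trace struct {
	operation string
	start     time.Time
	steps     []step
}

// New starts a trace for one request (e.g. a range request).
func New(operation string) *Trace {
	return &Trace{operation: operation, start: time.Now()}
}

// Step marks the completion of one stage of the request.
func (t *Trace) Step(msg string) {
	t.steps = append(t.steps, step{msg: msg, time: time.Now()})
}

// LogIfLong prints the per-step breakdown, but only when the request
// exceeded the threshold, so the common fast path stays quiet.
func (t *Trace) LogIfLong(threshold time.Duration) {
	total := time.Since(t.start)
	if total < threshold {
		return
	}
	log.Printf("%q request took too long (%v) to execute", t.operation, total)
	prev := t.start
	for _, s := range t.steps {
		log.Printf("  step %q took %v", s.msg, s.time.Sub(prev))
		prev = s.time
	}
}

func main() {
	// Simulate a slow range request with two instrumented stages
	// (the stage names are hypothetical examples).
	tr := New("range")
	time.Sleep(50 * time.Millisecond) // e.g. waiting for the read index
	tr.Step("waited for read index")
	time.Sleep(120 * time.Millisecond) // e.g. iterating the key range in the backend
	tr.Step("ranged over backend keys")
	tr.LogIfLong(100 * time.Millisecond)
	fmt.Println("done")
}
```

Logging the breakdown only when the threshold is crossed keeps the overhead and log volume low for normal requests while still answering "where did the time go?" for the slow ones.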