Add more Prometheus metrics #3682
Comments
Can someone with rights mark this as 'good first issue'? :)
I'm interested in RAM - per kernel and total. Not sure if that's already available though?
There's some light conversation around RAM at the kernel level in jupyter/jupyter#264, though it's at a spec level (especially since the actual kernel may be several child processes deep). @ivanov would probably enjoy having a collaborator on the Python side -- I'm looking forward to the UI portion of using it. The notebook server can use it as well, since it has access to the messages as it transports them from ZeroMQ to WebSocket.
@dhirschfeld @rgbkrk we could possibly also do it from the default Kernel Manager, since it is just spawning local processes and knows how to collect metrics for them (and their children). This lets other kernel managers report their own metrics as they wish, and works across all kernels without any extra work. It would be complementary to jupyter/jupyter#264.
Hi @yuvipanda, if no one is working on this, then I would like to take it up. I would like to start with the very first one for now, but I have a few doubts: do we want the number of kernels running at the moment the API is called, or should we keep collecting them over the whole period since the notebook server started? Please let me know. CC @Madhu94
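The Kernel-Manager approach described above could be sketched roughly as follows. This is a minimal, hypothetical helper (the function name `kernel_memory_bytes` is made up, and it assumes the manager exposes the kernel's top-level PID); it uses `psutil` to walk the process tree and sum resident memory across the kernel and its children:

```python
import psutil


def kernel_memory_bytes(pid):
    """Return total RSS (bytes) for a kernel process and all its children.

    Hypothetical sketch: the default kernel manager spawns the kernel as a
    local process, so given that process's PID we can walk the child tree
    (kernels may be several child processes deep) and sum resident memory.
    """
    try:
        proc = psutil.Process(pid)
        procs = [proc] + proc.children(recursive=True)
        return sum(p.memory_info().rss for p in procs)
    except psutil.NoSuchProcess:
        # Kernel already exited; report zero rather than raising.
        return 0
```

A kernel manager could call something like this periodically for each running kernel and publish the results, while remote or container-based kernel managers would report their own numbers through whatever mechanism fits them.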
Guys, any update on this?
Hey @GoelJatin, I'm up to work on this with you.
As I think this will be the most useful metric to add, may I take it? :) Will try to do it from the default Kernel Manager, as mentioned by @yuvipanda |
Hey @manuhortet , sure go ahead. No concerns from my end. :) |
Hey, I am a first-timer looking for tasks to do too. I found this issue pretty interesting. Anything I can help with? |
@manuhortet Have you been able to make any progress on adding that metric? I'm a first-timer as well and would love to help out! |
Hey! Honestly, I've been delaying some open-source contributions to free up time for personal projects. I'm sorry I've delayed you two by doing that! You can take this issue if you want to; in fact, feel free to ask me if you face any problems. Good luck! @konnermacias @LiryChen
Alright, thanks! I'll look into the issue and may ask you a few questions to understand the problem!
@manuhortet Hey, I would be glad to try and help too. |
@Hyaxia of course, you can. Choose some metric you feel relevant from the first comment on this issue and go for it. |
Sorry, I don't think I will continue to work on this due to the limited time I have besides school :( I would like to pick something up in the future once I have more free time! |
A few questions. Second, is anyone still working on the RAM per kernel? Third, what does number 6 mean? Thanks.
For number 4, the last action taken and its timestamp would be the logical thing IMO. I can't really help with the explanation for number 6 - some help here, @yuvipanda?
@manuhortet I apologize, school has picked up and I was planning on working on it in a week or two when everything dies down. @Hyaxia feel free to go for it!
OK then, I will start working on the RAM-per-kernel metric in a few days.
Hello. This issue is still open and there has been no status change since October. Can we get the current status on this, please?
Anything I can help with?
So I guess I'm the last one who was working on it. TL;DR - go to jupyter/jupyter_client#407; you should implement some kind of generic way to expose different kernel statistics. Good luck.
Hi, can I try this? I'm looking for some beginner-friendly issues.
Hey! Not sure if anyone is still looking into this? I would like to work on it.
Hi @sudo-k-runner - thank you for your interest. Since the primary notebook server will eventually be based on the jupyter server project, you will likely find better traction for metrics gathering via the jupyter telemetry project, which jupyter server plans to utilize.
Hi, can I work on this? I'm new so can someone guide me? |
Hi @dhivyasreedhar - please see the previous comment. This repository is currently focused on bug fixes and security issues. |
Now that #3490 has been merged, we should add more prometheus metrics to the notebook server!
Some ideas for metrics to add...
I'm sure there's more that I don't know of!
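For anyone picking this up, exposing a new metric with the official `prometheus_client` library looks roughly like this. This is a minimal sketch, not the project's actual code: the metric name `jupyter_running_kernels` and the `kernel_type` label are hypothetical, and real metrics would follow the conventions established in #3490.

```python
from prometheus_client import CollectorRegistry, Gauge, generate_latest

# A private registry keeps this example self-contained; the notebook
# server would normally use the library's default global registry.
registry = CollectorRegistry()

# Hypothetical metric: number of currently running kernels, by kernel type.
RUNNING_KERNELS = Gauge(
    'jupyter_running_kernels',
    'Number of currently running kernels',
    ['kernel_type'],
    registry=registry,
)

# The kernel manager would update this as kernels start and stop.
RUNNING_KERNELS.labels(kernel_type='python3').set(2)

# The /metrics endpoint serves the text exposition format.
output = generate_latest(registry).decode()
```

Counters and Histograms (e.g. for request durations) follow the same pattern; the handler backing `/metrics` just returns `generate_latest(...)` with the appropriate content type.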