Metrics: Counter expiring too soon #2333
Comments
I think this is mostly how Explore will show metrics. Have you tried using this query in a dashboard? You can even set null values to show as zero in the panel options if you look around.
I'm mostly surprised that it expires so fast. As you can see, it expires within less than 30 minutes. I actually thought of the counter as something long-lived; I don't think it would consume too many resources. Could it at least be tunable?
Sorry, I didn't realize this was a counter.
I realize this is not well documented; see https://github.com/grafana/loki/blob/master/docs/clients/promtail/stages/metrics.md. It applies to all metrics.
You could try 6h here, maybe? It depends on how variable your stream is; from what I can see in the labels, even 30d would work.
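For concreteness, here is a rough sketch of what setting a longer idle timeout on a counter in the metrics stage could look like. The metric name, the `source` label, and the 24h value are illustrative only, not taken from this thread; per the linked docs, `max_idle_duration` defaults to 5m.

```yaml
- metrics:
    error_lines_total:            # hypothetical metric name
      type: Counter
      description: "Number of ERROR log lines seen"
      source: level               # assumes an earlier regex stage extracted `level`
      # Keep the series registered even if no matching line arrives for a day;
      # without this, the counter is removed after the default 5m of inactivity.
      max_idle_duration: 24h
      config:
        value: ERROR              # increment only when the extracted level equals ERROR
        action: inc
```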
Yeah, it helped. Thanks |
I'm using a simple stage that increments a counter on specific log lines (WARN, ERROR, INFO, etc.). Recently I've noticed that some counters just disappear after a while. That could be a problem for rare lines: ERROR lines, for example, are rare, maybe one every few hours, so when I look at them in Grafana I see just a couple of points during the day, not even connected into a line.
I'm scraping promtail metrics from prometheus-server. But if I curl the /metrics endpoint of promtail itself, I also don't see those counters after a while.
Did I miss an expiry option here? I believe it shouldn't expire so fast.
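For illustration, a pipeline along the lines described above might look like the sketch below; the regex and metric names are assumptions, not the actual config. Nothing here sets `max_idle_duration`, so each counter falls back to the stage's default idle timeout and is dropped once no matching lines arrive for a while.

```yaml
pipeline_stages:
  - regex:
      # Extract the log level from each line into the extracted map.
      expression: '(?P<level>INFO|WARN|ERROR)'
  - metrics:
      error_lines_total:          # one counter per level of interest
        type: Counter
        description: "ERROR log lines"
        source: level
        config:
          value: ERROR            # increment only when level == ERROR
          action: inc
      warn_lines_total:
        type: Counter
        description: "WARN log lines"
        source: level
        config:
          value: WARN
          action: inc
```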