Hey, I bumped the Grafana version from 8.1.6 -> 8.3.4 in a PR yesterday. #1013
Since upgrading our loki-stack, we've been getting "too many outstanding requests" from Loki in all of our dashboards.
I checked the Grafana logs, and it seems like Grafana is now making an excessive number of (duplicate?) calls to Loki:
level=info ts=2022-02-03T16:23:01.01914115Z caller=metrics.go:92 org_id=fake latency=fast query="sum(count_over_time({namespace=\"<namespace>\", app=\"/<app-name>\"} |= \"ERROR\"[5m]))" query_type=metric range_type=range length=10s step=1s duration=9.30594ms status=200 limit=1000 returned_lines=0 throughput=0B total_bytes=0B
level=info ts=2022-02-03T16:23:01.018645616Z caller=metrics.go:92 org_id=fake latency=fast query="sum(count_over_time({namespace=\"<namespace>\", app=\"/<app-name>\"} |= \"ERROR\"[5m]))" query_type=metric range_type=range length=10s step=1s duration=8.94039ms status=200 limit=1000 returned_lines=0 throughput=0B total_bytes=0B
level=info ts=2022-02-03T16:23:01.018259185Z caller=metrics.go:92 org_id=fake latency=fast query="sum(count_over_time({namespace=\"<namespace>\", app=\"/<app-name>\"} |= \"ERROR\"[5m]))" query_type=metric range_type=range length=10s step=1s duration=8.785415ms status=200 limit=1000 returned_lines=0 throughput=0B total_bytes=0B
level=info ts=2022-02-03T16:23:01.015627852Z caller=metrics.go:92 org_id=fake latency=fast query="sum(count_over_time({namespace=\"<namespace>\", app=\"/<app-name>\"} |= \"ERROR\"[5m]))" query_type=metric range_type=range length=10s step=1s duration=5.693907ms status=200 limit=1000 returned_lines=0 throughput=0B total_bytes=0B
level=info ts=2022-02-03T16:23:01.013754236Z caller=metrics.go:92 org_id=fake latency=fast query="sum(count_over_time({namespace=\"<namespace>\", app=\"/<app-name>\"} |= \"ERROR\"[5m]))" query_type=metric range_type=range length=10s step=1s duration=3.539043ms status=200 limit=1000 returned_lines=0 throughput=0B total_bytes=0B
level=info ts=2022-02-03T16:23:01.012836684Z caller=metrics.go:92 org_id=fake latency=fast query="sum(count_over_time({namespace=\"<namespace>\", app=\"/<app-name>\"} |= \"ERROR\"[5m]))" query_type=metric r
I tried the limits-related fixes mentioned in grafana/loki#4613.
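For context, the limits-related knobs usually suggested for this error look roughly like the sketch below. This is not the exact config from that thread: the values are placeholders, it assumes the loki-stack chart passes config through `loki.config`, and the placement of `split_queries_by_interval` has moved between `query_range` and `limits_config` across Loki releases, so check the config reference for the Loki version your chart ships.

```yaml
# Sketch only: placeholder values, key placement depends on the Loki version.
loki:
  config:
    frontend:
      # Per-tenant queue size on the query frontend; the default is small and
      # easy to exhaust when a dashboard fans out many range queries at once.
      max_outstanding_per_tenant: 2048
    query_range:
      # Query sharding multiplies one dashboard query into many sub-queries.
      parallelise_shardable_queries: false
    limits_config:
      # How range queries are split into per-interval sub-queries.
      split_queries_by_interval: 24h
      # Cap on how many sub-queries run in parallel per query.
      max_query_parallelism: 32
```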
I also tried deploying the new loki-stack to a local minikube, and I saw the same excessive calls to Loki there too.
Any ideas?
I realize this issue played up for you a while ago, but did you ever end up finding a workaround? I'm running into the same issue.
Hey, I don't recall finding a solution. I think we may have moved away from the loki-stack chart.
Hello everybody, does anyone have a fix for this? I'm facing the same issue in July 2024.