
[loki-stack] too many outstanding requests #1016

Open
akromish opened this issue Feb 3, 2022 · 3 comments

Comments

@akromish
Contributor

akromish commented Feb 3, 2022

Hey, I bumped the Grafana version from 8.1.6 -> 8.3.4 in a PR yesterday (#1013).

Since upgrading our loki-stack, we've been getting "too many outstanding requests" from loki in all of our dashboards.

I checked out the grafana logs and it seems like grafana is making an excessive number of (duplicate?) calls to loki now:

```
level=info ts=2022-02-03T16:23:01.01914115Z caller=metrics.go:92 org_id=fake latency=fast query="sum(count_over_time({namespace=\"<namespace>\", app=\"/<app-name>\"} |= \"ERROR\"[5m]))" query_type=metric range_type=range length=10s step=1s duration=9.30594ms status=200 limit=1000 returned_lines=0 throughput=0B total_bytes=0B
level=info ts=2022-02-03T16:23:01.018645616Z caller=metrics.go:92 org_id=fake latency=fast query="sum(count_over_time({namespace=\"<namespace>\", app=\"/<app-name>\"} |= \"ERROR\"[5m]))" query_type=metric range_type=range length=10s step=1s duration=8.94039ms status=200 limit=1000 returned_lines=0 throughput=0B total_bytes=0B
level=info ts=2022-02-03T16:23:01.018259185Z caller=metrics.go:92 org_id=fake latency=fast query="sum(count_over_time({namespace=\"<namespace>\", app=\"/<app-name>\"} |= \"ERROR\"[5m]))" query_type=metric range_type=range length=10s step=1s duration=8.785415ms status=200 limit=1000 returned_lines=0 throughput=0B total_bytes=0B
level=info ts=2022-02-03T16:23:01.015627852Z caller=metrics.go:92 org_id=fake latency=fast query="sum(count_over_time({namespace=\"<namespace>\", app=\"/<app-name>\"} |= \"ERROR\"[5m]))" query_type=metric range_type=range length=10s step=1s duration=5.693907ms status=200 limit=1000 returned_lines=0 throughput=0B total_bytes=0B
level=info ts=2022-02-03T16:23:01.013754236Z caller=metrics.go:92 org_id=fake latency=fast query="sum(count_over_time({namespace=\"<namespace>\", app=\"/<app-name>\"} |= \"ERROR\"[5m]))" query_type=metric range_type=range length=10s step=1s duration=3.539043ms status=200 limit=1000 returned_lines=0 throughput=0B total_bytes=0B
level=info ts=2022-02-03T16:23:01.012836684Z caller=metrics.go:92 org_id=fake latency=fast query="sum(count_over_time({namespace=\"<namespace>\", app=\"/<app-name>\"} |= \"ERROR\"[5m]))" query_type=metric r
```

I tried the fixes mentioned in grafana/loki#4613 regarding limits.
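(For reference, the limits-related overrides from that issue look roughly like the following when set through the loki-stack chart's `loki.config` values. This is a sketch rather than exactly what we applied: the key names are taken from the Loki config docs and the numbers are placeholders.)

```yaml
# Sketch of the queueing/limits overrides discussed in grafana/loki#4613,
# expressed as loki-stack chart values. Numbers are placeholders, not the
# exact values we tried.
loki:
  config:
    query_scheduler:
      max_outstanding_requests_per_tenant: 2048   # raise the per-tenant query queue
    frontend:
      max_outstanding_per_tenant: 2048
    query_range:
      parallelise_shardable_queries: false        # fewer split subqueries per dashboard query
    limits_config:
      split_queries_by_interval: 0                # disable query splitting (or raise the interval, e.g. 24h)
```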

I also tried deploying the new loki-stack to a local minikube and saw the same excessive calls to loki there too.
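(For anyone trying to reproduce this locally, a minimal values file along these lines is enough to stand the stack up on minikube. It's a sketch, not the exact file we used; the key names are assumed from the loki-stack chart's defaults, and 8.3.4 is just the Grafana tag mentioned above.)

```yaml
# Minimal loki-stack values for a local minikube reproduction (sketch).
loki:
  enabled: true
promtail:
  enabled: true
grafana:
  enabled: true
  image:
    tag: 8.3.4   # the Grafana version the chart was bumped to in #1013
```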

Any ideas?

@Sleepy-GH

I realize this issue played up for you a while ago, but did you ever end up finding a workaround? I'm running into the same issue.

@akromish
Contributor Author

Hey, I don't recall finding a solution. I think we may have moved away from the loki-stack chart.

@diegocejasprieto

Hello everybody, does anyone have a fix for this? I'm facing the same issue in July 2024.
