
Bug: query_range interval parameter broken when query contains filter expressions #5613

Closed
jfolz opened this issue Mar 13, 2022 · 1 comment · Fixed by #5622
Assignees
chaudum
Labels
type/bug Something is not working as expected

Comments

@jfolz
Contributor

jfolz commented Mar 13, 2022

Describe the bug
If the LogQL query contains a filter expression (|=, |~, !=, ...), calls to the query_range API return the same result regardless of whether the interval parameter is set and regardless of its value: no interval, interval=60s, and interval=60000s all return the same lines.
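For reference, these are the LogQL line filter operators that trigger the behavior, written as they would be passed in the query parameter of query_range; the stream selector {job="myapp"} is a made-up example, not taken from the affected setup:

 --data-urlencode 'query={job="myapp"} |= "error"'   # keep lines containing "error"
 --data-urlencode 'query={job="myapp"} != "debug"'   # drop lines containing "debug"
 --data-urlencode 'query={job="myapp"} |~ ".*"'      # keep lines matching the regex (here: every line)
 --data-urlencode 'query={job="myapp"} !~ "^$"'      # drop lines matching the regex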

To Reproduce
Steps to reproduce the behavior:

  1. Call the query_range API with a high-volume query (several lines per second) and interval=60s.
  2. Add a filter expression that matches every line (|~ ".*") and repeat the call.
  3. Observe that the two calls return different log lines, even though the filter matches every line and the results should therefore be identical. A sketch for checking the timestamp spacing follows this list.
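A minimal sketch for making that observation measurable, assuming the same tenant/query placeholders and port as in the terminal output further below. Loki returns entries newest-first by default, so the gaps come out positive; jq converts the nanosecond timestamps to floats, which is precise enough to tell ~60 s gaps from near-zero ones:

curl -s -G -H 'X-Scope-OrgID: <tenant>' \
 --data-urlencode 'start=1646843100000000000' \
 --data-urlencode 'end=1646903100000000000' \
 --data-urlencode 'interval=60s' \
 --data-urlencode 'limit=5000' \
 --data-urlencode 'query=<query> |~ ".*"' \
 'http://127.0.0.1:3117/loki/api/v1/query_range' \
 | jq '[.data.result[0].values[][0] | tonumber] as $t
       | [range(1; $t | length) as $i | ($t[$i - 1] - $t[$i]) / 1e9]'
# If interval=60s were honoured, every printed gap would be roughly 60 seconds or more;
# with the filter expression in the query the gaps are close to zero instead.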

Expected behavior
Interval should work as documented (i.e., return approximately one line per interval), regardless of the query.

Environment:

  • Infrastructure: bare-metal
  • Deployment tool: systemd service
$ loki-linux-amd64 -version
loki, version 2.4.2 (branch: HEAD, revision: 525040a32)
  build user:       root@5d9e7a4c92e6
  build date:       2022-01-12T16:48:53Z
  go version:       go1.16.2
  platform:         linux/amd64

Screenshots, Promtail config, or terminal output
As the examples below show, without the filter expression the first two returned timestamps are ~74 seconds apart. Once the filter is added to the query, the returned lines are only ~4000 ns apart. Removing the interval parameter while the filter is in place yields the same result.

$ curl -s -G -H 'X-Scope-OrgID: <tenant>' \
 --data-urlencode 'start=1646843100000000000' \
 --data-urlencode 'end=1646903100000000000' \
 --data-urlencode 'interval=60s' \
 --data-urlencode 'limit=5000' \
 --data-urlencode 'query=<query>' \
 'http://127.0.0.1:3117/loki/api/v1/query_range' | jq '.data.result[0].values [0,1]'
[
  "1646903089664581496",
  "<result 0>"
]
[
  "1646903015429412821",
  "<result 1>"
]

$ curl -s -G -H 'X-Scope-OrgID: <tenant>' \
 --data-urlencode 'start=1646843100000000000' \
 --data-urlencode 'end=1646903100000000000' \
 --data-urlencode 'interval=60s' \
 --data-urlencode 'limit=5000' \
 --data-urlencode 'query=<query> |~ ".*"' \
 'http://127.0.0.1:3117/loki/api/v1/query_range' | jq '.data.result[0].values [0,1]'
[
  "1646903089664581496",
  "<result 0>"
]
[
  "1646903089664577339",
  "<result 1>"
]

$ curl -s -G -H 'X-Scope-OrgID: <tenant>' \
 --data-urlencode 'start=1646843100000000000' \
 --data-urlencode 'end=1646903100000000000' \
 --data-urlencode 'limit=5000' \
 --data-urlencode 'query=<query> |~ ".*"' \
 'http://127.0.0.1:3117/loki/api/v1/query_range' | jq '.data.result[0].values [0,1]'
[
  "1646903089664581496",
  "<result 0>"
]
[
  "1646903089664577339",
  "<result 1>"
]
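The difference between the first two timestamps in each response can be checked directly with shell arithmetic (these nanosecond values fit in 64-bit integers):

# without the filter: ~74.2 s between the first two returned lines
$ echo $(( 1646903089664581496 - 1646903015429412821 ))
74235168675
# with the filter: ~4.2 µs between the first two returned lines
$ echo $(( 1646903089664581496 - 1646903089664577339 ))
4157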
@chaudum
Contributor

chaudum commented Mar 14, 2022

Hey @jfolz, thanks for reporting the bug. I am able to reproduce the issue and am working on a fix.

@chaudum chaudum self-assigned this Mar 14, 2022
@slim-bean slim-bean added and then removed the backport release-2.5.x label (Tag a PR with this label to create a PR which cherry-picks it into the release-2.5.x branch) Apr 7, 2022
@chaudum chaudum added the type/bug (Something is not working as expected) label Jun 14, 2023