
rate limit exceeded #5714

Closed
LinTechSo opened this issue Mar 27, 2022 · 2 comments

Comments

@LinTechSo
Contributor

Describe the bug

I get a rate-limit error from my log shipper (Vector) while it pushes transformed logs to the Loki endpoint:

vector_core::stream::driver: Service call failed. error=ServerError { code: 429 } request_id=5020

To Reproduce

Loki 2.4.2
s3 minio backend storage

Expected behavior

I'd like to understand what exactly this error means, and how to avoid it. :)

Environment:

Kubernetes 1.23
using the Loki Helm chart

Loki limits config

  auth_enabled: true
  ingester:
    chunk_idle_period: 3m
    chunk_block_size: 262144
    chunk_retain_period: 1m
    max_transfer_retries: 0
    wal:
      dir: /data/loki/wal
    lifecycler:
      ring:
        kvstore:
          store: inmemory
        replication_factor: 1
  limits_config:
    enforce_metric_name: false
    reject_old_samples: true
    reject_old_samples_max_age: 168h
    retention_period: 24h
  schema_config:
    configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: s3
      schema: v11
      index:
        prefix: index_
        period: 24h
  server:
    http_listen_port: 3100
  storage_config:
    boltdb_shipper:
      active_index_directory: /data/loki/boltdb-shipper-active
      cache_location: /data/loki/boltdb-shipper-cache
      cache_ttl: 24h
      shared_store: s3
    aws:
      bucketnames: loki
      endpoint: loki-minio.default.svc:9000
      access_key_id: accesskeyid
      secret_access_key: secretaccessid
      s3forcepathstyle: true
      insecure: true
  chunk_store_config:
    max_look_back_period: 0s
  table_manager:
    retention_deletes_enabled: false
  compactor:
    retention_delete_delay: 2h
    retention_delete_worker_count: 150
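
(Editor's note, not part of the original report: on the push path, Loki's HTTP 429 responses typically come from the per-tenant ingestion limits in `limits_config`, which default to roughly 4 MB/s with a 6 MB burst. A sketch of the settings that are commonly raised to avoid them is below; the values are illustrative assumptions, not tuned recommendations.)

```yaml
  limits_config:
    # Per-tenant ingestion limits (illustrative values, not recommendations).
    ingestion_rate_strategy: global   # "local" would apply the limit per distributor
    ingestion_rate_mb: 8              # average ingest rate per tenant, in MB/s
    ingestion_burst_size_mb: 16       # allowed burst above the average rate
    # A single very hot stream can also trigger 429s via per-stream limits.
    per_stream_rate_limit: 5MB
    per_stream_rate_limit_burst: 15MB
```
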
@LinTechSo
Contributor Author

After searching about this issue, I found and added the following config, but nothing changed:

    query_range:
      parallelise_shardable_queries: false
    frontend:
      max_outstanding_per_tenant: 10240
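
(Editor's note: the snippet above tunes the query frontend, which governs queries rather than ingestion, so it would not affect a 429 returned on the push path. An alternative is to smooth the write side on the shipper. A hedged sketch of a Vector `loki` sink, assuming Vector's YAML config format; the endpoint, labels, and values are illustrative assumptions.)

```yaml
sinks:
  loki:
    type: loki
    inputs: ["my_transform"]              # hypothetical upstream component
    endpoint: http://loki-gateway:3100    # hypothetical endpoint
    labels:
      job: vector
    encoding:
      codec: json
    # Smaller, more frequent batches smooth out bursts against Loki's rate limit.
    batch:
      max_bytes: 1048576   # ~1 MB batches
      timeout_secs: 2
    request:
      concurrency: adaptive   # back off automatically on 429 responses
      retry_attempts: 10
```
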

@dannykopping
Contributor

Thank you for your question / support request.
We try to keep GitHub issues strictly for bug reports and feature requests.

You may submit questions and support requests in any of the following ways:

I'm closing this issue, but please feel free to reach out in any of the channels listed above.
