ingestion rate limit exceeded #1923
Comments
The message is describing which request triggered the rate limiter. Likely this is because previous batches sent from promtail consumed the remainder of the ingestion budget. If you aren't seeing these regularly, it's OK - promtail is designed to handle backoffs and continue ingestion. If these messages are common, increase the ingestion limits in limits_config: https://github.com/grafana/loki/tree/master/docs/configuration#limits_config
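For reference, a minimal sketch of the kind of override being suggested here (the values are illustrative, not recommendations, and the exact option names have shifted between Loki versions, so check the linked docs for the version you run):

```yaml
limits_config:
  ingestion_rate_mb: 16        # hypothetical value: per-tenant average ingestion rate, in MB per second
  ingestion_burst_size_mb: 32  # hypothetical value: how far a single burst may exceed the average rate
```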
I had this problem fairly often during my tests and the initial setup of promtail. Probably because it was indexing all these huge system logfiles at the same time. Thanks for your help!
I added the following config:
Again, logstash shows the same error and stops sending logs to Loki:
Any solution?
If I were you, I would bring the ingestion rate and size down to something below 100. Your current settings allow for 1 GB of data per query. That sounds a bit too much...
Hi, same issue. I had this problem during my storage migration from filesystem to MinIO S3 in Loki 2.4.2. I got: "vector_core::stream::driver: Service call failed. error=ServerError { code: 429 } request_id=5020". Any updates?
Hi @LinTechSo, I have never seen this specific message. From your post, I don't know if you are still experiencing these issues. If you are, you might want to introduce a 'holdoff' mechanism in your migration script. Basically, give Loki some time to breathe now and then: wait a couple of seconds before sending the next batch of data.
Thanks, @yakob-aleksandrovich. Would you please explain more about what I should do?
Depending on how you feed your data into Loki, this may or may not be possible.
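If the data is pushed by Promtail, as in the original report, the client already retries 429s with an exponential backoff, and that holdoff can be tuned in its clients section. A minimal sketch, assuming a standard push endpoint (URL and values are illustrative):

```yaml
clients:
  - url: http://loki:3100/loki/api/v1/push
    backoff_config:
      min_period: 500ms   # initial wait before retrying a failed or rate-limited batch
      max_period: 5m      # upper bound on the exponential backoff
      max_retries: 10     # batches are dropped after this many failed attempts
```

For a hand-rolled migration script, the equivalent is simply sleeping between batches and retrying when Loki answers with HTTP 429.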
When this happened to me, I checked the
Here is my experimental config, which seemed to help get rid of the 429 errors. Might come in handy:
promtail inspect logs:
But we have only 6 labels: [category, filename, namespace, nodename, pod, type]. Loki limits_config:
limits_config:
enforce_metric_name: false
max_cache_freshness_per_query: 10m
reject_old_samples: true
reject_old_samples_max_age: 168h
split_queries_by_interval: 15m
per_stream_rate_limit: 512M
cardinality_limit: 200000
ingestion_burst_size_mb: 1000
ingestion_rate_mb: 10000
max_entries_limit_per_query: 1000000
max_global_streams_per_user: 10000
max_streams_per_user: 0
max_label_value_length: 20480
max_label_name_length: 10240
max_label_names_per_series: 300
ingester logs:
My total log size is less than 150G over 30 days. Why does it return "429 Maximum active stream limit exceeded, reduce the number of active streams"?
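For what it's worth, the active-stream limits count unique combinations of label values, not label names, so six label names can still fan out into many streams when high-cardinality values such as filename or pod churn. A minimal sketch of the two knobs behind that 429 (illustrative values; see the limits_config docs for the defaults of your version):

```yaml
limits_config:
  max_streams_per_user: 0             # per-tenant limit enforced locally on each ingester; 0 disables it
  max_global_streams_per_user: 50000  # cluster-wide per-tenant limit that triggers the "Maximum active stream limit exceeded" error
```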
Describe the bug
I'm getting the following lines in loki when sending logs from promtail (using static_config to scrape logfiles):
I don't quite understand this line, or is it misleading?
It says "adding 311 lines for a total size of 102169 bytes". But the total size of 102169 bytes is less than the ingestion limit of 8388608 bytes.
Or does it mean that it tries to store 311 * 102169 = 31,774,559 bytes of data, thus exceeding the ingestion rate limit?
To Reproduce
Steps to reproduce the behavior:
Expected behavior
I'd like to understand what this error exactly means.
And also how to avoid it. :)
Environment:
Screenshots, Promtail config, or terminal output
Loki limits config:
Promtail loki-batch-size is set to default.