From a399d21faaa6bab1f6f5b0fa5cd960291ae3cebd5 Mon Sep 17 00:00:00 2001
From: Fae Charlton
Date: Thu, 5 Nov 2020 14:49:24 -0500
Subject: [PATCH] [libbeat] Document the Kafka output backoff settings

---
 libbeat/outputs/kafka/docs/kafka.asciidoc | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/libbeat/outputs/kafka/docs/kafka.asciidoc b/libbeat/outputs/kafka/docs/kafka.asciidoc
index 2483f6641b7c..e1dcb77b6bb7 100644
--- a/libbeat/outputs/kafka/docs/kafka.asciidoc
+++ b/libbeat/outputs/kafka/docs/kafka.asciidoc
@@ -232,6 +232,19 @@ Set `max_retries` to a value less than 0 to retry until all events are published
 The default is 3.
 endif::[]
 
+===== `backoff.init`
+
+The number of seconds to wait before trying to republish to Kafka
+after a network error. After waiting `backoff.init` seconds, {beatname_uc}
+tries to republish. If the attempt fails, the backoff timer is increased
+exponentially up to `backoff.max`. After a successful publish, the backoff
+timer is reset. The default is 1s.
+
+===== `backoff.max`
+
+The maximum number of seconds to wait before attempting to republish to
+Kafka after a network error. The default is 60s.
+
 ===== `bulk_max_size`
 
 The maximum number of events to bulk in a single Kafka request. The default is 2048.
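
As a sketch of how the documented settings might be used, here is a minimal Beats configuration fragment setting the Kafka output's retry backoff. The `backoff.init` and `backoff.max` keys are the settings this patch documents; the `hosts` and `topic` values are illustrative placeholders, not taken from the patch.

```yaml
# Illustrative Beats config sketch: tune the Kafka output's
# reconnect backoff after network errors.
output.kafka:
  hosts: ["kafka1:9092"]     # placeholder broker address
  topic: "beats-events"      # placeholder topic name
  backoff:
    init: 1s   # initial wait after a network error (documented default: 1s)
    max: 60s   # cap for the exponential backoff timer (documented default: 60s)
```

With these values, failed publish attempts wait 1s, then back off exponentially up to 60s; a successful publish resets the timer to `backoff.init`.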