diff --git a/docs/api.md b/docs/api.md
index 16159625dbed4..9980878ffec59 100644
--- a/docs/api.md
+++ b/docs/api.md
@@ -29,8 +29,8 @@ The Loki server has the following API endpoints (_Note:_ Authentication is out o
   - `query`: a logQL query
   - `limit`: max number of entries to return
   - `start`: the start time for the query, as a nanosecond Unix epoch (nanoseconds since 1970). Default is always one hour ago.
-  - `end`: the end time for the query, as a nanosecond Unix epoch (nanoseconds since 1970).
-  - `direction`: `forward` or `backward`, useful when specifying a limit
+  - `end`: the end time for the query, as a nanosecond Unix epoch (nanoseconds since 1970). Default is current time.
+  - `direction`: `forward` or `backward`, useful when specifying a limit. Default is backward.
   - `regexp`: a regex to filter the returned results, will eventually be rolled into the query language

 Loki needs to query the index store in order to find log streams for particular labels and the store is spread out by time,
diff --git a/docs/operations.md b/docs/operations.md
index 5c5bb4bb26078..79e3651f07893 100644
--- a/docs/operations.md
+++ b/docs/operations.md
@@ -154,6 +154,6 @@ The table-manager allows deleting old indices by rotating a number of different
 create the table manually you cannot easily erase old data and your index just grows indefinitely.

 If you set your DynamoDB table manually, ensure you set the primary index key to `h`
-(string) and use `r` (binary) as the sort key. Also set the "perior" attribute in the yaml to zero.
-Make sure adjust your throughput base on your usage.
+(string) and use `r` (binary) as the sort key. Also set the "period" attribute in the yaml to zero.
+Make sure to adjust your throughput based on your usage.
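The `start` and `end` parameters documented in the api.md hunk above take *nanosecond* Unix epochs, which are easy to get wrong by a factor of 10^9 when starting from `date +%s`. A minimal sketch of composing such a query; the `/api/prom/query` path and the `localhost:3100` address are assumptions for illustration, not taken from the hunk above:

```shell
#!/bin/sh
# "start" and "end" are nanosecond Unix epochs: seconds since 1970 * 10^9.
now_ns=$(( $(date +%s) * 1000000000 ))
one_hour_ago_ns=$(( now_ns - 3600 * 1000000000 ))

# Compose the request; direction=backward (the documented default) returns
# the newest entries first, which matters when `limit` truncates the result.
# Endpoint path and address are placeholder assumptions.
cmd="curl -G http://localhost:3100/api/prom/query \
  --data-urlencode 'query={job=\"myapp\"}' \
  --data-urlencode limit=100 \
  --data-urlencode start=${one_hour_ago_ns} \
  --data-urlencode end=${now_ns} \
  --data-urlencode direction=backward"
echo "$cmd"
```

Running the printed command against a live Loki instance queries the last hour of the `{job="myapp"}` stream; `--data-urlencode` handles percent-encoding the braces and quotes in the label matcher.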
diff --git a/docs/promtail.md b/docs/promtail.md
index b8be25cec75af..5dd07737e9aec 100644
--- a/docs/promtail.md
+++ b/docs/promtail.md
@@ -1,6 +1,6 @@
 ## Promtail and scrape_configs

-Promtail is an agent which reads the Kubernets pod log files and sends streams of log data to
+Promtail is an agent which reads the Kubernetes pod log files and sends streams of log data to
 the centralised Loki instances along with a set of labels. Each container in a single pod will
-usually yield a single log stream with a set of labels based on that particular pod Kubernetes
+usually yield a single log stream with a set of labels based on that particular pod's Kubernetes
 labels.