
Merge pull request #461 from garo/feature/improve-documentation
Improve documentation based on what I learned when I did loki setup.
davkal authored Apr 9, 2019
2 parents 6b55588 + 55b5be3 commit 894357b
Showing 4 changed files with 101 additions and 5 deletions.
10 changes: 7 additions & 3 deletions docs/api.md
@@ -28,11 +28,15 @@ The Loki server has the following API endpoints (_Note:_ Authentication is out o

- `query`: a logQL query
- `limit`: max number of entries to return
- `start`: the start time for the query, as a nanosecond Unix epoch (nanoseconds since 1970). Defaults to one hour ago.
- `end`: the end time for the query, as a nanosecond Unix epoch (nanoseconds since 1970). Defaults to the current time.
- `direction`: `forward` or `backward`, useful when specifying a limit. Defaults to `backward`.
- `regexp`: a regex to filter the returned results, will eventually be rolled into the query language

Loki needs to query the index store in order to find the log streams for particular labels, and the store is partitioned by time,
so you need to specify the start and end times accordingly. Querying far back into the history puts additional
load on the index store and makes the query slower.
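
For example, a query over the last hour might look roughly like this. This is an illustrative sketch only: the endpoint path and label values are assumptions rather than taken from this document, and the `query` value must be URL-encoded in a real request:

```
GET /api/prom/query?query={job="default/myapp"}&limit=50&direction=backward&start=1554800400000000000&end=1554804000000000000
```

Omitting `start` and `end` falls back to the defaults described above (the last hour), which keeps the index lookup cheap.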

Responses look like this:

```
...
```
18 changes: 17 additions & 1 deletion docs/operations.md
@@ -116,6 +116,18 @@ storage_config:
dynamodb: dynamodb://access_key:secret_access_key@region
```
You can also use an EC2 instance role instead of hard-coding credentials like in the above example.
If you wish to do this, the storage_config example looks like this:
```yaml
storage_config:
  aws:
    s3: s3://region/bucket_name
    dynamodbconfig:
      dynamodb: dynamodb://region
```
#### S3
Loki uses S3 as its object storage. It stores logs within directories based on
@@ -138,6 +150,10 @@ You can set up DynamoDB yourself, or have the `table-manager` set it up for you.
You can find more information about the table manager in the
[Cortex project](https://github.com/cortexproject/cortex).
There is an example table manager deployment inside the ksonnet deployment method. You can find it [here](../production/ksonnet/loki/table-manager.libsonnet).
The table-manager deletes old indices by rotating through a number of different DynamoDB tables and removing the oldest one. If you choose to
create the table manually, you cannot easily erase old data and your index just grows indefinitely.

If you set up your DynamoDB table manually, ensure you set the primary index key to `h`
(string) and use `r` (binary) as the sort key. Also set the "period" attribute in the YAML config to zero.
Make sure to adjust your throughput based on your usage.
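
As a rough, hedged sketch of where that "period" setting lives, it typically sits in the index section of the schema configuration. The key names and values below are assumptions and may differ between versions, so treat this as an illustration rather than a reference:

```yaml
schema_config:
  configs:
    - from: 0                  # illustrative start of this schema period
      index:
        prefix: loki_index     # hypothetical name of your manually created DynamoDB table
        period: 0              # zero disables table rotation for a manually managed table
```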

74 changes: 74 additions & 0 deletions docs/promtail.md
@@ -0,0 +1,74 @@
## Promtail and scrape_configs

Promtail is an agent which reads log files and sends streams of log data to
the centralised Loki instances along with a set of labels. For example, if you are running Promtail in Kubernetes,
then each container in a single pod will usually yield a single log stream with a set of labels
based on that particular pod's Kubernetes labels. You can also run Promtail outside Kubernetes, but you would
then need to customise the scrape_configs for your particular use case.
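
As a hedged sketch of that non-Kubernetes case, a minimal scrape_configs entry can read local files via a static config. The job name, label value and path below are made-up examples:

```yaml
scrape_configs:
  - job_name: system
    static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs                # hypothetical visible label for this stream
          __path__: /var/log/*.log    # tells Promtail which files to tail
```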

Promtail finds the log locations and extracts the set of labels through the *scrape_configs*
section in the Promtail YAML configuration. The syntax is the same as what Prometheus uses.

The scrape_configs section contains one or more *entries*, which are all executed for each container in each new pod running
on the instance. If more than one entry matches your logs you will get duplicates, as the logs are sent in more than
one stream, likely with slightly different labels. Everything is based on labels, and the term "label" is used here in
more than one way, so the different kinds can easily be confused.

* Labels starting with __ (two underscores) are internal labels. They are not stored in the Loki index and are
invisible after Promtail. They "magically" appear from different sources.
* Labels starting with \_\_meta_kubernetes_pod_label_* are "meta labels" which are generated based on your Kubernetes
pod labels. Example: if your Kubernetes pod has a label "name" set to "foobar", then the scrape_configs section
will have a label \_\_meta_kubernetes_pod_label_name with its value set to "foobar".
* There are other \_\_meta_kubernetes_* labels based on the Kubernetes metadata, such as the namespace the pod is
running in (\_\_meta_kubernetes_namespace) or the name of the container inside the pod (\_\_meta_kubernetes_pod_container_name).
* The label \_\_path\_\_ is a special label which Promtail reads to find out where the log files to be read are located.
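
To make these labels concrete, here is a hedged, simplified sketch of a scrape_configs entry that uses the meta labels and \_\_path\_\_. The real default configs are more involved, and the job name here is made up:

```yaml
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod                # pod discovery is what populates the __meta_kubernetes_* labels
    relabel_configs:
      # Build the __path__ internal label from the pod UID so Promtail knows which files to tail.
      - action: replace
        source_labels:
          - __meta_kubernetes_pod_uid
        target_label: __path__
        replacement: /var/log/pods/$1/*.log
      # Copy the namespace meta label into a visible "namespace" label.
      - action: replace
        source_labels:
          - __meta_kubernetes_namespace
        target_label: namespace
```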

The most important part of each entry is the *relabel_configs*, which is a list of operations that create,
rename, or modify labels. A single scrape_config can also reject logs with an "action: drop" if
a label value matches a specified regex, which means that this particular scrape_config will not forward logs
from a particular log source, but another scrape_config might.

Many of the scrape_configs read labels from \_\_meta_kubernetes_* meta labels, assign them to intermediate labels
such as \_\_service\_\_ based on a few different pieces of logic, possibly drop the processing if the \_\_service\_\_ label is empty,
and finally set visible labels (such as "job") based on the \_\_service\_\_ label.
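
A hedged, simplified sketch of that pattern (the real default rules ship with Promtail and may differ in detail):

```yaml
relabel_configs:
  # Derive an intermediate __service__ label from the pod's "name" label.
  - action: replace
    source_labels:
      - __meta_kubernetes_pod_label_name
    target_label: __service__
  # Drop the entry entirely if __service__ ended up empty.
  - action: drop
    regex: ^$
    source_labels:
      - __service__
  # Build the visible "job" label as "<namespace>/<service>".
  - action: replace
    separator: /
    source_labels:
      - __meta_kubernetes_namespace
      - __service__
    target_label: job
```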

In general, all of the default Promtail scrape_configs do the following:
* They read pod logs from under /var/log/pods/$1/*.log.
* They set the "namespace" label directly from \_\_meta_kubernetes_namespace.
* They expect to see your pod name in the "name" label.
* They set a "job" label which is roughly "your namespace/your job name".

### Idioms and examples for different relabel_configs

* Drop the processing if a label is empty:
```yaml
- action: drop
regex: ^$
source_labels:
- __service__
```
* Drop the processing if any of these labels contains a value:
```yaml
- action: drop
regex: .+
separator: ''
source_labels:
- __meta_kubernetes_pod_label_name
- __meta_kubernetes_pod_label_app
```
* Rename a metadata label into another one so that it will be visible in the final log stream:
```yaml
- action: replace
source_labels:
- __meta_kubernetes_namespace
target_label: namespace
```
* Convert all of the Kubernetes pod labels into visible labels:
```yaml
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
```
Additional reading:
* https://www.slideshare.net/roidelapluie/taking-advantage-of-prometheus-relabeling-109483749
4 changes: 3 additions & 1 deletion docs/troubleshooting.md
@@ -12,12 +12,14 @@ This can have several reasons:
- Restarting promtail will not necessarily resend log messages that have been read. To force sending all messages again, delete the positions file (default location `/tmp/positions.yaml`) or make sure new log messages are written after both promtail and Loki have started.
- Promtail is ignoring targets because of a configuration rule
- Detect this by turning on debug logging and then look for `dropping target, no labels` or `ignoring target` messages.
- Promtail cannot find the location of your log files. Check that the scrape_configs contains a valid path setting for finding the logs on your worker nodes.
- Your pods are running but not with the labels Promtail is expecting. Check the Promtail scrape_configs.

## Debug output

Both binaries support a log level parameter on the command line, e.g. `loki --log.level=debug ...`

## No labels:

## Failed to create target, "ioutil.ReadDir: readdirent: not a directory"

