Loki data loss #430

Closed
vanhtuan0409 opened this issue Mar 27, 2019 · 4 comments
Comments

@vanhtuan0409

Describe the bug
After Loki has been running for around 30–45 minutes without receiving new logs, the label data appears to be lost.

Grafana shows: Error connecting to datasource: Data source connected, but no labels received. Verify that Loki and Promtail is configured properly.

Querying the Loki HTTP API returns an empty label list (see the curl sketch below).

After sending new logs to Loki, the previous logs show up again, but the labels are only partially restored.
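For reference, this is roughly how I check the labels (a sketch, assuming the /api/prom/label endpoints exposed by this Loki build and the localhost:3100 port mapping from the compose file below):

# list all label names currently known to Loki
curl -s http://localhost:3100/api/prom/label

# list the values stored for the "job" label
curl -s http://localhost:3100/api/prom/label/job/values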

To Reproduce
Steps to reproduce the behavior:

  1. Started Loki (image digest sha256:3727574aa6d168f38e25baaed88c70c1bdcbcdd907905c7971a1f8c7e3f94dda)
  2. Sent logs via the fluentd Loki plugin (image digest sha256:6c77ccf94aaf8eb95fa25163da7f93b8eee8dec1a21593d51f742cd169e3daf6)

Expected behavior
Labels are persisted

Environment:

  • Bare metal running docker only

Screenshots, promtail config, or terminal output
docker-compose

version: "3"

services:
  grafana:
    image: grafana/grafana:6.0.2
    container_name: grafana
    restart: always
    volumes:
      - ./grafana/grafana.ini:/etc/grafana/grafana.ini
      - /data/volumes/grafana:/var/lib/grafana
    ports:
      - 3003:3000

  loki:
    image: grafana/loki:latest
    container_name: loki
    restart: always
    ports:
      - 3100:3100
    command: -config.file=/etc/loki/config.yaml
    volumes:
      - ./loki/config.yml:/etc/loki/config.yaml
      - /data/volumes/loki:/data/loki

  fluentd:
    image: grafana/fluent-plugin-loki:master
    container_name: fluentd
    volumes:
      - ./fluentd/fluentd.conf:/fluentd/etc/fluentd.conf
    ports:
      - 24224:24224
    environment:
      FLUENTD_CONF: /fluentd/etc/fluentd.conf
    depends_on:
      - loki

loki config

auth_enabled: false

server:
  http_listen_port: 3100

ingester:
  lifecycler:
    address: 127.0.0.1
    ring:
      store: inmemory
      replication_factor: 1
  chunk_idle_period: 15m

schema_config:
  configs:
  - from: 0
    store: boltdb
    object_store: filesystem
    schema: v9
    index:
      prefix: index_
      period: 168h

storage_config:
  boltdb:
    directory: /data/loki/index

  filesystem:
    directory: /data/loki/chunks

limits_config:
  enforce_metric_name: false
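With chunk_idle_period set to 15m, idle streams should be flushed after 15 minutes and their index entries written to boltdb. A quick sanity check on the host (a sketch; paths assumed from the volume mapping in the compose file above):

# boltdb index files and chunk files should appear here after flushes
ls -l /data/volumes/loki/index
ls -l /data/volumes/loki/chunks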

fluentd config

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<filter docker.*.*>
  @type record_transformer
  <record>
    job dockerlogs
  </record>
</filter>

<match docker.*.*>
  @type loki
  url "http://loki:3100"
  extra_labels {"env":"staging"}
  label_keys "job,container_id,container_name,source"
  drop_single_key true
  flush_interval 2s
  flush_at_shutdown true
  buffer_chunk_limit 1m
</match>
@davkal
Contributor

davkal commented Mar 27, 2019

Loki needs a minute to start up. If logs were sent from the fluentd plugin during that first minute, they will not have been stored, and therefore cannot be lost. If your apps didn't produce any log output after that, Loki has nothing to receive, and hence will have no labels stored. When your app started sending logs again, labels for those logs started showing up. This is the intended behavior.
Can you confirm that newly written logs are being stored correctly?
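For example, a direct query against the HTTP API should return the newly stored entries (a sketch, assuming the /api/prom/query endpoint of this Loki version and the label set from your fluentd config):

# fetch the 10 most recent entries for the dockerlogs job
curl -sG http://localhost:3100/api/prom/query \
  --data-urlencode 'query={job="dockerlogs"}' \
  --data-urlencode 'limit=10'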

@vanhtuan0409
Author

@davkal That was not the case. I had started Loki and sent logs to it successfully (I could view them in Grafana). But when I came back after a while, the data was lost and could not be viewed from Grafana or via the Loki HTTP API.

After new logs were sent, the old log data appeared again, but some of the old labels were still missing.

@vanhtuan0409
Author

Closing this issue as a duplicate of #271.

@candlerb
Contributor

Labels vanishing after a period of inactivity may be because of #453, which was recently fixed.
