
Promtail never scrapes journal logs (using README suggestions in official helm chart) #9922

Open
fzyzcjy opened this issue Jul 12, 2023 · 6 comments

fzyzcjy commented Jul 12, 2023

Describe the bug
Hi, thanks for Loki! However, I cannot get it to collect journald logs.

To Reproduce
Steps to reproduce the behavior:

  1. Started Loki (SHA or version): 2.8.2
  2. Started Promtail (SHA or version): 2.8.2
  3. Promtail does not pick up journal logs at all.

Expected behavior
Promtail scrapes the systemd journal and the entries show up in Loki.

Environment:

  • Infrastructure: Kubernetes
  • Deployment tool: Helm

Screenshots, Promtail config, or terminal output

I use the official Promtail Helm chart. To show the problem more clearly, I deliberately disabled the original scrape configs and kept only the journal section that I added.

helm values:

config:
  snippets:
    # #10154, https://github.com/grafana/helm-charts/blob/main/charts/promtail/README.md#journald-support
    extraScrapeConfigs: |
      # Add an additional scrape config for the systemd journal
      - job_name: journal
        journal:
          path: /var/log/journal
          max_age: 12h
          labels:
            job: systemd-journal
        relabel_configs:
          - source_labels:
              - __journal__hostname
            target_label: hostname

          # example label values: kubelet.service, containerd.service
          - source_labels:
              - __journal__systemd_unit
            target_label: unit

          # example label values: debug, notice, info, warning, error
          - source_labels:
              - __journal_priority_keyword
            target_label: level

# Mount journal directory and machine-id file into promtail pods
extraVolumes:
  # #10154, https://github.com/grafana/helm-charts/blob/main/charts/promtail/README.md#journald-support
  - name: journal
    hostPath:
      path: /var/log/journal
  - name: machine-id
    hostPath:
      path: /etc/machine-id

extraVolumeMounts:
  # #10154, https://github.com/grafana/helm-charts/blob/main/charts/promtail/README.md#journald-support
  - name: journal
    mountPath: /var/log/journal
    readOnly: true
  - name: machine-id
    mountPath: /etc/machine-id
    readOnly: true
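
A quick sanity check for this mount setup (a sketch; the pod name is taken from the cat output below, and you may need to add -n <namespace>): confirm that journal files are actually visible inside the Promtail container, and that journald on the node really writes to /var/log/journal rather than falling back to volatile storage under /run/log/journal, which it does when /var/log/journal does not exist.

# inside the DaemonSet pod: the hostPath mount should contain a per-machine-id directory with *.journal files
kubectl exec promtail-7mtbh -- ls -la /var/log/journal

# on the node: if only /run/log/journal exists, journald is using volatile storage
# and the hostPath above mounts an empty or missing directory
ls -ld /var/log/journal /run/log/journal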

The rendered config, cat'ed inside the container (DaemonSet pod):

root@promtail-7mtbh:/# cat /etc/promtail/promtail.yaml
server:
  log_level: info
  http_listen_port: 3101
  

clients:
  - url: http://loki:3100/loki/api/v1/push

positions:
  filename: /run/promtail/positions.yaml

scrape_configs:
  
  # Add an additional scrape config for the systemd journal
  - job_name: journal
    journal:
      path: /var/log/journal
      max_age: 12h
      labels:
        job: systemd-journal
    relabel_configs:
      - source_labels:
          - __journal__hostname
        target_label: hostname
  
      # example label values: kubelet.service, containerd.service
      - source_labels:
          - __journal__systemd_unit
        target_label: unit
  
      # example label values: debug, notice, info, warning, error
      - source_labels:
          - __journal_priority_keyword
        target_label: level
  

limits_config:
  

tracing:
  enabled: false
root@promtail-7mtbh:/# 

Promtail logs, with level=debug:

level=debug ts=2023-07-12T14:05:32.163270151Z caller=promtail.go:125 msg="Reloading configuration file"
level=info ts=2023-07-12T14:05:32.164971302Z caller=promtail.go:133 msg="Reloading configuration file" md5sum=979bfe7c7e37b5cb2aaee0a6ada862ed
level=info ts=2023-07-12T14:05:32.172243698Z caller=server.go:323 http=[::]:3101 grpc=[::]:9095 msg="server listening on addresses"
level=info ts=2023-07-12T14:05:32.172409725Z caller=main.go:174 msg="Starting Promtail" version="(version=2.8.2, branch=HEAD, revision=9f809eda7)"
level=warn ts=2023-07-12T14:05:32.172441948Z caller=promtail.go:265 msg="enable watchConfig"
level=debug ts=2023-07-12T14:05:41.417964184Z caller=logging.go:76 msg="GET /ready (200) 52.01µs"
level=debug ts=2023-07-12T14:05:41.418061331Z caller=logging.go:76 msg="GET /ready (200) 142.741µs"
level=debug ts=2023-07-12T14:05:42.385697851Z caller=logging.go:76 msg="GET /metrics (200) 1.833019ms"

systemctl version

systemctl --version
systemd 249 (249.11-0ubuntu3.6)
+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
@fzyzcjy fzyzcjy changed the title Promtail never scrapes journal logs Promtail never scrapes journal logs (using README suggestions in official helm chart) Jul 12, 2023

jseiser commented Aug 29, 2023

This config works for us on AWS EKS, with the latest promtail chart.

What is interesting is that you get nothing in the logs about the journal, so I wasn't even aware it was working until entries showed up in Loki.

config:
  clients:
    - url: http://loki-gateway.loki/loki/api/v1/push
  logFormat: json
  snippets:
    extraScrapeConfigs: |
      - job_name: journal
        journal:
          json: false
          max_age: 12h
          path: /var/log/journal
          matches: _SYSTEMD_UNIT=kubelet.service _SYSTEMD_UNIT=containerd.service
          labels:
            job: systemd-journal
        relabel_configs:
          - source_labels: ['__journal__systemd_unit']
            target_label: 'unit'
          - source_labels: ['__journal__hostname']
            target_label: 'hostname'
          - source_labels: ['__journal_priority_keyword']
            target_label: level

extraVolumes:
  - name: journal
    hostPath:
      path: /var/log/journal
  - name: machine-id
    hostPath:
      path: /etc/machine-id

extraVolumeMounts:
  - name: journal
    mountPath: /var/log/journal
    readOnly: true
  - name: machine-id
    mountPath: /etc/machine-id
    readOnly: true
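
One way to confirm that journal entries are actually arriving is to query Loki for the job label set above, e.g. with logcli (a sketch; the address is taken from the push URL in the clients section and is only reachable in-cluster, so adjust or port-forward as needed):

# expect recent kubelet/containerd journal lines back from Loki
logcli query '{job="systemd-journal"}' --limit=10 --addr=http://loki-gateway.loki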


n9 commented Oct 11, 2023

Same issue here:

systemd 252 (252.17-1~deb12u1)
+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified

Promtail logs with debug level:

level=debug ts=2023-10-11T10:58:01.879042803Z caller=promtail.go:125 msg="Reloading configuration file"
level=info ts=2023-10-11T10:58:01.879925531Z caller=promtail.go:133 msg="Reloading configuration file" md5sum=2084a20bac6ea0eced0eb867fd583d64
level=info ts=2023-10-11T10:58:01.909401697Z caller=server.go:323 http=[::]:3101 grpc=[::]:9095 msg="server listening on addresses"
level=info ts=2023-10-11T10:58:01.909671648Z caller=main.go:174 msg="Starting Promtail" version="(version=2.8.4, branch=HEAD, revision=89d282c43)"
level=warn ts=2023-10-11T10:58:01.909693615Z caller=promtail.go:265 msg="enable watchConfig"
level=debug ts=2023-10-11T10:58:02.388970497Z caller=logging.go:76 traceID=18d428143023c727 msg="GET /ready (200) 74.177µs"


n9 commented Oct 11, 2023

@fzyzcjy Have you found a solution?


fzyzcjy commented Oct 11, 2023

No, I finally gave up on Loki...

weixiaolv commented

grafana-agent does not have access to the systemd journal. Try running journalctl as the unit's service user to check:

sudo -u promtail journalctl -f
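
If that fails with a permissions error, a common fix on the host is to add the user to the systemd-journal group (a sketch; it assumes a dedicated service user and systemd unit both named promtail, as in the command above):

id promtail                                # check current group membership
sudo usermod -aG systemd-journal promtail  # grant read access to the journal
sudo systemctl restart promtail            # assumed unit name; restart so the new group membership takes effect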


klzsysy commented Apr 25, 2024

I successfully collected journal logs after adding the path explicitly. It seems that the default value does not work.

image: promtail:2.9.6

  - job_name: journal
    journal:
      path: /var/log/journal
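
To verify the explicit path made it into the rendered config, you can grep for it inside a running pod (a sketch; the pod name is a placeholder, and the config path matches the cat output earlier in this thread):

kubectl exec <promtail-pod> -- grep -A 2 'journal:' /etc/promtail/promtail.yaml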
