
How to get Fluent Bit logs into Grafana without installing Prometheus? #9084

Closed
Ommkwn2001 opened this issue Jul 15, 2024 · 2 comments

Comments

@Ommkwn2001

First of all, I installed Fluent Bit and changed its values.yaml.

Then I installed Grafana, logged into the Grafana dashboard successfully, and added a Data Source.

The Data Source is "Prometheus".

I entered this URL: "http://fluent-bit.default.svc.cluster.local:2020/api/v1/metrics/prometheus". I also tried this URL: "http://fluent-bit.default.svc.cluster.local:2020/api/v1/metrics".
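For context on those two endpoints: Fluent Bit's built-in HTTP server serves its internal metrics as JSON at `/api/v1/metrics`, but as plain Prometheus text exposition format at `/api/v1/metrics/prometheus`. A minimal sketch of that difference, using hypothetical sample payloads (the field names and values here are illustrative, not taken from a real response):

```python
import json

# Hypothetical JSON payload, roughly the shape /api/v1/metrics returns.
json_metrics = '{"input": {"systemd.1": {"records": 10, "bytes": 2048}}}'

# Hypothetical text payload, the shape /api/v1/metrics/prometheus returns.
prometheus_metrics = ('# HELP fluentbit_input_records_total Total records\n'
                      'fluentbit_input_records_total{name="systemd.1"} 10')

# The JSON endpoint parses cleanly:
parsed = json.loads(json_metrics)
print(parsed["input"]["systemd.1"]["records"])  # 10

# The Prometheus endpoint is not JSON, so a JSON parser rejects it at the first '#':
try:
    json.loads(prometheus_metrics)
except json.JSONDecodeError as e:
    print("not JSON:", e.msg)
```

Anything that expects JSON (like a Prometheus API client) will fail on the text endpoint in exactly this way.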

I chose HTTP method POST and clicked "Save & Test". Then I clicked "Add new dashboard", clicked "Add Visualization", and selected the "Prometheus" Data Source.

Then I tried this query: "rate(fluentbit_input_record_total{name="systemd."}[1m])", but no graphs appear.
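One thing worth noting about that query: in PromQL, `name="systemd."` is an exact-match selector, so it only selects series whose `name` label is literally that string. Fluent Bit typically names input instances like `tail.0` or `systemd.1`, so a regex matcher such as `name=~"systemd.*"` (note the `=~`) would be needed. A small Python sketch of the two matching semantics, over hypothetical label values:

```python
import re

# Hypothetical series labels, as Fluent Bit might expose its inputs.
series = [{"name": "tail.0"}, {"name": "systemd.1"}]

# PromQL name="systemd." compares the label literally -- matches nothing here:
exact = [s for s in series if s["name"] == "systemd."]
print(exact)  # []

# PromQL name=~"systemd.*" matches the label against a regex -- selects the systemd input:
regex = [s for s in series if re.fullmatch("systemd.*", s["name"])]
print(regex)  # [{'name': 'systemd.1'}]
```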

My Fluent Bit values.yaml is:

```yaml
# Default values for fluent-bit.

# kind -- DaemonSet or Deployment
kind: DaemonSet

# replicaCount -- Only applicable if kind=Deployment
replicaCount: 1

image:
  repository: cr.fluentbit.io/fluent/fluent-bit
  tag:
  digest:
  pullPolicy: IfNotPresent

testFramework:
  enabled: true
  namespace:
  image:
    repository: busybox
    pullPolicy: Always
    tag: latest
    digest:

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  create: true
  annotations: {}
  name:

rbac:
  create: true
  nodeAccess: false
  eventsAccess: false

podSecurityPolicy:
  create: false
  annotations: {}

openShift:
  enabled: false
  securityContextConstraints:
    create: true
    name: ""
    annotations: {}
    existingName: ""

podSecurityContext: {}

hostNetwork: false
dnsPolicy: ClusterFirst

dnsConfig: {}

hostAliases: []

securityContext: {}

service:
  type: ClusterIP
  port: 2020
  internalTrafficPolicy:
  loadBalancerClass:
  loadBalancerSourceRanges: []
  labels: {}
  annotations: {}

serviceMonitor:
  enabled: false

prometheusRule:
  enabled: false

dashboards:
  enabled: false
  labelKey: grafana_dashboard
  labelValue: 1
  annotations: {}
  namespace: ""

lifecycle: {}

livenessProbe:
  httpGet:
    path: /
    port: http

readinessProbe:
  httpGet:
    path: /api/v1/health
    port: http

resources: {}

ingress:
  enabled: false
  ingressClassName: ""
  annotations: {}
  hosts: []
  extraHosts: []
  tls: []

autoscaling:
  vpa:
    enabled: false
    annotations: {}
    controlledResources: []
    maxAllowed: {}
    minAllowed: {}
    updatePolicy:
      updateMode: Auto
  enabled: false
  minReplicas: 1
  maxReplicas: 3
  targetCPUUtilizationPercentage: 75
  behavior: {}

podDisruptionBudget:
  enabled: false
  annotations: {}
  maxUnavailable: "30%"

nodeSelector: {}

tolerations: []

affinity: {}

labels: {}

annotations: {}

podAnnotations: {}

podLabels: {}

minReadySeconds:

terminationGracePeriodSeconds:

priorityClassName: ""

env: []

envWithTpl: []

envFrom: []

extraContainers: []

flush: 1

metricsPort: 2020

extraPorts: []

extraVolumes: []

extraVolumeMounts: []

updateStrategy: {}

existingConfigMap: ""

networkPolicy:
  enabled: false

luaScripts: {}

config:
  service: |
    [SERVICE]
        Daemon Off
        Flush 1
        Log_Level info
        Parsers_File /fluent-bit/etc/parsers.conf
        Parsers_File /fluent-bit/etc/conf/custom_parsers.conf
        HTTP_Server On
        HTTP_Listen 0.0.0.0
        HTTP_Port {{ .Values.metricsPort }}
        Health_Check On

  inputs: |
    [INPUT]
        Name tail
        Refresh_Interval 2
        Path /var/log/containers/*.log
        Tag kube.custommvc.*
        Mem_Buf_Limit 50MB
        Read_from_Head true
        Parser access_log_ltsv

    [INPUT]
        Name systemd
        Tag host.*
        Systemd_Filter _SYSTEMD_UNIT=kubelet.service
        Read_From_Tail On

  filters: |
    [FILTER]
        Name kubernetes
        Match kube.custommvc.*
        Kube_URL https://kubernetes.default:443
        tls.verify Off
        Merge_Log On
        Keep_Log Off
        K8S-Logging.Parser On
        K8S-Logging.Exclude On

    [FILTER]
        Name throttle
        Match kube.custommvc.*
        Rate 5
        Window 30
        Interval 60

  outputs: |
    [OUTPUT]
        Name opensearch
        Match kube.custommvc.*
        Host opensearch-cluster-master-headless.myopensearch.svc.cluster.local
        Port 9200
        Buffer_Size 15MB
        HTTP_User admin
        HTTP_Passwd TadhakDev01
        Logstash_Format off
        Logstash_Prefix custmvc
        Trace_Error On
        Trace_Output On
        Replace_Dots On
        Retry_Limit false
        Index custommvc
        Suppress_Type_Name on
        Include_Tag_Key on
        tls on
        tls.verify off
        Generate_ID on
        Type _doc

volumeMounts:
  - name: config
    mountPath: /fluent-bit/etc/conf

daemonSetVolumes:
  - name: varlog
    hostPath:
      path: /var/log
  - name: varlibdockercontainers
    hostPath:
      path: /var/lib/docker/containers
  - name: etcmachineid
    hostPath:
      path: /etc/machine-id
      type: File

daemonSetVolumeMounts:
  - name: varlog
    mountPath: /var/log
  - name: varlibdockercontainers
    mountPath: /var/lib/docker/containers
    readOnly: true
  - name: etcmachineid
    mountPath: /etc/machine-id
    readOnly: true

command:
  - /fluent-bit/bin/fluent-bit

args:
  - --workdir=/fluent-bit/etc
  - --config=/fluent-bit/etc/conf/fluent-bit.conf

logLevel: info

hotReload:
  enabled: false
  image:
    repository: ghcr.io/jimmidyson/configmap-reload
    tag: v0.11.1
    digest:
    pullPolicy: IfNotPresent
  resources: {}
```

I want to get Fluent Bit logs into Grafana, with CPU usage and memory usage graphs, without installing Prometheus.

And when I try this URL "http://fluent-bit.default.svc.cluster.local:2020/api/v1/metrics/prometheus", I get this error: "ReadObject: expect { or , or } or n, but found #, error found in #1 byte of ...|# HELP flue|..., bigger context ...|# HELP fluentbit_filter_add_record_total Fluentbit |... - There was an error returned querying the Prometheus API."


This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 5 days. Maintainers can add the exempt-stale label.

@github-actions github-actions bot added the Stale label Oct 14, 2024

This issue was closed because it has been stalled for 5 days with no activity.

@github-actions github-actions bot closed this as not planned Won't fix, can't repro, duplicate, stale Oct 20, 2024