Fluent Bit not picking up file #6077
What is the actual problem here for 1.? I don't really understand the issue from what is written, so please can you provide a reproducer of the issue or more details? For 2, tail functions as per the documentation.
My business scenario: about 50 files are generated per hour in the tailed directory. Data is occasionally lost in the first minute of each hour. [SERVICE]
Fluentd configuration file: <match tp.vip.*>
When the data is finally written to BigQuery, an error is logged:
Please help me and give me some advice!
It looks to me like the issue is with Fluentd sending to BigQuery so you probably want to drill down on that and raise in the Fluentd repository for that plugin where there will be expertise on that. Is there some issue with the Fluent Bit side of things specifically? There is an output plugin already to send to BigQuery from Fluent Bit directly so does that work? https://docs.fluentbit.io/manual/pipeline/outputs/bigquery The tail inputs you have seem to be ok but I can't really comment as you know the specific log files you have. I did note one seems to have a strange path, is that right or did you mean a wildcard?
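If sending from Fluent Bit directly is an option, it might look something like the sketch below; the project, dataset, table, and credentials path are placeholders, not values taken from this issue:

```ini
[OUTPUT]
    # Hypothetical example: Fluent Bit -> BigQuery directly, bypassing Fluentd.
    Name                       bigquery
    Match                      tp.vip.*
    # Placeholder values; substitute your own GCP project/dataset/table
    # and service-account credentials file.
    google_service_credentials /path/to/service_account.json
    project_id                 my-project
    dataset_id                 my_dataset
    table_id                   my_table
```

This removes the Fluentd hop entirely, which would also tell you which side of the pipeline the field-misalignment errors come from.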
Also, only the server logs have a DB configured. Because they have a DB, they record which offset they got up to last and start from there on restart.
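As a hedged sketch, a tail input that records its offsets looks roughly like this (the paths are illustrative, based on the log filenames in this issue, not taken from the reporter's actual configuration):

```ini
[INPUT]
    Name             tail
    # Placeholder path; point this at your own log files.
    Path             D:\log\log\battle_report.*.log
    # The DB file stores inode/offset state so tailing resumes where it left off.
    DB               D:\log\tail-offsets.db
    Refresh_Interval 10
```

With no DB configured, Fluent Bit has no persisted offset and re-evaluates files from scratch on restart, which can look like missing or duplicated data.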
Unrelated, but you do not have to provide all configuration options, only the ones that are required or that differ from the defaults.
Thank you for your help!
This issue is stale because it has been open 90 days with no activity. Remove the stale label or comment, or this will be closed in 5 days.
This issue was closed because it has been stalled for 5 days with no activity. |
1. When I collect data I use a Fluent Bit + Fluentd architecture, and the data is finally written to BigQuery. Fluent Bit uses the tail plugin, and the directory I collect from generates a large number of log files every hour. Each time the files are written to BigQuery there are occasional anomalies; the fields appear misaligned.
2. Is there a limit on the length of an event or record?
The data I collected using the tail plugin was only about 300 bytes long, but the collection failed.
[2022/09/21 02:42:05] [ info] [fluent bit] version=1.9.8, commit=97a5e9dcf3, pid=2052
[2022/09/21 02:42:05] [debug] [engine] coroutine stack size: 98302 bytes (96.0K)
[2022/09/21 02:42:05] [ info] [storage] version=1.2.0, type=memory+filesystem, sync=normal, checksum=disabled, max_chunks_up=128
[2022/09/21 02:42:05] [ info] [storage] backlog input plugin: storage_backlog.1
[2022/09/21 02:42:05] [ info] [cmetrics] version=0.3.6
[2022/09/21 02:42:05] [debug] [tail:tail.0] created event channels: read=440 write=584
[2022/09/21 02:42:05] [debug] [input:tail:tail.0] flb_tail_fs_stat_init() initializing stat tail input
[2022/09/21 02:42:05] [debug] [input:tail:tail.0] inode=1125899906846935 with offset=1203 appended as D:\log\log\battle_report.2022091300.log
[2022/09/21 02:42:05] [debug] [input:tail:tail.0] 1 new files found on path 'D:\log\log\battle_report.2022091300.log'
[2022/09/21 02:42:05] [debug] [storage_backlog:storage_backlog.1] created event channels: read=624 write=628
[2022/09/21 02:42:05] [ info] [input:storage_backlog:storage_backlog.1] queue memory limit: 15.3M
[2022/09/21 02:42:05] [debug] [emitter:re_emitted] created event channels: read=632 write=636
[2022/09/21 02:42:05] [debug] [stdout:stdout.0] created event channels: read=644 write=648
[2022/09/21 02:42:05] [ info] [sp] stream processor started
[2022/09/21 02:42:05] [ info] [output:stdout:stdout.0] worker #0 started
[2022/09/21 02:42:05] [debug] [input:tail:tail.0] inode=1125899906846935 file=D:\log\log\battle_report.2022091300.log promote to TAIL_EVENT
[2022/09/21 02:42:05] [debug] [input:tail:tail.0] [static files] processed 0b, done
[2022/09/21 02:42:15] [debug] [input:tail:tail.0] 0 new files found on path 'D:\log\log\battle_report.2022091300.log'
[2022/09/21 02:42:25] [debug] [input:tail:tail.0] 0 new files found on path 'D:\log\log\battle_report.2022091300.log'
[2022/09/21 02:42:35] [debug] [input:tail:tail.0] 0 new files found on path 'D:\log\log\battle_report.2022091300.log'
[2022/09/21 02:42:45] [debug] [input:tail:tail.0] 0 new files found on path 'D:\log\log\battle_report.2022091300.log'
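Regarding question 2 above: the tail input bounds line length via its buffer settings rather than a fixed record limit, so a record of roughly 300 bytes would normally fit well within the defaults. A hedged sketch of the relevant options (the values shown are the documented defaults, used here only for illustration, and the path is a placeholder):

```ini
[INPUT]
    Name              tail
    Path              D:\log\log\battle_report.*.log
    # Per-file read buffer; a single line longer than Buffer_Max_Size
    # cannot be ingested unless Skip_Long_Lines is On.
    Buffer_Chunk_Size 32k
    Buffer_Max_Size   32k
    # Off (the default) stops processing the file on an over-long line;
    # On skips the long line and continues.
    Skip_Long_Lines   Off
```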