Buffer - Emit transaction failed: error_class=IOError error="closed stream" #3056
Comments
I think supporting in_tail's read_lines_limit feature in
This error should not happen. Will check it.
The rate_limit feature seems to be the same as in_tail's read_lines_limit feature... Is there something I'm overlooking?
I see this same issue for Kubernetes logs forwarded from fluent-bit. Initially the IOError is thrown repeatedly for one chunk that cannot be delivered, and eventually delivery simply stops for all messages. When running multiple threads, some threads keep working, but eventually they also show the same error, until all threads are blocked.
@repeatedly We also see this warning in the logs. Did you find anything related to it?
#3089 seems to be the same issue as this.
Describe the bug
When receiving larger event streams from the windows_eventlog2 plugin, fluentd eventually crashes. With the log rate limited to 200, everything runs fine, but we receive more incoming logs than that, so we need a much higher limit or no limit at all. But with a limit whose byte volume reaches up to 20x the chunk_limit, it crashes.
To Reproduce
Send a high-volume stream of data from windows_eventlog2 to Kafka with a buffer configured.
Expected behavior
Everything should be split into 1MB chunks and flushed to Kafka.
Your Environment
fluentd 1.10.2
Windows Server 2019
Your Configuration
I've tried changing the rate_limit and chunk_limit_size settings.
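The actual configuration was not posted. As a rough sketch of the setup described (plugin names come from the report; the broker address, topic, tag, paths, and all parameter values are assumptions for illustration, not the reporter's real settings), it would look something like:

```
<source>
  @type windows_eventlog2
  tag winevt.raw            # hypothetical tag
  rate_limit -1             # assumption: no limit, as the report says is needed
  <storage>
    persistent true
    path C:\fluentd\winlog.json   # hypothetical bookmark path
  </storage>
</source>

<match winevt.raw>
  @type kafka2
  brokers broker1:9092      # hypothetical broker address
  default_topic winevt      # hypothetical topic
  <buffer>
    chunk_limit_size 1MB    # flush in 1MB chunks, per "Expected behavior"
    flush_interval 5s       # assumption
  </buffer>
</match>
```

Per the report, the crash appears once the incoming event rate pushes the buffered volume well beyond chunk_limit_size, so the buffer parameters are the part under test.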
Your Error Log
Additional context
When writing all the logs to 1MB files instead, everything works fine with no crashes.