Error during socket read: End of file; 0 bytes read so far #17
Comments
Does the error say which endpoint it is? How frequently are you seeing the errors? These errors usually occur because the remote closed the socket. There should be no loss of data since there are automatic retries.
Hi and thank you for the quick answer!
I'm getting the same error using the Logstash plugin 'logstash-output-kinesis' with KPL v0.10.1 when sending log data to a Kinesis stream.
We've been observing this error in a few logstash-output-kinesis rollouts. As far as I can tell, data is still making it into Kinesis fine. I think this message is just noise. Would be nice to drop it down to DEBUG / INFO level.
@Gil-Bl it's worth noting that the default KPL record TTL is quite low (30s). So if you're losing data it's not necessarily because of the connection resets, but maybe because the retry after a connection reset is happening too late.
Thanks Sam! Also, can you please explain where I can modify the KPL record TTL, and what the considerations are when changing this value? Any support will be much appreciated.
That actually sounds like a different problem that you should raise a separate issue for ;) As for the record TTL question, see the documentation here
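(Not from the docs linked above, but a minimal sketch of what raising the record TTL might look like in a logstash-output-kinesis config. The `record_ttl` option name is an assumption here, mapping to the KPL's RecordTtl setting in milliseconds; the stream name and region are placeholders.)

```
output {
  kinesis {
    stream_name => "my-stream"     # placeholder
    region => "us-east-1"          # placeholder
    # Assumed option mapping to the KPL RecordTtl setting (milliseconds).
    # The KPL default is 30000 (30s); raising it gives retries after a
    # connection reset more time to succeed before records are dropped,
    # at the cost of holding stale records longer under sustained failure.
    record_ttl => 300000
  }
}
```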
Hi. I'm getting this error every second or so when using the Logstash output.
Is this to be expected?
We are seeing this also; it's causing us to drop data. We have a relatively low test throughput of 1 event per second, so it's not to do with Kinesis volume. Oddly, we only have this issue when using logstash-output-kinesis with large message sizes: we have a working system with an average message size of ~300 characters, while the system we are having problems with has an average message size of 3500 characters. I seem to have had some temporary success by disabling aggregation with:
Now I see intermittent:
But logs continue to flow. My guess is that I was having intermittent connection issues that caused back pressure to build up; Logstash was then aggregating the back pressure into even larger messages that didn't stand a chance of being accepted into Kinesis, which caused the pipe to become blocked. I am testing with this today and will report back if this fixed it for me.
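(For anyone else trying this: a hedged sketch of disabling aggregation, assuming the plugin exposes an `aggregation_enabled` option corresponding to the KPL's AggregationEnabled setting. Stream name and region are placeholders.)

```
output {
  kinesis {
    stream_name => "my-stream"     # placeholder
    region => "us-east-1"          # placeholder
    # Assumed option name: disables KPL record aggregation, so each event
    # is sent as its own Kinesis record instead of being packed into a
    # larger aggregate that may exceed Kinesis record size limits.
    aggregation_enabled => false
  }
}
```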
We added the rate limit field below and this behavior seems to have subsided.
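(A hedged guess at what such a setting might look like, assuming a `rate_limit` option corresponding to the KPL's RateLimit setting, which is expressed as a percentage of shard throughput. Stream name, region, and the chosen value are placeholders.)

```
output {
  kinesis {
    stream_name => "my-stream"     # placeholder
    region => "us-east-1"          # placeholder
    # Assumed option mapping to the KPL RateLimit setting: caps send rate
    # at this percentage of a shard's capacity (KPL default is 150).
    # Lowering it can reduce throttling and the resulting back pressure.
    rate_limit => 80
  }
}
```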
Which version of the KPL are you using?
I'm seeing a similar issue. Data flows from Filebeat to Logstash. I write the records out to a file and get a count of 127; when they go through Kinesis, I get 84 records. This is just a subset of the data, and I'm probably sending 40k records in total every time I run a Spark job. I'm running these as SysV services on AWS Linux. Updated logstash-input-beats from 3.1.8 to 3.1.12.
No errors in the logs, and I'm pushing to 18 Kinesis shards. Not sure what to do.
Hi,
I am running with KPL v0.10.0 and keep getting the following error message:
[error][io_service_socket.h:229] Error during socket read: End of file; 0 bytes read so far.
I am trying to run a Logstash plugin based on the KPL (logstash-output-kinesis).
Your support is much appreciated.