logstash stalled/blocked - 1.5.0rc2 #2846
Comments
@seventy-7 Are you running the latest version of the logstash-forwarder?
@ph yes, latest stable: 0.4.0
Yes, this seems related to #2130. Usually when this kind of symptom happens it is because something blocks one of the outputs and the back pressure is applied up to the inputs, causing them to block when the queue is full. The internal queue size is 20 items.
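For readers unfamiliar with the 1.x pipeline, here is a minimal Ruby sketch (not Logstash's actual code; everything except the queue size of 20 mentioned above is illustrative) of how a bounded queue propagates back pressure from a stalled output back to the producing side:

```ruby
require "thread"

# Illustration only: a bounded queue like the one between Logstash 1.x
# pipeline stages. When the consumer (e.g. an output waiting on a slow
# Elasticsearch) stops draining, the producer blocks on push as soon as
# the queue holds 20 items -- this is the back pressure described above.
queue = SizedQueue.new(20)

producer = Thread.new do
  1.upto(50) do |i|
    queue.push("event #{i}")   # blocks here once the queue is full
    puts "queued event #{i}"
  end
end

consumer = Thread.new do
  loop do
    event = queue.pop
    sleep 1                    # simulate a stalled/slow output
    puts "flushed #{event}"
  end
end

producer.join                  # the producer crawls along at the consumer's pace
```

If the consumer never resumes, the producer stays blocked on `push` indefinitely, which is the "blocked, never recovers" behaviour reported in this thread.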
I'm seeing the exact same problem. The interesting part is that it only started 3 days ago and prior to that I'd been running the same setup for over a week without any issues! Now I can't keep the logstash indexers running :( The setup is similar to @seventy-7's: 1.5.0RC2, latest logstash-forwarder, and redis for queueing between the receiver and indexer logstashes.
@MarkGavalda Would you mind pasting your configuration?
Similar to #2894
Sorry for not updating this earlier. We made many, many changes to our whole ES stack since then and the problem went away; however, I cannot pinpoint which change fixed this issue.
@MarkGavalda Thank you for the update. Just out of curiosity, are you still running rc2?
@seventy-7 Could you also share your configuration?
Hello devs,
I have been facing a persistent issue: logstash receivers stalling regularly (every 15 min). The process stops receiving input and all threads look blocked; the process never recovers. The logstash inputs end up in a closed wait state and the process is rendered useless. From my understanding of the event pipeline, if an output becomes busy the pipeline can get blocked but should recover. In this case it doesn't recover.
Setup details:
20 logstash forwarders -> 5 logstash receivers (performing multiline) -> redis -> 5 logstash indexers (performing filters) -> elasticsearch & redis
Elasticsearch is often indexing around 4-9k events/sec in total.
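The actual configuration files were never posted in the thread; the following is a hypothetical sketch of what a receiver/indexer pair in this topology could look like (hostnames, ports, keys, and the multiline pattern are placeholders):

```
# receiver.conf (hypothetical sketch, not the reporter's real config)
input {
  lumberjack {
    port            => 5043
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key         => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
  multiline {
    pattern => "^\s"        # placeholder: join indented lines to the previous event
    what    => "previous"
  }
}
output {
  redis {
    host      => "redis.example.com"
    data_type => "list"
    key       => "logstash"
  }
}

# indexer.conf (hypothetical sketch)
input {
  redis {
    host      => "redis.example.com"
    data_type => "list"
    key       => "logstash"
  }
}
filter {
  # whatever grok/date/mutate filters the indexers run goes here
}
output {
  elasticsearch {
    host => "es.example.com"
  }
  redis {
    host      => "redis.example.com"
    data_type => "list"
    key       => "metrics"
  }
}
```

In a layout like this, anything that blocks the redis output on the receivers (or the elasticsearch output on the indexers) would, via the back pressure described earlier in the thread, eventually block the lumberjack or redis inputs as well.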
A thread dump from one of the blocked processes can be found here (I wasn't sure how to attach a txt file): www.outtalimits.com.au/jstack.out
After a lot of reading, this issue looks potentially related to #2130.
Let me know if you require any further debugging info.