Memory leak in 1.5.3? #3722
I seem to have a memory issue with logstash 1.5.3 (I also tried 1.5.4 snapshot2). After a couple of hours, the Java runtime takes up all available RAM, and after a period of time it just crashes. Logstash 1.4.4 does not have this problem with the same config. When I monitor the Java runtime, it continuously and steadily takes up more and more RAM until it crashes.
This is the output:
What more can I do to help fix this?
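One way to produce data that helps pin down a leak like this is to capture periodic heap statistics with the stock JDK tools. A sketch, assuming a full JDK is installed (not just a JRE) and with <pid> as a placeholder for the process ID of the Logstash java process:

rem print GC/heap utilization every 5 seconds:
jstat -gcutil <pid> 5000
rem histogram of heap objects, grouped by class:
jmap -histo <pid> > histo.txt

Comparing two histograms taken an hour or so apart shows which classes are accumulating.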
can you show the configuration file?
sure:
What does the log file say? Could be related to logstash-plugins/logstash-output-elasticsearch#144
I too have this issue. Config:

input {
  file {
    type => "iis"
    path => "C:/inetpub/logs/LogFiles/W3SVC2/*.log"
    start_position => "beginning"
  }
  file {
    type => "dataservice_debug"
    path => "C:/volo/DataService.debug.log.json*"
    codec => "json"
    start_position => "beginning"
  }
  file {
    type => "kpi_generator"
    path => "C:/volo/Kpi.Generator*.log.json*"
    codec => "json"
    start_position => "beginning"
  }
  file {
    type => "task_runner"
    path => "C:/volo/TaskRunner*.log.json*"
    codec => "json"
    start_position => "beginning"
  }
  file {
    type => "data_cleaner"
    path => "C:/volo/DataCleaner*.log.json*"
    codec => "json"
    start_position => "beginning"
  }
  file {
    type => "loadgroupings_debug"
    path => "C:/volo/LoadGroupingsWizard.debug.log.json*"
    codec => "json"
    start_position => "beginning"
  }
  file {
    type => "loadgroupings_debug"
    path => "C:/volo/LoadGroupingsWizard.error.log.json*"
    codec => "json"
    start_position => "beginning"
  }
  file {
    type => "extractor_logs"
    path => "D:/ExtractorOutput/VoloMetrix/**/extractor.json*"
    codec => "json"
    start_position => "beginning"
  }
}
filter {
  if [type] =~ "iis" {
    # ignore log comments
    if [message] =~ "^#" { drop {} }
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:log_timestamp} %{IPORHOST:site} %{WORD:method} %{URIPATH:page} %{NOTSPACE:querystring} %{NUMBER:port} %{NOTSPACE:username} %{IPORHOST:clienthost} %{NOTSPACE:useragent} %{NUMBER:response} %{NUMBER:subresponse} %{NUMBER:scstatus} %{NUMBER:time_taken}" }
      tag_on_failure => ["parsefail1"]
    }
    if "parsefail1" in [tags] {
      grok {
        match => { "message" => "%{TIMESTAMP_ISO8601:log_timestamp} %{IPORHOST:site} %{WORD:method} %{URIPATH:page} %{NOTSPACE:querystring} %{NUMBER:port} %{NOTSPACE:username} %{IPORHOST:clienthost} %{NOTSPACE:useragent} %{NOTSPACE:fullurl} %{NUMBER:response} %{NUMBER:subresponse} %{NUMBER:scstatus} %{NUMBER:time_taken}" }
        tag_on_failure => ["parsefail2"]
      }
    }
    date {
      match => [ "log_timestamp", "YYYY-MM-dd HH:mm:ss" ]
      timezone => "UTC"
    }
    useragent {
      source => "useragent"
      prefix => "browser"
    }
    mutate {
      remove_field => ["log_timestamp"]
    }
  }
  if [type] =~ "dataservice_debug" {
    grok {
      match => { "message" => "%{WORD:Status} report %{WORD:reportName} with params ##\| cadence=%{WORD:cadence} \| dates=%{DATE_US:startdate};%{DATE_US:enddate} \| pid=%{INT:pid} %{NOTSPACE:throwaway} in %{TIME:timetaken}" }
      tag_on_failure => ["parsefail1"]
    }
    if "parsefail1" in [tags] {
      grok {
        match => { "message" => "%{WORD:Status} report %{WORD:reportName} with params ##\| cadence=%{WORD:cadence} \| dates=%{DATE_US:startdate};%{DATE_US:enddate} \| pid=%{INT:pid}" }
        tag_on_failure => ["parsefail2"]
      }
    }
    date {
      match => ["date", "ISO8601"]
      timezone => "UTC"
    }
    mutate {
      remove_field => ["date", "throwaway"]
    }
  }
  if [type] =~ "kpi_generator" {
    date {
      match => ["date", "ISO8601"]
      timezone => "UTC"
    }
    mutate {
      remove_field => ["date"]
    }
  }
  if [type] =~ "task_runner" {
    date {
      match => ["date", "ISO8601"]
      timezone => "UTC"
    }
    mutate {
      remove_field => ["date"]
    }
  }
  if [type] =~ "data_cleaner" {
    date {
      match => ["date", "ISO8601"]
      timezone => "UTC"
    }
    mutate {
      remove_field => ["date"]
    }
  }
  if [type] =~ "loadgroupings_debug" {
    date {
      match => ["date", "ISO8601"]
      timezone => "UTC"
    }
    mutate {
      remove_field => ["date"]
    }
  }
  if [type] =~ "extractor_logs" {
    date {
      match => ["date", "ISO8601"]
      timezone => "UTC"
    }
    mutate {
      remove_field => ["date"]
    }
  }
  mutate { add_field => ["customer", "customer"] }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    host => "10.0.1.42"
    port => "9200"
    protocol => "http"
  }
}

Note: I was able to repro this with 1.5.2 too. I have tried reducing workers and it hasn't helped. JRE version: java version "1.8.0_51"
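As an aside, the six identical date/mutate blocks in that filter section could be collapsed into a single conditional. A sketch with the same behavior, assuming every non-IIS type carries a "date" field:

if [type] != "iis" {
  date {
    match => ["date", "ISO8601"]
    timezone => "UTC"
  }
  mutate {
    remove_field => ["date"]
  }
}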
This is the complete output. I also have a 'replay log', but it's 500KB in size.
I'm also having a problem with logstash 1.5.4 on Windows Server (2008 and 2012). Memory usage of "java.exe" just continues to grow until all available memory is consumed. Any update on this?
@michaellandi we have found the issue: it's a JRuby bug, and we are waiting for an upstream fix before releasing a new version of logstash. For more info see jruby/jruby#3446
Thanks for the update; I am going to force logstash to restart several times a day in the meantime.
@michaellandi I have the same issue.
@gzpbx until the jruby issue is resolved I've taken a few countermeasures. For starters, forcing java to start with lower memory by setting the LS_MEM_MIN and LS_MEM_MAX variables to 128m and 256m respectively. This stops the process from claiming the 1GB up front, making it take longer to grow the memory. Then twice a day recycling the logstash instances (mine are set up as Windows services using NSSM). I simply set up a scheduled task to run the following .bat script:
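The script itself was not captured above; a minimal sketch of such a restart script, assuming the instance is registered with NSSM under the hypothetical service name "logstash":

@echo off
rem Restart the NSSM-managed Logstash service to reclaim leaked memory.
net stop logstash
rem Give the JVM a moment to exit before starting a fresh one.
timeout /t 10 /nobreak
net start logstash

Note that LS_MEM_MIN/LS_MEM_MAX must be visible to the service itself (set system-wide, or via NSSM's AppEnvironmentExtra setting), not just in the shell that runs this script.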
I have 7 servers running Logstash, two of which are experiencing the same memory leak issue. I found out that only those two have the wildcard symbol * in the input file path. Removing the wildcard solved the issue for one of my servers (because it has auto-rolling for the logs), but the other has date-related log files which need the wildcard. Try removing the wildcard if it's possible.
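For illustration, the kind of change being described, with hypothetical paths (presumably the glob is re-evaluated continuously and every discovered file gets stat'ed, multiplying the leaky allocations):

# before: the glob picks up every dated log file
file {
  path => "C:/logs/service-*.log"
}
# after: a fixed path, workable only when the log file auto-rolls in place
file {
  path => "C:/logs/service.log"
}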
@JaminVP interesting find. All of my servers have wildcards in the input file path (I'm pulling in IIS logs, which are dated in my case, and not auto-rolling).
After about a week I noticed that the memory leak is still there even when using absolute file paths, although it's heavily reduced. I switched back to the 1.5.0 RC until this issue is fixed.
jruby 1.7.23 was released today, and fixes the memory leak involving File::Stat. Any idea when this will be incorporated into a logstash build?
Soon. Bad timing that Logstash 2.1.0 was released today. We can do a minor release with JRuby 1.7.23 soon.
Awesome, thanks! I have several servers which I can test on.
I know it's only been a couple of days, but is there any view on when this fix might be available? We have a large number of machines seeing this problem, and a fix cannot come soon enough.
We will be releasing a bug fix version 2.1.1 early next week for this.
In the meantime you could try and validate the fix, which is available in the latest 2.1 branch snapshot at https://s3-eu-west-1.amazonaws.com/build-eu.elasticsearch.org/logstash/2.1/nightly/JDK8/logstash-latest-SNAPSHOT.zip (also .tar.gz if preferred)
I'm also having the memory leak issue, so I tried the latest version from your link, and for me it's crashing in the file input:
My config:
What's even more interesting: if I remove the grok filter, leaving the filter section empty, then logstash shuts down immediately without showing any error.
Does this mean the fix makes things even worse, or am I doing something wrong?
Hi Colin, I tried the beta you suggested. I'm afraid it's not quite there yet. When I try to run it on one of our machines, here is what I get:
So it's a non-starter at the moment. Would appreciate any thoughts or suggestions.
Unfortunately I'm not (yet) familiar enough with Logstash to be able to offer much insight beyond trying things and reporting what happens. Apologies.
Sorry, I linked the wrong issue; I meant #3127 (comment). This seems to be a deficiency in the symlinking of libcrypt on Ubuntu with Oracle's Java (not OpenJDK).
Hi, I'm certainly using Oracle Java, but the servers in question are Windows Server 2008 R2. Is there any update on this issue, does anyone know? Will there still be an LS 2.1.1 release with the fix? The snapshot version I downloaded and tried won't even run. We have now had to turn Logstash off on most of our machines, so confidence is a little low. We really need a solid LS release soon... please! Please let me know if there is anything I can do to assist.
Yes, @kryten68, we will be releasing 2.1.1 later this week. We have to let the changes bake in our continuous integration testing before we release.
@jsvd has just confirmed a regression in JRuby 1.7.23, which explains the problem you are experiencing, @kryten68. The JRuby team will do a new release shortly to fix this, and we will in turn release a new version that includes the fix. As soon as we have a build ready we will follow up here. Thanks for your patience.
Gentlemen, thank you so much. Your efforts are very much appreciated.
Any updates on this? Desperate to get our Logstash up and running again...
@kryten68 sorry for the delay. We are catching and fixing things for the 2.1.1 release, and it is targeted for early next week.
I'm still seeing a memory leak in 2.1.2.
Fixed in jruby/jruby#3446. Please open a new issue if you are seeing this again. |