Cherry-pick #9603 to 6.x: Fixes parsing of GC entries in elasticsearch server log (#9810)

* Regenerating fields

* Fixing up for 6.x

* Adding CHANGELOG entries
ycombinator authored Jan 8, 2019
1 parent 9111e38 commit ec704cd
Showing 6 changed files with 75 additions and 12 deletions.
2 changes: 2 additions & 0 deletions CHANGELOG.asciidoc
@@ -20,6 +20,7 @@ https://github.com/elastic/beats/compare/v6.6.0...6.x[Check the HEAD diff]
*Filebeat*

- Allow beats to blacklist certain part of the configuration while using Central Management. {pull}9099[9099]
- Fix parsing of GC entries in elasticsearch server log. {issue}9513[9513] {pull}9810[9810]

*Heartbeat*

@@ -70,6 +71,7 @@ https://github.com/elastic/beats/compare/v6.6.0...6.x[Check the HEAD diff]
- Fix saved objects in filebeat haproxy dashboard. {pull}9417[9417]
- Fixed a memory leak when harvesters are closed. {pull}7820[7820]
- Add `convert_timezone` option to Logstash module to convert dates to UTC. {issue}9756[9756] {pull}9797[9797]
- Fix parsing of GC entries in elasticsearch server log. {issue}9513[9513] {pull}9810[9810]

*Heartbeat*

26 changes: 24 additions & 2 deletions filebeat/docs/fields.asciidoc
@@ -1416,14 +1416,36 @@ example:
--
*`elasticsearch.server.gc_overhead`*::
*`elasticsearch.server.gc.overhead_seq`*::
+
--
type: long
example:
example: 3449992
Sequence number
--
*`elasticsearch.server.gc.collection_duration.ms`*::
+
--
type: float
example: 1600
Time spent in GC, in milliseconds
--
*`elasticsearch.server.gc.observation_duration.ms`*::
+
--
type: float
example: 1800
Total time over which collection was observed, in milliseconds
--
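For reference, the three new GC fields land on published events exactly as documented above; the expected test output later in this commit shows them together on a single GC overhead event. A trimmed excerpt, with values taken from that fixture:

{
  "elasticsearch.server.gc.overhead_seq": "3449992",
  "elasticsearch.server.gc.collection_duration.ms": 1600.0,
  "elasticsearch.server.gc.observation_duration.ms": 1800.0
}

Note that the pipeline emits overhead_seq as a string (the grok capture carries no type cast), while the field itself is documented as type long.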
2 changes: 1 addition & 1 deletion filebeat/include/fields.go

Large diffs are not rendered by default.

16 changes: 12 additions & 4 deletions filebeat/module/elasticsearch/server/_meta/fields.yml
@@ -23,7 +23,15 @@
description: ""
example: ""
type: long
- name: gc_overhead
description: ""
example: ""
type: long
- name: overhead_seq
description: "Sequence number"
example: 3449992
type: long
- name: collection_duration.ms
description: "Time spent in GC, in milliseconds"
example: 1600
type: float
- name: observation_duration.ms
description: "Total time over which collection was observed, in milliseconds"
example: 1800
type: float
33 changes: 31 additions & 2 deletions filebeat/module/elasticsearch/server/ingest/pipeline.json
@@ -20,13 +20,42 @@
"field": "message",
"pattern_definitions": {
"GREEDYMULTILINE": "(.|\n)*",
"INDEXNAME": "[a-zA-Z0-9_.-]*"
"INDEXNAME": "[a-zA-Z0-9_.-]*",
"GC_ALL": "\\[gc\\]\\[%{NUMBER:elasticsearch.server.gc.overhead_seq}\\] overhead, spent \\[%{NUMBER:elasticsearch.server.gc.collection_duration.time:float}%{DATA:elasticsearch.server.gc.collection_duration.unit}\\] collecting in the last \\[%{NUMBER:elasticsearch.server.gc.observation_duration.time:float}%{DATA:elasticsearch.server.gc.observation_duration.unit}\\]",
"GC_YOUNG": "\\[gc\\]\\[young\\]\\[%{NUMBER:elasticsearch.server.gc.young.one}\\]\\[%{NUMBER:elasticsearch.server.gc.young.two}\\]%{SPACE}%{GREEDYMULTILINE:message}",
"LOG_HEADER": "\\[%{TIMESTAMP_ISO8601:elasticsearch.server.timestamp}\\]\\[%{LOGLEVEL:log.level}%{SPACE}?\\]\\[%{DATA:elasticsearch.server.component}%{SPACE}\\](%{SPACE})?(\\[%{DATA:elasticsearch.node.name}\\])?(%{SPACE})?"
},
"patterns": [
"\\[%{TIMESTAMP_ISO8601:elasticsearch.server.timestamp}\\]\\[%{LOGLEVEL:log.level}%{SPACE}?\\]\\[%{DATA:elasticsearch.server.component}%{SPACE}\\](%{SPACE})?(\\[%{DATA:elasticsearch.node.name}\\])?(%{SPACE})?(\\[gc\\](\\[young\\]\\[%{NUMBER:elasticsearch.server.gc.young.one}\\]\\[%{NUMBER:elasticsearch.server.gc.young.two}\\]|\\[%{NUMBER:elasticsearch.server.gc_overhead}\\]))?%{SPACE}((\\[%{INDEXNAME:elasticsearch.index.name}\\]|\\[%{INDEXNAME:elasticsearch.index.name}\\/%{DATA:elasticsearch.index.id}\\]))?%{SPACE}%{GREEDYMULTILINE:message}"
"%{LOG_HEADER}%{GC_ALL}",
"%{LOG_HEADER}%{GC_YOUNG}",
"%{LOG_HEADER}%{SPACE}((\\[%{INDEXNAME:elasticsearch.index.name}\\]|\\[%{INDEXNAME:elasticsearch.index.name}\\/%{DATA:elasticsearch.index.id}\\]))?%{SPACE}%{GREEDYMULTILINE:message}"
]
}
},
{
"script": {
"lang": "painless",
"source": "if (ctx.elasticsearch.server.gc != null && ctx.elasticsearch.server.gc.observation_duration != null) { if (ctx.elasticsearch.server.gc.observation_duration.unit == params.seconds_unit) { ctx.elasticsearch.server.gc.observation_duration.ms = ctx.elasticsearch.server.gc.observation_duration.time * params.ms_in_one_s;}if (ctx.elasticsearch.server.gc.observation_duration.unit == params.milliseconds_unit) { ctx.elasticsearch.server.gc.observation_duration.ms = ctx.elasticsearch.server.gc.observation_duration.time; } if (ctx.elasticsearch.server.gc.observation_duration.unit == params.minutes_unit) { ctx.elasticsearch.server.gc.observation_duration.ms = ctx.elasticsearch.server.gc.observation_duration.time * params.ms_in_one_m; }} if (ctx.elasticsearch.server.gc != null && ctx.elasticsearch.server.gc.collection_duration != null) { if (ctx.elasticsearch.server.gc.collection_duration.unit == params.seconds_unit) { ctx.elasticsearch.server.gc.collection_duration.ms = ctx.elasticsearch.server.gc.collection_duration.time * params.ms_in_one_s;} if (ctx.elasticsearch.server.gc.collection_duration.unit == params.milliseconds_unit) {ctx.elasticsearch.server.gc.collection_duration.ms = ctx.elasticsearch.server.gc.collection_duration.time; } if (ctx.elasticsearch.server.gc.collection_duration.unit == params.minutes_unit) { ctx.elasticsearch.server.gc.collection_duration.ms = ctx.elasticsearch.server.gc.collection_duration.time * params.ms_in_one_m; }}",
"params": {
"minutes_unit": "m",
"seconds_unit": "s",
"milliseconds_unit": "ms",
"ms_in_one_s": 1000,
"ms_in_one_m": 60000
}
}
},
{
"remove": {
"field": [
"elasticsearch.server.gc.collection_duration.time",
"elasticsearch.server.gc.collection_duration.unit",
"elasticsearch.server.gc.observation_duration.time",
"elasticsearch.server.gc.observation_duration.unit"
],
"ignore_missing": true
}
},
{
"rename": {
"field": "elasticsearch.server.timestamp",
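Taken together, the new processors work in two stages: the GC_ALL grok pattern captures the raw value/unit pairs (spent [1.6s] becomes collection_duration.time = 1.6 with collection_duration.unit = "s"), the Painless script then normalizes both durations to milliseconds (1.6 s × 1000 = 1600.0 ms), and the remove processor drops the temporary time/unit fields. One way to sanity-check this is the ingest pipeline simulate API; below is a minimal sketch using the GC overhead line from the test fixture further down, assuming the module pipeline has been loaded under the hypothetical ID filebeat-elasticsearch-server-pipeline (Filebeat derives the real ID from its version) and that later processors in the truncated part of the pipeline do not require other Filebeat-populated fields:

POST _ingest/pipeline/filebeat-elasticsearch-server-pipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "[2018-07-03T11:45:45,604][WARN ][o.e.m.j.JvmGcMonitorService] [srvmulpvlsk252_md] [gc][3449992] overhead, spent [1.6s] collecting in the last [1.8s]"
      }
    }
  ]
}

The simulated document should come back with elasticsearch.server.gc.overhead_seq: "3449992", collection_duration.ms: 1600.0, and observation_duration.ms: 1800.0, matching the updated expected output below.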
@@ -244,13 +244,15 @@
"@timestamp": "2018-07-03T11:45:45,604",
"elasticsearch.node.name": "srvmulpvlsk252_md",
"elasticsearch.server.component": "o.e.m.j.JvmGcMonitorService",
"elasticsearch.server.gc_overhead": "3449992",
"elasticsearch.server.gc.collection_duration.ms": 1600.0,
"elasticsearch.server.gc.observation_duration.ms": 1800.0,
"elasticsearch.server.gc.overhead_seq": "3449992",
"event.dataset": "elasticsearch.server",
"fileset.module": "elasticsearch",
"fileset.name": "server",
"input.type": "log",
"log.level": "WARN",
"message": "overhead, spent [1.6s] collecting in the last [1.8s]",
"message": "[2018-07-03T11:45:45,604][WARN ][o.e.m.j.JvmGcMonitorService] [srvmulpvlsk252_md] [gc][3449992] overhead, spent [1.6s] collecting in the last [1.8s]",
"offset": 10205,
"prospector.type": "log",
"service.name": "elasticsearch"
@@ -286,4 +288,4 @@
"prospector.type": "log",
"service.name": "elasticsearch"
}
]
]
