Creating index template for Kibana to configure index replicas #1323

Merged

Conversation

@ewolinetz (Contributor)

Addresses #1315

@ewolinetz requested a review from lukas-vlcek on August 24, 2018 18:20
@openshift-ci-robot added the size/S label (denotes a PR that changes 10-29 lines, ignoring generated files) on Aug 24, 2018
@richm (Contributor) commented Aug 24, 2018

/retest

1 similar comment
@ewolinetz (Contributor, Author)

/retest

"order": 0,
"settings": {
"index.number_of_replicas": $REPLICA_SHARDS,
"index.number_of_shards": PRIMARY_SHARDS
Review comment from a Contributor:

Missing $ ? ($PRIMARY_SHARDS)
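For reference, a minimal sketch of what seeding the corrected template looks like against the ES 5.x template API, with the $ variables expanded to concrete values; the template body matches the es_util output shown later in this thread, and the cert paths are illustrative:

# Seed the Kibana settings template by hand (values expanded; cert paths illustrative)
curl -s -XPUT \
  --cert /etc/elasticsearch/secret/admin-cert \
  --key /etc/elasticsearch/secret/admin-key \
  --cacert /etc/elasticsearch/secret/admin-ca \
  "https://localhost:9200/_template/common.settings.kibana.template.json" \
  -H 'Content-Type: application/json' \
  -d '{"order": 0, "template": ".kibana*", "settings": {"index.number_of_replicas": 0, "index.number_of_shards": 1}}'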

@lukas-vlcek (Contributor)

Left one comment. Apart from that, LGTM.

@ewolinetz force-pushed the kibana_index_template branch 2 times, most recently from 43d84bb to 01f9327, on August 27, 2018 16:18
@ewolinetz (Contributor, Author)

Squashed commits

@ewolinetz (Contributor, Author)

/hold
Looking at my local deployment, I don't see that Kibana has the correct replica count, even though our template is there with replicas set to 0:

$ oc exec -c elasticsearch logging-es-data-master-n95jnq3h-1-tvqg4 -- es_util --query=_template/common.settings.kibana.template.json

{"common.settings.kibana.template.json":{"order":0,"template":".kibana*","settings":{"index":{"number_of_shards":"1","number_of_replicas":"0"}},"mappings":{},"aliases":{}}}


health status index                                                           uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .kibana                                                         o0U2BGXbTt6U9LGLGaFJKw   1   1          1            0      3.2kb          3.2kb

@openshift-ci-robot added the do-not-merge/hold label (indicates that a PR should not merge because someone has issued a /hold command) on Aug 28, 2018
@ewolinetz (Contributor, Author)

@lukas-vlcek any thoughts? It appears that .kibana is still being created with 1 replica even though my index template says otherwise.

@lukas-vlcek (Contributor) commented Aug 28, 2018

@ewolinetz let's check how exactly Kibana creates its indices. Can we locate the particular line in the source code?
Also, if we can get the Elasticsearch log related to the .kibana index creation, it should explain how the index was created (which API was used and whether any index templates were applied; all of this info is in the ES log).
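A quick way to pull the relevant log lines would be something like this sketch, reusing the pod name from the es_util command earlier in this thread:

# Grep the ES container log for index-creation events
oc logs -c elasticsearch logging-es-data-master-n95jnq3h-1-tvqg4 | grep 'creating index'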

@ewolinetz (Contributor, Author)

@lukas-vlcek

From the ES logs:

...
[2018-08-28T21:58:44,153][WARN ][c.f.s.c.PrivilegesEvaluator] .kibana does not exist in cluster metadata
[2018-08-28T21:58:44,308][INFO ][o.e.c.m.MetaDataCreateIndexService] [logging-es-data-master-2kezn13a] [.kibana] creating index, cause [api], templates [], shards [1]/[1], mappings [_default_, index-pattern, server, visualization, search, timelion-sheet, config, dashboard, url]
Elasticsearch Version: 5.6.10
Search Guard Version: <unknown>
Reload config on all nodes
Auto-expand replicas disabled
/usr/share/elasticsearch/init
[2018-08-28 21:58:46,797][INFO ][container.run            ] Updating replica count to 0
/etc/elasticsearch /usr/share/elasticsearch/init
Search Guard Admin v5
Will connect to localhost:9300 ... done
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console. Set system property 'log4j2.debug' to show Log4j2 internal initialization logging.
[2018-08-28T21:58:49,674][WARN ][c.f.s.a.BackendRegistry  ] Authentication finally failed for null
Elasticsearch Version: 5.6.10
Search Guard Version: <unknown>
[2018-08-28T21:58:54,593][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [logging-es-data-master-2kezn13a] updating number_of_replicas to [0] for indices [.searchguard]
Reload config on all nodes
Update number of replicas to 0 with result: true
/usr/share/elasticsearch/init
[2018-08-28 21:58:55,136][INFO ][container.run            ] Setting filters to allocate shard to this node only
{"acknowledged":true}[2018-08-28 21:58:55,428][INFO ][container.run            ] Adding index templates
[2018-08-28 21:58:55,679][INFO ][container.run            ] Create index template 'com.redhat.viaq-openshift-operations.template.json'
{"acknowledged":true}[2018-08-28 21:58:56,560][INFO ][container.run            ] Create index template 'com.redhat.viaq-openshift-orphaned.template.json'
{"acknowledged":true}[2018-08-28 21:58:57,508][INFO ][container.run            ] Create index template 'com.redhat.viaq-openshift-project.template.json'
{"acknowledged":true}[2018-08-28 21:58:58,129][INFO ][container.run            ] Create index template 'common.settings.kibana.template.json'
{"acknowledged":true}[2018-08-28 21:58:58,830][INFO ][container.run            ] Create index template 'common.settings.operations.orphaned.json'
{"acknowledged":true}[2018-08-28 21:58:59,392][INFO ][container.run            ] Create index template 'common.settings.operations.template.json'
{"acknowledged":true}[2018-08-28 21:59:00,078][INFO ][container.run            ] Create index template 'common.settings.project.template.json'
{"acknowledged":true}[2018-08-28 21:59:00,858][INFO ][container.run            ] Create index template 'org.ovirt.viaq-collectd.template.json'
{"acknowledged":true}[2018-08-28 21:59:01,367][INFO ][container.run            ] Finished adding index templates
...

From /usr/share/kibana/src/core_plugins/elasticsearch/lib/create_kibana_index.js:

return callWithInternalUser('indices.create', {
    index: index,
    body: {
      settings: {
        number_of_shards: 1,
        'index.mapper.dynamic': false
      },
      mappings
    }
  }).catch(handleError('Unable to create Kibana index "<%= kibana.index %>"')).then(function () {
    return callWithInternalUser('cluster.health', {
      waitForStatus: 'yellow',
      index: index
    }).catch(handleError('Waiting for Kibana index "<%= kibana.index %>" to come online failed.'));
  });

@anpingli

@ewolinetz What will happen if we delete the .kibana index manually? Will the new .kibana index have replicas=0?
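A minimal sketch of that manual delete, run from inside the ES container with the admin certs (cert paths illustrative); Kibana should then recreate the index:

# Delete .kibana so it gets recreated (cert paths illustrative)
curl -s -XDELETE \
  --cert /etc/elasticsearch/secret/admin-cert \
  --key /etc/elasticsearch/secret/admin-key \
  --cacert /etc/elasticsearch/secret/admin-ca \
  "https://localhost:9200/.kibana"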

@lukas-vlcek (Contributor) commented Aug 29, 2018

@ewolinetz the .kibana index is created with shards [1]/[1] (1 primary, 1 replica shard) well before we put our index templates into the cluster. I do not know where the number of replicas is configured in this case; however, if we just want to change the number of replicas for the testing case, we can simply change it on the fly using the index settings update API.
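A sketch of that on-the-fly change via the index settings update API (cert paths illustrative):

# Drop the replica count of .kibana to 0 in place (cert paths illustrative)
curl -s -XPUT \
  --cert /etc/elasticsearch/secret/admin-cert \
  --key /etc/elasticsearch/secret/admin-key \
  --cacert /etc/elasticsearch/secret/admin-ca \
  "https://localhost:9200/.kibana/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index": {"number_of_replicas": 0}}'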

@ewolinetz (Contributor, Author)

@lukas-vlcek unfortunately, when I've tried that before, the request gets redirected to a kibana user index instead of changing the .kibana index.

@anpingli I'll try that and see. Hopefully it doesn't get redirected to a kibana user index as well...

@lukas-vlcek (Contributor)

@ewolinetz even when using admin certs?

@jcantrill (Contributor)

The multitenant code will attempt to modify anything that has Kibana in the request when a bearer token is provided. This is the source of a known issue where we are unable to delete index patterns. Maybe we should consider allowing admin users who provide a special header to have their request taken as-is, with no modifications. I would expect the admin certs to already behave this way, but maybe there is something we are modifying.

@lukas-vlcek (Contributor) commented Aug 29, 2018

FYI, the number_of_replicas: 1 value comes from the defaults. See https://github.com/elastic/elasticsearch/blob/master/docs/reference/indices/create-index.asciidoc#index-settings. It seems other people have run into the YELLOW cluster issue on single-node clusters too.

Maybe we can accept the YELLOW cluster state when running that particular test?
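If the test only needs the index to be usable rather than fully replicated, one way to accept YELLOW is to wait on the index health with wait_for_status=yellow instead of green, e.g. (cert paths illustrative):

# Wait for .kibana to reach at least YELLOW (primaries allocated)
curl -s \
  --cert /etc/elasticsearch/secret/admin-cert \
  --key /etc/elasticsearch/secret/admin-key \
  --cacert /etc/elasticsearch/secret/admin-ca \
  "https://localhost:9200/_cluster/health/.kibana?wait_for_status=yellow&timeout=30s"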

@ewolinetz force-pushed the kibana_index_template branch from 4cce24f to d1068b1 on August 29, 2018 16:05
@ewolinetz (Contributor, Author)

I can delete the indices, and then I see them recreated correctly with 0 replicas:

[2018-08-29T15:48:02,816][INFO ][o.e.c.m.MetaDataDeleteIndexService] [logging-es-data-master-aw89bch5] [.kibana/XDC_dKYWSTyWCDJ0_9wISw] deleting index
[2018-08-29T15:48:09,983][WARN ][c.f.s.c.PrivilegesEvaluator] .kibana does not exist in cluster metadata
[2018-08-29T15:48:09,990][INFO ][o.e.c.m.MetaDataCreateIndexService] [logging-es-data-master-aw89bch5] [.kibana] creating index, cause [api], templates [common.settings.kibana.template.json], shards [1]/[0], mappings [_default_, index-pattern, server, visualization, search, timelion-sheet, config, dashboard, url]
[2018-08-29T15:48:10,015][INFO ][o.e.c.r.a.AllocationService] [logging-es-data-master-aw89bch5] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana][0]] ...]).

@ewolinetz (Contributor, Author)

I think we are running into an issue where we are not ready (the index templates are still being seeded) and yet ES is somehow allowing traffic through.

@ewolinetz (Contributor, Author)

Confirmed that #1326 fixes the issue of the index being created before the index template is seeded.

@ewolinetz (Contributor, Author)

/hold

@ewolinetz force-pushed the kibana_index_template branch from 7e3b004 to 724762e on August 30, 2018 19:08
@openshift-ci-robot added the size/XS label (denotes a PR that changes 0-9 lines, ignoring generated files) and removed the size/S label on Aug 30, 2018
@ewolinetz (Contributor, Author)

/hold cancel

@openshift-ci-robot removed the do-not-merge/hold label (indicates that a PR should not merge because someone has issued a /hold command) on Aug 30, 2018
@jcantrill (Contributor) left a review:

/lgtm

@openshift-ci-robot added the lgtm label (indicates that a PR is ready to be merged) on Aug 30, 2018
@openshift-merge-robot merged commit fb43723 into openshift:master on Aug 30, 2018
@ewolinetz deleted the kibana_index_template branch on July 10, 2019 17:01
Labels: lgtm, release/3.11, size/XS